During this Committee’s hearings on S. 981, one of the witnesses indicated that Congress should determine the effectiveness of previously enacted regulatory reforms before enacting additional reforms. Perhaps the most directly relevant of those reforms to S. 746 is title II of the Unfunded Mandates Reform Act of 1995 (UMRA), which requires that agencies take a number of analytical and procedural steps during the rulemaking process. We examined the implementation of UMRA during its first 2 years of operation and, for several reasons, concluded that it had little effect on agencies’ rulemaking actions. First, the act’s cost-benefit requirement did not apply to many of the rulemaking actions that were considered “economically significant” actions under Executive Order 12866 (78 out of 110 issued in the 2-year period). Second, UMRA gave agencies discretion not to take certain actions if they determined that those actions were duplicative or unfeasible. For example, subsection 202(a)(3) of the act requires agencies to estimate future compliance costs and any disproportionate budgetary effects of the actions “if and to the extent that the agency determines that accurate estimates are reasonably feasible.” Third, UMRA requires agencies to take actions that they were already required to take. For example, the act required agencies to conduct cost-benefit analyses for all covered rules, but Executive Order 12866 had required such analyses for more than a year before UMRA was enacted and for a broader set of rules than UMRA covered. Like UMRA, S. 746 contains some of the same requirements found in Executive Order 12866 and in previous legislation. However, the requirements in the bill also differ from existing requirements in many respects. For example, S. 746 would address a number of topics that are not addressed by either UMRA or the executive order, including risk assessments and peer review. 
These requirements could have the effect of improving the quality of the cost-benefit analyses that agencies are currently required to perform. Also, S. 746 applies to rules issued by independent regulatory agencies, which are not covered by Executive Order 12866. The two definitions of covered rules differ in an important way. Executive Order 12866 defines an economically significant rule as one likely to have “an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities.” S. 746, in contrast, defines a major rule as one that “(A) the agency proposing the rule or the Director (of the Office of Management and Budget) reasonably determines is likely to have an annual effect on the economy of $100,000,000 or more in reasonably quantifiable costs; or (B) is otherwise designated a major rule by the Director on the ground that the rule is likely to adversely affect, in a material way, the economy, a sector of the economy, including small business, productivity, competition, jobs, the environment, public health or safety, or State, local or tribal governments, or communities.” Therefore, a rule that is economically significant under Executive Order 12866 because it is likely to have more than $100 million in benefits (but perhaps only $90 million in costs) would not be covered by the analytical requirements in S. 746 (unless designated by the Director). Also, the bill does not cover a rule if the agency determines that it imposes $90 million in costs plus other costs that are not “reasonably quantifiable.” If the intent of the bill is not to exclude these kinds of rules covered by the executive order, the definition of a major rule in subsection 621(7)(A) could be amended to eliminate the words “in reasonably quantifiable costs.” The centerpiece of S. 746 is its emphasis on cost-benefit analysis for major rules. The bill establishes detailed procedures for preparing those analyses and using them in the rulemaking process. 
Therefore, it is important to understand how agencies are currently preparing cost-benefit analyses. Mr. Chairman, in a 1998 report prepared at your and Senator Glenn’s request, we examined 20 cost-benefit analyses at 5 agencies to determine the extent to which those analyses contain the “best practices” elements recommended in the Office of Management and Budget’s (OMB) January 1996 guidance for conducting cost-benefit analyses. We concluded that some of these 20 analyses did not incorporate OMB’s best practices. For example, the guidance states that the cost-benefit analysis should show that the agency has considered the most important alternative approaches to the problem addressed by the proposed regulatory action. However, 5 of the 20 analyses that we examined did not discuss any alternatives to the proposed action, and some of the studies that discussed alternatives did so in a limited fashion. For example, the Food and Drug Administration’s (FDA) regulation on adolescents’ use of tobacco examined six regulatory alternatives but contained only a few paragraphs on the five that were ultimately rejected. A more thorough discussion of the alternatives that FDA considered would have better enabled the public to understand why the agency chose the proposed action. Six of the cost-benefit studies did not assign dollar values to benefits, and only six analyses specifically identified net benefits (benefits remaining after costs have been accounted for)—a key element in OMB’s guidance. Executive Order 12866, on which OMB’s guidance is based, emphasizes that agencies should select approaches that maximize net benefits unless a statute requires another regulatory approach. In addition, key assumptions were not identified or explained in 8 of the analyses. For example, one analysis assumed a value of life that ranged from $1.6 million to $8.5 million, while another analysis that was prepared in the same year assumed a value of life that ranged from $3 million to $12 million. 
In neither case did the analysis clearly explain why the values were chosen. Eight of the 20 cost-benefit analyses that we examined in our 1998 report did not include an executive summary that could help Congress, decisionmakers, the public, and other users quickly identify key information addressed in the analyses. In our 1997 report, 10 of the 23 analyses supporting air quality regulations did not have executive summaries. We have previously recommended that agencies’ cost-benefit analyses contain such summaries whenever possible, identifying (1) all benefits and costs, (2) the range of uncertainties associated with the benefits and costs, and (3) a comparison of all feasible alternatives. S. 746 addresses many of these areas of concern. For example, when an agency publishes a notice of proposed rulemaking (NPRM) for a major rule, section 623 of the bill would require agencies to prepare and place in the rulemaking file an initial regulatory analysis containing an analysis of the benefits and costs of the proposed rule and an evaluation of the benefits and costs of a reasonable number of alternatives. Section 623 also requires an evaluation of the relationship of the benefits of the proposed rule to its costs, including whether the rule is likely to substantially achieve the rulemaking objective in a more cost-effective manner or with greater net benefits than other reasonable alternatives. Finally, it requires agencies to include an executive summary in the regulatory analysis that describes, among other things, the key assumptions and scientific or economic information upon which the agency relied. When key assumptions vary from one analysis to another, the agencies should explain those variations. If the bill is enacted, Congress may want to review the implementation of this part of S. 746 to ensure that the initial regulatory analysis requirements apply to all of the rules that it anticipated. 
As I previously noted, the bill’s analytical requirements apply to all major rules at the time they are published as an NPRM. The Administrative Procedure Act of 1946 (APA) permits agencies to issue final rules without NPRMs when they find, for “good cause,” that the procedures are impracticable, unnecessary, or contrary to the public interest. When agencies use this exception, the APA requires the agencies to explicitly say so and provide an explanation for the exception’s use when the rule is published in the Federal Register. In a report we issued last April, we pointed out that 23 of the 122 final rules that were considered “major” under the Small Business Regulatory Enforcement Fairness Act and published between March 29, 1996, and March 29, 1998, were issued without a previous NPRM. If the same proportion holds true for the major rules covered by S. 746, the initial analytical requirements in the bill would not apply to nearly one-fifth of all final major rules. We also examined the issuance of final rules without NPRMs in another report that we issued last year. In some of the actions that we reviewed, agencies’ stated rationales for using the good cause exception were not clear or understandable. For example, in one such action, the agencies said in the preamble to the final rule that a 1993 executive order that imposed a 1994 deadline for implementation and incorporation of its policies into regulations prevented the agencies from obtaining public comments before issuing a final rule in 1995. In other actions, the agencies made only broad assertions in the preambles to the rules that an NPRM would delay the issuance of rules that were, in some general sense, in the public interest. We recognize that agencies may need to use the good cause exception in appropriate situations. Similarly, we believe that using the issuance of NPRMs as the trigger for analytical requirements may be entirely appropriate. However, as a result, some major rules will probably not be subject to these requirements. S. 
746 also requires agencies to provide for an independent peer review of any required risk assessments and cost-benefit analyses of major rules that the agencies or the OMB Director reasonably anticipate are likely to have a $500 million effect on the economy. Peer review is the critical evaluation of scientific and technical work products by independent experts. The bill states that the peer reviews should be conducted through panels that are “broadly representative” and involve participants with relevant expertise who are “independent of the agency.” We believe that important economic analyses should be peer reviewed. Given the uncertainties associated with predicting the future economic impacts of various regulatory alternatives, the rigorous, independent review of economic analyses should help enhance the quality, credibility, and acceptability of agencies’ decisionmaking. In our 1998 study of agencies’ cost-benefit analysis methods that I mentioned previously, only 1 of the 20 analyses that we examined received an independent peer review. Of the five agencies whose analyses we examined, only EPA had a formal peer review policy in place. Although OMB does not require peer reviews, the Administrator of OMB’s Office of Information and Regulatory Affairs (OIRA) testified in September 1997 that the administration supports peer review. However, she also said that the administration realizes that peer review is not cost-free in terms of agencies’ resources or time. The peer review requirements in S. 746 provide agencies with substantial flexibility. If an agency head certifies that adequate peer review has already been conducted, and the OMB Director concurs, the bill requires no further peer review. However, agencies will need to carefully plan for such reviews given the bill’s requirement that they be done for all risk assessments and each cost-benefit analysis for which the associated rule is expected to have a $500 million effect on the economy. 
Agencies will also need to ensure that a broad range of affected parties are represented on the panels and (as S. 746 requires) that panel reports reflect the diversity of opinions that exist. Mr. Chairman, last year we issued a report which you and Senator Glenn requested, assessing the implementation of the regulatory review transparency requirements in Executive Order 12866. Those requirements are similar to the public disclosure requirements in S. 746 in that they require agencies to identify for the public the substantive changes made during the period that the rules are being reviewed by OIRA, as well as changes made at the suggestion or recommendation of OIRA. We reviewed four major rulemaking agencies’ public dockets and concluded that it was usually very difficult to locate the documentation that the executive order required. In many cases, the dockets contained some evidence of changes made during or because of OIRA’s review, but we could not be sure that all such changes had been documented. In other cases, the files contained no evidence of OIRA changes, and we could not tell if that meant that there had been no such changes to the rules or whether the changes were just not documented. Also, the information in the dockets for some of the rules was quite voluminous, and many did not have indexes to help the public find the required documents. Therefore, we recommended that the OIRA Administrator issue guidance on how to implement the executive order’s transparency requirements. The OIRA Administrator’s comments in reaction to our recommendation appeared at odds with the requirements and intent of the executive order. Her comments may also signal a need for ongoing congressional oversight and, in some cases, greater specificity as Congress codifies agencies’ public disclosure responsibilities and OIRA’s role in the regulatory review process. 
For example, in response to our recommendation that OIRA issue guidance to agencies on how to improve the accessibility of rulemaking dockets, the Administrator said “it is not the role of OMB to advise other agencies on general matters of administrative practice.” The OIRA Administrator also indicated that she believed the executive order did not require agencies to document changes made at OIRA’s suggestion before a rule is formally submitted to OIRA for formal review. However, the Administrator also said that OIRA can become deeply involved in important agency rules well before they are submitted to OIRA. Therefore, adherence to her interpretation of the order would result in agencies’ failing to document OIRA’s early role in the rulemaking process. Those transparency requirements were put in place because of earlier congressional concerns regarding how rules were changed during the regulatory review process. Finally, the OIRA Administrator said that an “interested individual” could identify changes made to a draft rule by comparing drafts of the rule. This position seems to change the focus of responsibility in Executive Order 12866. The order requires agencies to identify for the public changes made to draft rules. It does not place the responsibility on the public to identify changes made to agency rules. Also, comparison of a draft rule submitted for review with the draft on which OIRA concluded review would not indicate which of the changes were made at OIRA’s suggestion—a specific requirement of the order. We believe that enactment of the public disclosure requirements in S. 746 would provide a statutory foundation to help ensure the public’s access to regulatory review information. In particular, the bill’s requirement that these rule changes be described in a single document would make it easier for the public to understand how rules change during the review process. We are also pleased to see that S. 
746 requires agencies to document when no changes were made to the rules. Additional refinements to the bill may help clarify agencies’ responsibilities in light of the OIRA Administrator’s comments responding to our report. For example, S. 746 could state more specifically that agencies must document the changes made to rules at the suggestion or recommendation of OIRA whenever they occur, not just the changes made during the period of OIRA’s formal review. Similarly, if Congress wants OIRA to issue guidance on how agencies can structure rulemaking dockets to facilitate public access, S. 746 may need to specifically instruct the agencies to do so. S. 746 contains a number of provisions designed to improve regulatory management. These provisions strive to make the regulatory process more intelligible and accessible to the public, more effective, and better managed. Passage of S. 746 would provide a statutory foundation for such principles as openness, accountability, and sound science in rulemaking. This Committee has been diligent in its oversight of the federal regulatory process. However, our reviews of current regulatory requirements suggest that, even if S. 746 is enacted into law, congressional oversight will continue to be important to ensure that the principles embodied in the bill are faithfully implemented. 
Pursuant to a congressional request, GAO discussed S. 746, the Regulatory Improvement Act of 1999, focusing on: (1) the effectiveness of previous regulatory reform initiatives; (2) agencies' cost-benefit analysis practices and the trigger for the analytical requirements; (3) peer review of agencies' regulatory analyses; and (4) the transparency of the regulatory development and review process. GAO noted that: (1) GAO examined the implementation of the Unfunded Mandates Reform Act (UMRA) during its first 2 years of operation and, for several reasons, concluded that it had little effect on agencies' rulemaking actions; (2) the act's cost-benefit requirement did not apply to many of the rulemaking actions considered economically significant under Executive Order 12866; (3) UMRA gave agencies discretion not to take certain actions if they determined that those actions were duplicative or unfeasible; (4) UMRA requires agencies to take actions that they were already required to take; (5) the centerpiece of S. 746 is its emphasis on cost-benefit analysis for major rules; (6) in 1998, GAO examined 20 cost-benefit analyses at 5 agencies to determine the extent to which those analyses contain the best practices elements recommended in the Office of Management and Budget's (OMB) guidance for conducting cost-benefit analysis; (7) GAO concluded that some of these 20 analyses did not incorporate OMB's best practices; (8) 6 of the cost-benefit studies did not assign dollar values to benefits, and only 6 analyses specifically identified net benefits; (9) 8 of the 20 cost-benefit analyses that GAO examined did not include an executive summary that could help Congress, decisionmakers, the public, and other users quickly identify key information addressed in the analyses; (10) S. 746 addresses many of these areas of concern; (11) enactment of the analytical, transparency, and executive summary requirements in S. 
746 would extend and underscore Congress' previous statutory requirements that agencies identify how regulatory decisions are made; (12) S. 746 also requires agencies to provide for an independent peer review of any required risk assessments and cost-benefit analyses of major rules that the agencies or the OMB Director reasonably anticipate are likely to have a $500 million effect on the economy; (13) GAO believes that important economic analyses should be peer reviewed; (14) given the uncertainties associated with predicting the future economic impacts of various regulatory alternatives, the rigorous, independent review of economic analyses should help enhance the quality, credibility, and acceptability of agencies' decisionmaking; (15) GAO believes that enactment of the public disclosure requirements in S. 746 would provide a statutory foundation to help ensure the public's access to regulatory review information; and (16) in particular, the bill's requirement that rule changes be described in a single document would make it easier for the public to understand how rules change during the review process.
Since 1930—when there was virtually no public or private health insurance—VHA’s health care system has evolved into a direct delivery system, with government ownership and operation of facilities. However, of the 26 million veterans who are eligible for care, about half live more than 25 miles from a VHA hospital and about one-third live more than 25 miles from a VHA clinic. Of the approximately 3.4 million veterans VHA currently serves, we estimate that about 1 million travel more than 25 miles to access VHA primary care from a VHA hospital or clinic. In addition, many eligible veterans who are not currently receiving care say that they do not use VHA primary care services because they live too far from a VHA facility. In the early 1990s, VHA began developing a strategy to expand its ability to provide primary care, especially for veterans who had to travel many miles to receive care from existing facilities. In January 1994, the VHA hospital in Amarillo, Texas—now a part of the Southwest Network—established what is commonly recognized as the first VHA community-based clinic. Until the establishment of the Amarillo clinic, VHA had required its hospitals to meet rigid criteria to establish separate outpatient facilities apart from hospitals. These criteria included that clinics had to serve a projected workload of 3,000 visits or more per year and be located at least 100 miles or 3 hours travel time away from the nearest VHA facility. Subsequently, VHA encouraged its hospitals to consider establishing community-based clinics similar to Amarillo’s. In doing this, VHA eliminated its restrictions concerning workload and location. It also encouraged hospitals to consider contracting with other providers when it was in the interest of the veteran and the hospital. In late 1995, VHA reorganized its field operations into 22 Veterans Integrated Service Networks (VISN). (See fig. 1.) 
These networks are the basic budgetary and decisionmaking units of VHA’s health care system. Networks have responsibility for making a wide range of decisions about care delivery options, including planning and establishing community-based outpatient clinics. In January and February 1995, VHA approved 15 proposals for new community-based clinics. Although networks submitted many more proposals, VHA did not approve any additional clinics until October 1996. Since the first community-based outpatient clinic was established by VHA’s Amarillo Medical Center in 1994, VHA has approved a total of 198 community-based clinics. VHA issued its initial set of guidelines for community-based clinics in February 1995. In essence, these guidelines gave networks wide discretion to establish community-based clinics wherever they deemed appropriate to better serve veterans. Networks were required, however, to submit a brief summary, called a “white paper,” for each planned clinic to VHA for review. These summaries were to describe certain key operational elements, such as target population, service availability, and cost. VHA revised its guidelines in August 1996 to require networks to establish clinics primarily for current users who live more than 30 minutes from an existing VHA clinic. VHA separately required networks to develop annual business plans that, among other things, are to include information on the number of community-based clinics to be established and projected time frames. In addition, VHA provided guidance to networks for developing proposals—which were to provide more details than the original white papers—and implemented a process to help networks develop more consistent and thorough proposals, in accordance with VHA’s guidelines. In our 1996 testimony and report, we concluded that VHA had not adequately defined target veteran populations for new community-based clinics or reasonable travel goals for use in locating new clinics. 
Given VHA’s limited resources, we expressed concern about the propriety of using clinics to provide convenient geographic access for new users while current users continue to experience inconvenient access. We recommended that VHA state which veterans were to be the primary group served by these new clinics and that it establish a travel time or distance standard when planning for new clinics. In response, VHA instructed networks to establish clinics primarily to provide more convenient access to care for current users. Toward this end, VHA stated that it is desirable that a community-based clinic be located generally within 30 minutes’ travel time from a veteran’s home. VHA noted, however, that differences in veterans’ medical conditions and other regional factors may affect veterans’ access to VHA care. As a result, VHA also included several exceptions to the 30-minute travel standard, including traffic congestion, weather conditions, and overcrowding at existing VHA facilities. In our prior work, we concluded that networks were not planning new community-based clinics on a strategic basis and that an overall plan was not available to permit an assessment of network activities from a systemwide perspective. Essentially, networks submitted proposals for individual clinics to headquarters on an ad hoc basis, and headquarters considered the proposals on their individual merit. We expressed concern that this approach would make it difficult, if not impossible, to assess networks’ planning efforts individually or systemwide. As part of its overall restructuring efforts, VHA requires networks to develop annual business plans that are to show how a network intends to spend its resources. In response to our concern, VHA instructed networks to include, as part of their business plans, the number of clinics to be established, time frames for establishing clinics, and locations of planned clinics. 
VHA stated its intent to consolidate the 22 network plans into a national business plan, which would permit an assessment of network activities from a systemwide perspective. With the networks’ 1997 and 1998 business plans completed—and their 1999 business plans to be completed over the next 3 months—the evolving nature of clinic planning activities can be seen. In their 1997 business plans, the 22 networks reported their intent to establish 211 clinics by the year 2001. As networks gained experience in planning and operating clinics, the number of clinics to be established grew significantly. Networks’ 1998 business plans show that 402 additional clinics are to be established by 2002, although a target year had not been selected for 93 of these clinics. (See table 1.) We surveyed networks about 2 months after their business plans were submitted and found that they had since decided to establish most of the 93 clinics in years 1999 through 2002. The 22 business plans are intended to provide estimates of the number of clinics networks plan to propose and projected time frames for when the clinics will become operational. Collectively, the estimates and projections have been fairly reliable. For example, of the 124 clinics planned for in the networks’ 1997 business plans, 122 were actually proposed. However, only four networks proposed the same number of clinics as they had indicated in their business plans. Of the remaining 18 networks, 10 proposed a total of 29 more clinics than they had planned and 8 proposed 31 fewer. Still, the majority of networks that proposed more or fewer clinics than they had stated in their business plans were within three clinics of their estimates. In April 1997, VHA consolidated the 22 network plans into its national business plan. The consolidated plan summarizes the number of community-based clinics that networks have established and plan to establish. 
However, the plan does not provide sufficient information to assess the impact of community-based clinics on veterans’ access to care on a systemwide basis. In July 1997, VHA was directed by the Senate Appropriations Committee to address the need for a national plan and to respond to the findings and recommendations in our October 1996 report. In its report to the Committee, submitted in April 1998, VHA stated that its response to our October 1996 report remained essentially unchanged from its original comments summarized in our report. In other words, VHA believes that its April 1997 national plan—based on the total number of clinics contained in the 22 individual business plans—is responsive to our concerns. At the time of our earlier report, we did agree that VHA’s intended national business plan could provide a means to achieve the intent of our recommendations. However, it was not known at the time whether the plan would ultimately provide sufficient detail to afford the Congress enough information to determine the overall extent and cost of establishing community-based clinics. Now that we have had the opportunity to review networks’ 1997 and 1998 business plans, VHA’s national plan, and VHA’s report to the Senate, our concerns remain. Contrary to VHA’s contention that the sum of the clinics contained in 22 individual network business plans can serve as an adequate national plan for establishing community-based clinics, we believe the national plan contains less information than do the individual business plans upon which it is based. If equity of access cannot be determined from the individual network business plans, it is, therefore, also impossible to make that assessment with VHA’s national business plan. 
When planning a new clinic, networks must describe, as required by VHA’s 1995 guidelines, justification for the clinic, service delivery options, the targeted veteran population and anticipated workload, services to be provided, an implementation plan, and stakeholder comments. While networks were required to involve stakeholders, such as veterans, in the development of clinic proposals, the guidelines afforded networks considerable discretion in deciding how to describe these elements and present their results in proposal documents. To obtain greater consistency among proposals for community-based clinics, VHA provided in August 1996 additional guidance on determining how key elements apply to the needs of the veterans who would be served by the proposed clinics and the network’s ability to fund such clinics. The guidance also provided a standard format for presenting their planning assessment results. Along with the new proposal guidelines, VHA also implemented a new management process to help ensure more thorough and consistent oversight of network proposals. VHA established a task force to assist networks in developing their proposals and to serve as a resource to both network and VHA management. The task force was responsible for ensuring that the information contained in the proposals was complete, accurate, and met VHA requirements. In doing its work, the task force also trained and developed network staff in the skills of preparing clinic proposals. As part of its work, it prepared and distributed guidelines that contained a standardized proposal format with examples and sample wording network planners could use to develop their own proposals. The task force reviewed proposals for 178 clinics and determined that each met VHA’s guidelines. The task force was disbanded in February 1998 and its duties transferred to VHA’s Network Office in headquarters. 
Our assessment of the 133 proposals reviewed by the task force during fiscal years 1996 and 1997 shows they are designed to serve primarily current users, as VHA guidelines suggest. Of the 272,000 veterans expected to use the new clinics, about 17 percent are estimated to be new VHA users, with individual clinic estimates ranging from 0 percent to 62 percent. Clinics are to be operated in accordance with options contained in VHA’s clinic guidance. Of the 133 proposed clinics we reviewed, 77 will be operated by VHA, 53 by contractual arrangement with other health care providers, 2 by combined VHA-contractor arrangement, and 1 by the Department of Defense. On average, VHA-operated clinics plan to serve more veterans than will non-VHA-operated clinics (2,400 versus 1,800). Distances between community-based clinics and VHA hospitals range from 2 to 250 miles. Twenty-one community-based clinics are located 25 miles or less from a VHA hospital, to reduce overcrowding in existing facilities or to help veterans avoid traffic-congested areas, and 112 are located more than 25 miles from a VHA hospital. This geographic distribution of VHA health care facilities meets VHA standards. In our April 1996 testimony and October 1996 report, we expressed long-standing concerns about inequities in veterans’ access to VHA care. We concluded that, given VHA’s limited resources, networks should focus on improving geographic access for current users in a manner that ensures that a comparable percentage of users in each network has reasonable access as defined by VHA’s travel standards. VHA agreed with the need to minimize inequities in access among networks but preferred to encourage such outcomes without mandating national standards for equity of access. As stated in VA’s fiscal year 1999 performance plan, VHA has established a goal of increasing the number of community-based clinics as part of its efforts to implement the Government Performance and Results Act of 1993 (GPRA).
This goal, however, focuses on outputs—the number of clinics—rather than on the desired outcome of increasing the percentage of current users having reasonable geographic access to primary care. As a result, networks’ planning efforts focus on the number of community-based clinics to be established and do not address the extent to which new clinics will achieve equity of access for current users among networks or enroll new users in accordance with statutory priorities. Moreover, VHA has not tried to measure networks’ progress in planning community-based clinics to achieve these outcomes. Consequently, we remain concerned about how effectively these clinics are being used to equalize veterans’ access to VHA primary care within and among networks. Networks do not present information on how the 402 clinics included in business plans or the 198 approved clinic proposals will reduce access inequities for current users within or among networks. Moreover, network officials told us that they do not collect, on a networkwide basis, the information needed to determine the number of current users who have reasonable access or the number who do not. As a result, data are not available on the magnitude of access inequities or the impact of networks’ planned clinics on reducing such inequities. To demonstrate how access inequities could be measured and a results-oriented performance goal established, we asked networks to estimate the percentage of current users who met VHA’s 30-minute travel standard and thus had reasonable access in 1997, and the percentage who will have reasonable access by 2002 if new clinics are established as planned. Of the 22 networks, 14 provided estimates to us. These 14 networks account for nearly two-thirds of the clinics VHA has approved to date and nearly three-quarters of the clinics planned to be established by 2002.
Our analysis of the 14 networks’ estimates shows that accessibility among networks currently varies widely and that inequities are likely to remain for many years. Networks’ estimates suggest that their levels of access differed significantly when they started establishing community-based clinics, and these differences remain largely unchanged today. Our assessment of the networks’ estimates shows that, on average, about 53 percent of the 14 networks’ total users resided within 30 minutes of one of their primary care facilities in 1995. The 14 networks estimate that 63 percent of users resided within 30 minutes in 1997, with this increase attributable primarily to the new clinics. Despite these improvements, the variability in the percentage of veterans having reasonable access in the 14 networks remains large. (See table 2.) The 14 networks’ estimates show that, on average, about 85 percent of current users are expected to have reasonable access by 2002. This is attributable primarily to the additional clinics that the networks plan to establish over the next 5 years. If established as planned, these clinics could significantly reduce the variability in access among networks while greatly raising accessibility levels within networks. (See table 3.) Overall, the 14 networks expect to provide reasonable access for 36 percent more current users in 2002 than in 1997. Most of these networks, however, are increasing access at widely varying rates. For example, four networks estimate that they will provide reasonable access to 50 percent more current users in 2002 than in 1997. Networks’ estimates, however, suggest that it will take several years beyond 2002 for the least accessible networks to achieve equity with the most accessible networks. For example, 5 of the 14 estimate that their accessibility levels will be below the estimated network average of 85 percent in 2002. (See table 4.)
We estimate that the five networks could provide reasonable access for 85 percent of users between 2003 and 2008 if they continue to establish clinics at their current 1997 to 2002 rates. To achieve 85-percent accessibility, these five networks would have to increase the number of new clinics established over the next 5 years from the 119 currently planned to 178—an average of approximately 12 additional clinics per network. Nine of the 14 networks estimate that less than 90 percent of current users will have reasonable access by 2002. We estimate that these nine could achieve a 90-percent accessibility level between 2003 and 2011 if they continued establishing clinics at their current rates. To achieve 90-percent accessibility, these nine networks would have to increase the number of new clinics established over the next 5 years from the 199 currently planned to 312—an average of approximately 13 additional clinics per network. By law and under VA regulations, veterans are accorded different priorities for enrollment and care based on several factors. Generally, veterans with service-connected disabilities have the highest priority, followed by lower income veterans, and then higher income veterans. While VHA has directed networks to establish new clinics to improve access for current users who have been “historically underserved,” VHA does not specify who these veterans are or how priority applies to such veterans. Our assessment of network business plans and proposals for the 133 clinics suggests that the result of network planning will be to improve access for thousands of lower priority new users in 1998 and 1999, while thousands of higher priority current users may wait until 2000 or beyond for improved access. To date, networks have generally defined historically underserved veterans to be those traveling greater than 30 minutes to a VHA primary facility, regardless of whether they currently receive care in a VHA facility. 
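The per-network averages in the estimates above follow from simple division. As a rough check of the report's rounded figures (an illustrative sketch, not part of GAO's methodology):

```python
# Rough check of the report's per-network estimates (all figures rounded).
# Five networks below 85-percent accessibility:
planned_5, needed_5 = 119, 178                 # clinics planned vs. clinics needed
extra_per_network_5 = (needed_5 - planned_5) / 5
print(round(extra_per_network_5))              # approximately 12 additional clinics per network

# Nine networks below 90-percent accessibility:
planned_9, needed_9 = 199, 312
extra_per_network_9 = (needed_9 - planned_9) / 9
print(round(extra_per_network_9))              # approximately 13 additional clinics per network
```

The small differences between these quotients and the report's "approximately 12" and "approximately 13" reflect rounding of the underlying estimates.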
Because networks seldom consider the statutory priorities when they plan clinics, data are not available to show whether networks’ plans will improve access for high-priority veterans first. Business plans provide no information on the target populations to be served, and only 18 of the proposals for the 133 clinics we examined considered service-connected disabilities when differentiating among the current and future users to be served. This approach assumes that veterans with varying priorities and conditions are evenly distributed geographically throughout each network. Networks are establishing new clinics over a 5-year period, in large part because of the limited resources available. VHA requires networks to establish clinics with existing resources, and most networks are implementing efficiency initiatives as a primary means to generate the resources needed for new clinics. To date, networks have budgeted about $85 million to establish 178 clinics, or about $258 per veteran served. Networks may spend $190 million to establish the 402 clinics planned for the next 5 years if their cost per veteran continues to average $258. Networks included a description of their evaluation plans in their clinic proposals, as VHA guidelines require. The actual evaluation plans vary widely, and some are still being developed. In addition, few have been implemented, primarily because most clinics have operated for less than 6 months. VHA obtains information on clinic performance as needed rather than periodically receiving network evaluation results on a systematic basis. All networks included a description of their plans to evaluate their clinics’ performance in their clinic proposals, as VHA requires. Our analysis of the proposals for the 133 community-based clinics approved as of November 1997 shows that evaluation plans were broadly defined and that items to be evaluated were described in general terms.
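The cost figures cited above ($85 million budgeted for 178 clinics, about $258 per veteran served, and a projected $190 million for 402 clinics) are mutually consistent; a rough arithmetic sketch of the extrapolation, using the report's rounded figures:

```python
# Rough consistency check of the report's clinic cost figures (all rounded).
budgeted = 85_000_000            # dollars budgeted for the first 178 clinics
clinics_to_date = 178
cost_per_veteran = 258           # average dollars per veteran served

# Implied number of veterans served by the budgeted clinics
veterans_served = budgeted / cost_per_veteran
print(round(veterans_served))    # roughly 330,000 veterans

# Projected cost of all 402 planned clinics at the same average cost per clinic
projected = budgeted / clinics_to_date * 402
print(round(projected / 1e6))    # roughly $192 million, near the $190 million cited
```

The gap between the computed $192 million and the $190 million in the report is attributable to rounding in the published figures.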
Proposals rarely contained an explanation of exactly what would be measured, how it would be measured, the frequency of measurement, who would conduct the evaluation, or how the results of an evaluation would be used and by whom. VHA’s August 1996 guidelines added a requirement that networks develop evaluation plans for each new clinic proposed. VHA gave networks wide discretion in how evaluations are to be conducted and how results are to be used. In essence, VHA directed networks to evaluate how clinics are achieving their purposes, overall goals, and objectives. Each network is to coordinate evaluation efforts among clinics to ensure that “the same minimal criteria” are evaluated throughout the network. Networks are to define “specific performance measures” for assessing their clinics’ effectiveness. Toward this end, VHA’s guidance identified a number of key indicators that networks can use to measure their clinics’ operational effectiveness. These include reduced beneficiary travel expenditures (by having patients travel to nearby clinics rather than compensating them for traveling greater distances to a medical center), shortened waiting times (by scheduling appointments at clinics that serve fewer clients), and reduced fee-basis care (by serving veterans at VHA-operated or VHA-funded clinics rather than sending them to private providers). VHA also issued guidance to help networks develop evaluations. This guidance defines a program evaluation as a method used to provide specific information about a clinical or administrative initiative’s activities, outcomes, costs, and effectiveness in meeting its goals. It further explains that new programs should build in monitoring systems for capturing near-term and long-term data to provide information about how well the program is meeting its goals and that a deliberately planned and executed program evaluation is most likely to be useful to managers.
Evaluations should be ongoing in order to provide managers with information they can use to adjust or fundamentally change the structure and processes of a program to improve its outcomes. Policymakers, managers, and clinicians alike use program evaluation as a tool to assist them in making informed decisions on the objectives, implementation, and progress of their programs. To understand how networks implemented the broadly described evaluation plans contained in their proposals, we included several questions about those plans in our network survey. First, we asked whether networks were using a standard networkwide evaluation, a clinic-specific evaluation, or some other evaluation plan. Three indicated they were using a networkwide evaluation; 11 indicated they would use a clinic-specific evaluation; and 7 indicated they were still developing their evaluation plans, would use some other plan—such as a product-line approach—or would establish a task force to develop an evaluation plan. One network did not answer the question. Second, we asked the 18 networks that said they would conduct either a clinic-specific evaluation or some other plan whether there was a common set of minimal criteria that would be evaluated throughout the network for community-based clinics, as VHA requires. Five networks reported that they did not have a common set of criteria. Of the five, two did not clarify further. Of the remaining three, one reported that it collected data—but not on a regular basis—and that it intended to develop a core set of data items; another reported that evaluations are the responsibility of each clinic’s parent medical center, which can develop its own criteria; and the third reported that a clinical practice council would develop and perform evaluations of its community-based clinics.
Our assessment of the evaluations performed to date shows that clinic evaluations do not adequately address VHA’s intent that clinics be evaluated to show how they are achieving the network’s purposes, goals, and objectives. Nor do the evaluations include specific performance measures that can be used to manage clinics or assess their effectiveness. As of November 1997, only 6 of the 22 networks reported completing clinic evaluations—20 in all. This is because most clinics had either not yet opened or had operated for less than 6 months. Of the 20 clinics evaluated, 9 had operated 1 year or longer and 11 had operated less than 1 year. We asked networks to give us copies of the 20 completed evaluations; networks were able to provide us with 15. Our assessment of the 15 shows considerable variability between what had been described in the proposals and what was actually done. With the exception of one clinic, the evaluations were limited to processes and did not include results-oriented outcomes. For example, 6 of the 15 evaluations were memorandums documenting site visits where administrative and patient records were reviewed for legibility, physicians were checked for proper credentialing, and checks were performed to ensure that data entry was being performed correctly and in a timely fashion. In one instance, where the clinic had been operating for more than 1 year, the memorandum documenting the evaluation stated: “This review was intuitive, not explicit. The goal was to obtain a general idea of how well Dr. was doing . . . ongoing review should probably be done at 6-month intervals.” For the one clinic that we considered to have been evaluated, the evaluation plan contained a list of indicators with measurable criteria that could be compared against actual performance. (See table 5.)
We believe that using indicators with measurable criteria such as these could be helpful in measuring the effectiveness of VHA’s community-based clinics and is consistent with VHA’s evaluation guidance and the intent of its clinic evaluation requirement. Since networks started establishing new community-based clinics in 1995, VHA has generally collected information on clinic operations only as questions or concerns are raised by VA officials and others, as in the following cases: VHA surveyed the 22 networks in July 1997 to gather selected information on the status of 90 approved clinics, including whether clinics had started operating, budget information, and the number of visits clinics had actually experienced compared with what had been estimated. VHA prepared a report for the Senate Appropriations Committee addressing the need for a national plan for community-based clinics and responding to the findings and recommendations contained in our October 1996 report; the report basically held that a national plan for such clinics is unnecessary and presented no information that had not already been presented or discussed. VA’s Capital Budgeting and Oversight Service examined the operations of four clinics in one network in spring 1997; the report of that examination is still in draft form, but VHA officials told us that it examined problems associated with contractors and with monitoring clinics. Our assessment of VHA’s evaluation and community-based clinic guidance, the evaluations conducted so far, and VHA’s practice of calling for information only on an as-needed basis suggests that VHA’s guidance is not being implemented as intended and that VHA may not be aware that this is happening. VHA continues to lack the information needed to help ensure that networks are establishing community-based clinics in a consistent and equitable manner.
Neither VHA nor network officials are able to adequately answer basic questions such as the following: How many VHA primary care facilities in each network meet VHA’s travel standard by providing veterans reasonable access to health care (within 30 minutes of their homes)? How many current users in each network do not have reasonable access to VHA primary care? Of those veterans, how many have service-connected disabilities (the highest priority for care)? How many current users will obtain reasonable access through the establishment of new clinics in the next 5 years? Of those veterans, how many have service-connected disabilities? How many newly established clinics meet VHA’s performance goals and objectives? Network business plans, proposals, and responses to our surveys failed to provide adequate information to answer these key questions. The information that is available suggests considerable variation among networks, which raises concerns about the equity of veterans’ access to care even though networks have improved access for thousands of current users. This is because networks started at different access levels and have established clinics at widely varying rates. Moreover, networks appear to be planning without regard to the statutory priorities. As a result, they will spend limited resources on lower priority new users in 1998 and 1999, while improved access for thousands of higher priority current users will not be available until 2000 and beyond. To avoid such undesirable outcomes, and consistent with GPRA, VHA would need to establish results-oriented goals to ensure that each network affords reasonable access to VHA primary care for a minimum percentage of current users by 2002, with the intent of equalizing access systemwide to the maximum extent practical; establishes clinics so as to provide veterans improved access consistent with statutory priorities for care; and evaluates its clinics’ performance using a consistent set of minimal criteria.
VHA appears to have a timely opportunity to improve network planning activities, given that networks plan to complete their 1999 business plans within the next 3 months. Additional VHA guidance and other VHA assistance in developing networks’ 1999 business plans could result in a more consistent and thorough strategy for using clinics to equalize veterans’ geographic access to VHA primary care systemwide. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following actions: Set a national target level of performance that focuses each network on a goal of providing reasonable geographic access to VHA primary care for the highest percentage of current users practical by 2002. Require networks to include in their business plans the percentage of (1) current users, by priority status, who have reasonable access; (2) the remaining current users (without reasonable access), by priority status, who are targeted to receive improved access through the establishment of community clinics by 2002; and (3) current users, by priority status, who will not have reasonable access by 2002. Require networks to plan and propose new community-based clinics in a manner that ensures that veterans with the highest statutory priorities achieve reasonable access as quickly as possible, consistent with the requirements of the Veterans’ Health Care Eligibility Reform Act of 1996 (P.L. 104-262). Establish minimum criteria that all networks are to use annually in evaluating new clinics’ performance. Require networks to report their evaluation results annually to the Capital Budgeting and Oversight Service, a unit within VA, and to others for use in reviewing proposals for new clinics and for other purposes. In commenting on our draft report, VHA officials generally agreed with our findings and recommendations. To improve planning, actions are being taken to incorporate in networks’ business plans information on current users’ access to care now and by 2002.
While agreeing that there is variation in access, VHA pointed out that it was not clear that a national target for access is required to focus networks. VHA based this response on its preliminary analysis of clinic data, which indicates that, by 2002, 80 percent of high-priority veterans will, on average, have improved access to care. We agree that networks seem focused on improving access for current users by 2002, but we remain concerned about the potentially large variability among networks, which could be between 70 and 95 percent based on the estimates provided to us. As such, we believe that establishing a national target or goal could help ensure that networks remain focused on achieving reasonable access for the highest percentage of veterans practical while reducing the variations among networks to the greatest extent practical. VHA also agreed that minimum criteria should be established to evaluate clinic performance; VHA said it will identify a minimum set of criteria for all networks that will focus on the evaluation of outcomes. While noting that annual reporting seems excessive, VHA said it will perform annual evaluations until it can determine what a more reasonable time frame would be. VA also said it will report the results to the Capital Budgeting and Oversight Service, as recommended. Thereafter, VA suggested, and we agreed, that it seems reasonable to review and adjust clinic performance as part of the networks’ planning processes. VHA officials agreed with the spirit of our recommendation requiring networks to plan and propose clinics to ensure that veterans with the highest statutory priorities (those with service-connected disabilities) achieve access as quickly as possible. VHA explained, however, that the Veterans’ Health Care Eligibility Reform Act of 1996 will change veterans’ eligibility for medical services beginning October 1, 1998, by requiring veterans to enroll for care.
Service-connected veterans are in the higher enrollment priorities, but once veterans are enrolled, VHA will no longer differentiate among them by priority status. In other words, all enrolled veterans—not just those with the highest priorities—are to have equal access to needed services, and networks will necessarily need to address access for all enrolled veterans when planning community-based clinics. VHA suggested, and we agreed, that our recommendation require networks to plan clinics for veterans with the highest priorities in a manner consistent with the act. Please call me at (202) 512-7101 if you have any questions or need additional assistance. Other major contributors to this report include Paul Reynolds, Assistant Director; Michael O’Dell, Senior Social Science Analyst; Carolina Morgan, Senior Evaluator; Lawrence Moore, Evaluator; Barry Bedrick, Associate General Counsel; and Joan Vogel, Senior Evaluator (Computer Science). Figures I.1 through I.3 provide brief profiles—including the number of facilities, fiscal year budgets, total veteran population, and number of veteran patients served in the network area—for the three networks we visited. [Figure notes: Clinic shared with the VA Healthcare Network Upstate New York. Hudson Valley Healthcare System. New Jersey Healthcare System. Part of the multifacility Brooklyn Medical Center. Clinic shared with the VA Stars and Stripes Healthcare Network. These clinics comprise the Southern California System of Clinics.] VISN 1: New England Healthcare System Bennington, Vt. Essex County/Lynn, Mass. Framingham, Mass. Haverhill, Mass. Hyannis, Mass. Portsmouth, N.H. Torrington, Conn. Waterbury, Conn. Windham, Conn. VISN 2: Healthcare Network Upstate New York Binghamton, N.Y. Glens Falls, N.Y. Kingston, N.Y. (with VISN 3) Niagara Falls, N.Y. Rensselaer County, N.Y. Schenectady County, N.Y. South Saratoga County, N.Y. VISN 3: New York/New Jersey Network Bergen County, N.J. Central Harlem, N.Y. Elizabeth, N.J. Ft. Dix, N.J.
(with VISN 4) Jersey City, N.J. New Brunswick, N.J. Rockland County, N.Y. Staten Island, N.Y. Trenton, N.J. Yonkers, N.Y. VISN 4: Stars and Stripes Healthcare Network Aliquippa, Pa. Armstrong County, Pa. Bucks County, Pa. Cape May, N.J. Centre, Pa. Clarion, Pa. Clearfield, Pa. Crawford County, Pa. Greensburg, Pa. Lancaster, Pa. Lawrence County, Pa. McKean County, Pa. Mercer County, Pa. Schuylkill, Pa. Seaford, Del. Tobyhanna, Pa. West Middlesex, Pa. Williamsport, Pa. VISN 5: Capitol Network Charlotte Hall, Md. Fairfax, Va. (Vet Center) Hagerstown, Md. VISN 6: Mid Atlantic Network Charlotte, N.C. Greenville, N.C. Tazewell, Va. VISN 7: Healthcare System of Atlanta Albany, Ga. Dothan, Ala. Florence, S.C. Macon, Ga. Myrtle Beach, S.C. Walker County, Ala. VISN 8: Florida/Puerto Rico Sunshine Healthcare Network Bartow, Fla. Brookville, Fla. Cecil Field, Fla. Ft. Pierce, Fla. Homestead, Fla. North Pinellas County, Fla. Ocala, Fla. Sarasota, Fla. South St. Petersburg, Fla. Southwest Broward County, Fla. Valdosta, Ga. VISN 9: Mid South Healthcare Network Bowling Green, Ky. Charleston, W.V. Ft. Knox, Ky. Hopkinsville, Ky. Madison, Tenn. Smithville, Miss. Somerset, Ky. VISN 10: Healthcare System of Ohio Sandusky, Ohio (with VISN 11) VISN 11: Veterans Integrated Service Network South Bend, Ind. Yale, Mich. VISN 12: Great Lakes Healthcare System Aurora, Ill. Chicago Heights, Ill. Elgin, Ill. Hancock, Mich. LaSalle County, Ill. Menominee, Mich. Rhinelander, Wis. Union Grove, Wis. Wausau, Wis. Woodlawn, Ill. VISN 13: Upper Midwest Network Bismarck, N.D. Brainerd, Minn. Fergus Falls, Minn. Hibbing, Minn. Mankato, Minn. Owatonna, Minn. Pierre, S.D. Worthington, Minn. VISN 14: Central Plains Network Norfolk, Nebr. VISN 15: Heartland Network Cape Girardeau, Mo. Carmi, Ill. Ft. Leonard Wood, Mo. Garden City, Kans. Hays, Kans. Kirksville, Mo. Mt. Vernon, Ill. Paducah, Ky. Paragould, Ark. Richards-Gebaur/Belton, Mo. St. Joseph, Mo. West Plains, Mo.
VISN 16: Veterans Integrated Service Network Durant, Miss. Greenville, Miss. McAlester, Okla. Meridian, Miss. Mountain Home, Ark. Panama City, Fla. Ponca City, Okla. VISN 17: Heart of Texas Healthcare Alice, Tex. Beeville, Tex. Bonham, Tex. Brownsville, Tex. Brownwood, Tex. Decatur, Tex. Del Rio, Tex. Denton, Tex. Eagle Pass, Tex. Eastland, Tex. Ft. Worth, Tex. Hamilton, Tex. Kingsville, Tex. McKinney, Tex. Palestine, Tex. Pleasant Grove, Tex. Tyler, Tex. Uvalde, Tex. VISN 18: Southwest Healthcare Network Abilene, Tex. Casa Grande, Ariz. Ft. Stockton, Tex. Hobbs, N.M. Kingman, Ariz. Liberal, Kans. Monahans, Tex. Odessa, Tex. Safford, Ariz. San Angelo, Tex. Santa Rosa, N.M. Sierra Vista, Ariz. Stamford, Tex. Yuma, Ariz. VISN 19: Rocky Mountain Network Aurora, Colo. Casper, Wyo. Gallatin Valley, Mont. Great Falls, Mont. Greeley, Colo. Missoula, Mont. Montrose County, Colo. Riverton, Wyo. VISN 20: Northwest Network Bend, Oreg. Brookings, Oreg.; Crescent City, Calif. Salem, Oreg. Seattle/Puget Sound, Wash. Tri-Cities Area, Wash. VISN 21: Sierra Pacific Network Auburn, Calif. Merced, Calif. Vallejo, Calif. VISN 22: Desert Pacific Healthcare Network Anaheim, Calif. Chula Vista, Calif. Culver City, Calif. El Centro, Calif. Gardena, Calif. Henderson, Nev. Hollywood, Calif. Lancaster, Calif. Las Vegas, Nev. Lompoc, Calif. Oxnard, Calif. San Luis Obispo, Calif. Santa Ana, Calif. Victorville, Calif. Vista, Calif. VA Hospitals: Issues and Challenges for the Future (GAO/HEHS-98-32, Apr. 30, 1998). VA Health Care: Status of Efforts to Improve Efficiency and Access (GAO/HEHS-98-48, Feb. 6, 1998). VA Health Care: Improving Veterans’ Access Poses Financial and Mission-Related Challenges (GAO/HEHS-97-7, Oct. 25, 1996). VA Health Care: Opportunities for Service Delivery Efficiencies Within Existing Resources (GAO/HEHS-96-121, July 25, 1996). Veterans’ Health Care: Challenges for the Future (GAO/T-HEHS-96-172, June 27, 1996).
VA Health Care: Efforts to Improve Veterans’ Access to Primary Care Services (GAO/T-HEHS-96-134, Apr. 24, 1996). VA Health Care: Opportunities to Increase Efficiency and Reduce Resource Needs (GAO/T-HEHS-96-99, Mar. 8, 1996). VA Health Care: Exploring Options to Improve Veterans’ Access to VA Facilities (GAO/HEHS-96-52, Feb. 6, 1996). VA Health Care: How Distance From VA Facilities Affects Veterans’ Use of VA Services (GAO/HEHS-96-31, Dec. 20, 1995). VA Clinic Funding (GAO/HEHS-95-273R, Sept. 19, 1995).
Pursuant to a congressional request, GAO provided information on the Veterans Health Administration's (VHA) use of community-based clinics to improve veterans' access to primary care, focusing on: (1) VHA's planning process for new community-based clinics; (2) networks' implementation of VHA's planning guidelines; and (3) VHA and network oversight of clinic operations. GAO noted that: (1) VHA has strengthened the process that networks are to use when establishing new community-based clinics, thereby addressing several of GAO's recommendations; (2) VHA provided more detailed guidance, including a 30-minute travel standard and an expectation that clinics be established primarily to benefit current users rather than attract new users; (3) VHA developed a more structured planning process, including the development of network business plans covering a 5-year period, and established a task force in accordance with VHA's guidelines; (4) VHA's long-range goal is to increase the number of community-based clinics; (5) to that end, VHA has approved 198 clinics, and network business plans show that 402 additional clinics are to be established between 1998 and 2002; (6) the plans, however, do not address the percentage of current users who have reasonable access, or what percentage of those without reasonable access are targeted to receive enhanced access through the establishment of new clinics; (7) as a result, VHA's network business plans cannot be used to determine on a systemwide basis how well networks are using clinics to equalize veterans' access to primary care; (8) based on the limited information that networks can provide, it appears that the geographic accessibility of VHA primary care currently varies widely among networks and that while networks' efforts should reduce this variation, thousands of the VHA's 3.4 million current users will likely continue to have inequitable access for many years; (9) moreover, it appears that networks are planning to improve access 
for thousands of lower priority new users over the next two years, while thousands of higher priority current users are waiting considerably longer periods of time for reasonable access; (10) networks, which have primary responsibility for monitoring community-based clinic performance, have developed evaluation plans for proposed clinics, as VHA requires; (11) to date, few clinics have operated for more than 12 months; (12) as a result, most evaluation plans have not been implemented; and (13) network evaluation plans, however, vary widely, with few containing a common set of criteria or indicators that appear necessary to effectively assess clinic performance within or among networks.
During the last decade, Congress and the administration have taken several steps to improve the transparency of federal spending data. In 2006, Congress passed and the President signed the Federal Funding Accountability and Transparency Act of 2006 (FFATA) to increase the transparency and accountability of federal contracts and financial assistance awards. Among other things, FFATA required OMB to establish USAspending.gov, containing data on obligations for federal awards and subawards, which was launched in December 2007. One of the stated purposes of the DATA Act is to expand FFATA to include direct federal agency expenditures and link contract, loan, and grant spending information to federal programs so that taxpayers and policy makers can more effectively track federal spending. Reporting throughout the federal spending cycle. Full and effective implementation of the DATA Act will allow funds to be tracked at multiple points in the federal spending lifecycle. For example, once fully implemented, amounts appropriated, obligated, and subsequently outlayed for a particular program activity would all be publicly available on USAspending.gov or a successor website. These additional federal spending cycle data on appropriations, obligations, and outlays will provide more transparency on federal awards. USAspending.gov provides information on award amounts for grants, contracts, and other types of awards, but the only information currently available is data on federal award obligations. The DATA Act represents a significant change to the types of data reported by requiring additional budget and financial information, which, to date, has not been reported on USAspending.gov. The act requires budget and financial information to be reported on a monthly basis if practicable, but not less than quarterly. However, OMB's May 2015 guidance directs agencies to continue reporting award data at least bi-weekly.
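The lifecycle tracking described above (appropriations, obligations, and outlays rolled up for a program activity) can be illustrated with a short sketch. All field names and dollar amounts are invented for illustration; this is not the actual USAspending.gov data model.

```python
# Hypothetical sketch of tracking funds at multiple points in the federal
# spending lifecycle for a single program activity. Field names and amounts
# are invented; the actual DATA Act schema differs.
events = [
    {"program_activity": "Rural Housing", "stage": "appropriated", "amount": 5_000_000},
    {"program_activity": "Rural Housing", "stage": "obligated",    "amount": 3_200_000},
    {"program_activity": "Rural Housing", "stage": "outlayed",     "amount": 1_900_000},
]

def lifecycle_summary(events, program_activity):
    """Roll up amounts by lifecycle stage for one program activity."""
    summary = {"appropriated": 0, "obligated": 0, "outlayed": 0}
    for e in events:
        if e["program_activity"] == program_activity:
            summary[e["stage"]] += e["amount"]
    # Linking the stages lets derived balances fall out of the data:
    summary["unobligated"] = summary["appropriated"] - summary["obligated"]
    summary["unliquidated"] = summary["obligated"] - summary["outlayed"]
    return summary

summary = lifecycle_summary(events, "Rural Housing")
```

The derived balances at the end are the kind of analysis that becomes possible only when the appropriation, obligation, and outlay records are linked rather than reported in isolation.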
To cover appropriations and outlays in addition to obligations, OMB and Treasury officials noted that data will need to be pulled from budgetary and financial systems in addition to the multiple contract and assistance systems currently used. It is essential that all of this data be appropriately linked to achieve the full potential for users of this data inside and outside of government. Reporting on more types of federal spending. The DATA Act requires reporting on almost all types of federal spending. Currently, USAspending.gov reports data on federal awards including grants, contracts, and loans. Under the DATA Act, however, more budget and financial information will be available that should allow users of the data to organize and analyze the data in ways that are not currently possible. Some of these new types of spending information include: Budget and financial information on the different types of goods and services purchased by the federal government, such as personnel compensation, will be reported in the aggregate. Budget and financial information from financial arrangements of the federal government, such as public-private partnerships, interagency agreements, and user charges, will also be reported. As part of their guidance to agencies on DATA Act implementation, OMB lowered the threshold at which agencies must report data on financial assistance and procurement prime awards from $25,000 or greater to those awards greater than the micro-purchase threshold, which is currently $3,500. Improving data quality. Our prior work found that unclear guidance and weaknesses in executive branch oversight contributed to persistent challenges with data on USAspending.gov. These challenges relate to the quality and completeness of data submitted by federal agencies. For example, in 2010, we reported that USAspending.gov did not include information on awards from 15 programs at 9 agencies for fiscal year 2008. 
In that report, we also reviewed a sample of 100 awards on the website and found that each award had at least one data error. In June 2014, we reported that roughly $619 billion in assistance awards were not properly reported in fiscal year 2012. In addition, we found that few reported awards—between 2 and 7 percent—contained information that was fully consistent with agency records for all 21 data elements we examined. A factor that contributed to this error rate was the lack of guidance on how to interpret some data elements, including Award Description. See appendix II for more information on our recommendations related to these findings and OMB’s and Treasury’s actions to date. The DATA Act identifies the improvement of data quality as one of its purposes. Toward that end, the act requires that inspectors general conduct reviews of data samples submitted by their respective agency and subsequently assess and report on the data’s completeness, timeliness, quality, and accuracy. We are required to review these reports and then assess and compare the completeness, timeliness, quality, and accuracy of the data across the federal government. OMB and Treasury issued initial guidance to federal agencies in May 2015 on reporting requirements pursuant to FFATA as well as the new requirements that agencies must employ pursuant to the DATA Act. The guidance also directs agencies to implement data definition standards for the collection and reporting of agency-level and award-level data by May 9, 2017; implement a standard data exchange format for providing data to Treasury to be displayed on USAspending.gov or a successor site; and link agency financial systems with award systems by continuing the use of specified unique identification numbers for financial assistance awards and contracts. OMB asked agencies to submit DATA Act implementation plans in September 2015, concurrent with the fiscal year 2017 budget request.
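The element-by-element consistency comparison described earlier, in which a published award record is checked against the agency's own source records, might be sketched as follows. The data element names and record values here are invented for illustration and are not drawn from an actual agency system.

```python
# Hypothetical sketch of an element-by-element consistency check between a
# published award record and the agency's source record. Field names and
# values are invented; GAO's actual review covered 21 data elements.
def element_consistency(published, agency_record, elements):
    """Return the elements whose published values match agency records,
    plus the share of elements that are consistent."""
    consistent = {e for e in elements
                  if published.get(e) == agency_record.get(e)}
    return consistent, len(consistent) / len(elements)

elements = ["award_id", "recipient_name", "award_description", "obligation_amount"]
published = {"award_id": "C-17", "recipient_name": "Acme Corp",
             "award_description": "Cont Renewals All Types",
             "obligation_amount": 90_000}
agency = {"award_id": "C-17", "recipient_name": "Acme Corp",
          "award_description": "Lease of apartment building",
          "obligation_amount": 90_000}

consistent, rate = element_consistency(published, agency, elements)
# Only the award description diverges, so 3 of 4 elements are consistent.
```

An inspector general's sample-based review could apply the same comparison across many records and report the distribution of consistency rates.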
According to OMB staff, as of December 2015, all 24 CFO Act agencies as well as 27 smaller federal agencies have submitted implementation plans. OMB required the plans to include: (1) a timeline of tasks and steps toward implementing the requirements of this guidance; (2) an estimate of costs to implement these tasks and steps; (3) a detailed narrative that explains the required steps, identifies the underlying assumptions, and outlines the potential difficulties and risks to successfully implement the plan; and (4) a detailed project plan that agencies will develop over time. Additionally, OMB and Treasury issued a DATA Act Implementation Playbook in June 2015, which recommends eight key steps for agencies to fulfill their requirements under the DATA Act (see table 1). To support this effort, OMB and Treasury issued guidance to help agencies develop the plans and hosted workshops and conference calls to address agency questions. The DATA Act requires OMB and Treasury to establish government-wide financial data standards for any federal funds made available to or expended by federal agencies and recipients of federal funds. The specific items to be reported under the act are generally referred to as data elements. The overall data standardization effort consists of two distinct, but related, components: (1) establishing definitions which describe what is included in each data element with the aim of ensuring that information will be consistent and comparable, and (2) creating a data exchange standard with technical specifications which describe the format, structure, tagging, and transmission of each data element. The data exchange standard is also intended to depict the relationships between standardized data elements. On May 8, 2015, a year after the passage of the DATA Act, OMB and Treasury issued the first 15 standardized data element definitions, including definitions for 8 new elements introduced by the DATA Act. 
From June through August 2015, OMB and Treasury released an additional 42 standardized data element definitions for reporting under FFATA, as amended by the DATA Act. During this time, OMB and Treasury released data element definitions in stages and opened a 3-week feedback period for federal and nonfederal stakeholders to provide public input on the definitions before they were issued. During this period we separately met with OMB and Treasury staff several times to share our views and identify issues and concerns with proposed definitions. See figure 1 for a listing of the 57 standardized data elements grouped by type. See appendix III for the definitions of each of the data elements. The DATA Act requires that data standards—to the extent reasonable and practicable—incorporate widely-accepted common data elements, such as those developed by international standards-setting bodies, federal agencies with authority over contracting and financial assistance, and accounting standards organizations. Incorporating leading practices from international standards organizations offers one way to help reduce uncertainty and confusion when reporting and interpreting data standards. Developing a well-crafted data element definition is one key component to ensuring that a data standard produces consistent and comparable information. The ISO, a standards-setting body composed of international experts in various fields of study, has developed 13 leading practices for formulating data definitions for the purposes of specifying, describing, explaining, and clarifying the meaning of data. These practices include that definitions be precise and unambiguous, avoid circular reasoning, and be expressed without embedding definitions of other data or underlying concepts, among others. We found that the 57 DATA Act data element definitions largely followed ISO leading practices for the formulation of data definitions.
Specifically, 12 data element definitions met all of the ISO leading practices and each of the remaining 45 definitions met no fewer than 9 leading practices, meaning that even the lowest-rated data elements in our review adhered to almost 70 percent of the ISO leading practices. We also found variation in which of the leading practices each definition satisfied. For example, our analysis found that all 57 definitions followed the leading practices of avoiding circular reasoning and being stated as a descriptive phrase or sentence, whereas 38 of the 57 were determined to be sufficiently precise and unambiguous. Table 2 provides a summary of our findings applying the ISO leading practices for formulating data definitions to the definitions developed by OMB and Treasury as part of DATA Act implementation. Although most of the definitions generally adhered to ISO leading practices, examples where data elements did not do so raise potential concerns regarding an increased risk that agencies may not apply the definitions consistently, thus affecting the comparability of reported data. Data element definitions that are imprecise or ambiguous may allow for more than one interpretation by agency staff collecting, compiling, and reporting on these data and thus could result in inconsistent and potentially misleading reporting when aggregated across government or compared between agencies. For example, OMB and Treasury defined Award Description as “a brief description of the purpose of the award.” In our previous work on the data quality of USAspending.gov, we identified challenges with the Award Description data element, citing the wide range of information that agencies report as the description or purpose. Specifically, we found that agencies routinely provided information for this data element using shorthand descriptions, acronyms, or terminology that could only be understood by officials at the agency that made the award. 
For example, in our 2010 report we found that the description for one contract we reviewed read “4506135384!DUMMY LOA,” while the award records indicated that the award was for the purchase of metal pipes. Another was described as “Cont Renewals All Types,” while the award records showed the contract was for an apartment building. This lack of basic clarity would make the data element difficult for others outside the agency to understand and would also limit the ability to meaningfully aggregate or compare this data across the federal government. We made recommendations to OMB in 2010 and 2014 and to Treasury in 2014 to improve the accuracy and completeness of Award Description, which have yet to be addressed. At that time, Treasury officials neither agreed nor disagreed with our recommendations, while OMB staff generally agreed with them stating that they were consistent with actions required under the DATA Act. These OMB staff said while they would consider interim steps to improve data quality, they did not want to inhibit agency efforts to work toward implementation of the act. Appendix II provides more information on the status of these recommendations. In subsequent discussions, OMB staff stated that they are hesitant to make substantial changes to the reporting of Award Description, which focuses on the purpose of a federal award, before additional progress is made on the related and more complex issue of how to ascribe spending data to a specific government program. However, it is unclear why this should prevent them from taking steps such as providing agencies with guidance on how to avoid excessive jargon, provide a specific level of detail, or develop a standardized taxonomy of appropriate responses. 
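The kind of definition review described above can be sketched as a simple rubric check. The heuristics below are crude stand-ins we invented for the sketch; they are not the actual 13 ISO leading practices or GAO's scoring methodology.

```python
# Illustrative rubric in the spirit of the ISO-based definition review
# described above. The checks are invented heuristics, not ISO's actual
# leading practices or GAO's methodology.
def check_definition(definition):
    findings = {}
    words = definition.lower().replace(".", "").split()
    # Stated as a descriptive phrase or sentence, not a bare synonym.
    findings["descriptive_phrase"] = len(words) >= 4
    # Precise and unambiguous: flag vague qualifiers as a rough proxy.
    vague_terms = {"brief", "some", "various", "appropriate"}
    findings["precise"] = not (vague_terms & set(words))
    # Self-contained: avoid deferring the reader to another definition.
    findings["self_contained"] = "see" not in words
    return findings

# OMB's and Treasury's actual definition of Award Description:
result = check_definition("A brief description of the purpose of the award.")
# The word "brief" trips the precision heuristic, mirroring the concern
# discussed above that this definition allows widely varying responses.
```

A real review would, of course, rely on expert judgment rather than keyword matching, but automating even rough checks like these can help triage a large set of candidate definitions.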
While the data quality concerns presented by the definition of Award Description are relatively straightforward to address, other definitions that we found to be imprecise and ambiguous present greater challenges due to long-standing differences in reporting across agencies and among the federal grant, procurement, and loan communities. An example of this is the four data elements that OMB and Treasury have issued that collectively represent the concept of Primary Place of Performance. The location or place of performance of specific grant, contract, or other federal spending has long been a data element collected by agencies. However, in the past, agencies have taken varied approaches to reporting place of performance information—sometimes describing where the funded activity takes place, sometimes the recipient of the product or activity, or sometimes the location of the administrative headquarters of the provider or a sub-entity. The definitions issued by OMB and Treasury standardize some of the mechanics of what Primary Place of Performance covers, such as city, county, state, and ZIP+4 codes. In addition, OMB staff told us that, by using the words “where the predominant performance of the award will be accomplished” the definitions are intended to focus on where the majority of the activity actually takes place rather than, for example, the location of the ultimate recipient of the product or service funded by federal spending. However, OMB’s and Treasury’s definitions still leave room for differing interpretations that could result in agencies capturing and reporting this information differently. For example, OMB staff told us that they interpret the term “predominant performance” to mean “more than half,” but this clarification is not contained in the definition itself, nor in the accompanying white paper that was issued with the data element definitions. Other questions exist regarding the appropriate unit of analysis for making such a determination. 
For example, it is unclear whether “where the predominant performance of the award will be accomplished” is determined by the amount of time spent in a particular location when carrying out the award or by some other metric such as number of staff deployed or the amount of financial resources expended in a particular location. The standardized definitions for Primary Place of Performance do not address this level of detail, and, according to OMB staff, they have not issued guidance or other resources, such as a FAQ document, to help agencies operationalize this concept in a consistent and comparable way. Another concern involves how to assign a value for Primary Place of Performance when the activity being described does not readily lend itself to a discrete geospatial location (such as a consulting service provided in many locations) or if it spans multiple locations (such as a road traversing multiple counties or states). One approach that has been previously used for reporting the location of federal spending for road projects on USAspending.gov is to assign the spending to the county seat or state capital of the jurisdiction where the majority of the road was constructed. OMB staff told us that they would likely follow such an approach when reporting on Primary Place of Performance using the newly standardized definition in the future. While this may be potentially misleading in some situations, in the absence of a clearly better alternative it is critical that the particular decision rules OMB decides to follow are documented and clearly communicated to agencies providing this data as well as end-users. Figure 2 provides a notional illustration of some of the different places of performance that agencies could report for federally funded road projects based on the current definitions of these data elements.
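One possible decision rule for "predominant performance" can be sketched as follows: pick the location accounting for more than half of some chosen metric, and flag when no location does. The metric choice (dollars, hours, staff) is exactly the ambiguity discussed above; nothing here reflects official OMB or Treasury guidance.

```python
# Hypothetical decision rule for "predominant performance": the location
# with the largest share of a chosen metric, flagged as predominant only
# if it exceeds half the total (OMB staff's informal ">50%" reading).
# The metric is the reporter's choice, which is the ambiguity at issue.
def primary_place(shares):
    """shares: mapping of location -> metric value (dollars, hours, staff)."""
    total = sum(shares.values())
    location, amount = max(shares.items(), key=lambda kv: kv[1])
    predominant = amount > total / 2
    return location, predominant

# A road project spanning three counties, measured in miles of roadway:
# no single county exceeds half, so the ">50%" test fails even though
# County A has the largest share.
loc, clear_majority = primary_place({"County A": 4.0, "County B": 3.5, "County C": 2.5})
```

The flag makes the ambiguity visible: when no location clears the majority threshold, the reporter still has to choose one, and different agencies could reasonably choose differently absent a documented rule.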
Despite the potential for multiple interpretations of what should be reported for Primary Place of Performance, OMB staff told us that federal agencies have not raised this as a significant reporting challenge. However, feedback OMB and Treasury received from both federal and nonfederal stakeholders identified a number of concerns with these definitions including the need to more clearly define what is meant by “primary” place of performance and how to interpret the word “performance” for this definition. In responding to this feedback, OMB and Treasury acknowledged the difficulty of addressing stakeholder concerns through a single data element and that in the future, as part of their plans to adopt a more formal data governance structure, they expect to identify and standardize other location-related data elements to address other needs. In some cases OMB and Treasury will need to take additional steps to make data standards consistent and comparable for federal and nonfederal entities. For example, OMB and Treasury standardized the definition of Program Activity as required by the DATA Act and we found that this definition adhered to all 13 ISO leading practices. However, concerns still remain regarding the use of this data element. For example, OMB’s and Treasury’s guidance on Program Activity acknowledges that program activities can change from one year to the next and that Program Activity does not necessarily match “programs” as specified in the GPRA Modernization Act of 2010 or the Catalog of Federal Domestic Assistance. In responding to this guidance, officials at USDA said that when program activities change it is difficult to make comparisons of federal spending over time. Moreover, USDA officials noted that more guidance is needed to ensure that the public can accurately interpret Program Activity compared to the other common representations of federal programs. 
In our July 2015 testimony on DATA Act implementation, we reported that OMB and Treasury will need to build on the program activity structure and provide agencies with guidance if they are to meet one of the stated purposes of the DATA Act to link federal contract, loan, and grant spending information to federal programs to enable taxpayers and policy makers to track federal spending more effectively. In that testimony, we made a recommendation that OMB accelerate efforts to develop a federal program inventory to ensure that federal program spending data are provided to the public in a transparent, useful, and timely manner. During the hearing, an OMB official testified that, because the staff that would be involved in working on the program inventories is heavily involved in DATA Act implementation, he would not expect an update of the program inventories to happen before May 2017. Much remains to be done to effectively implement standard data element definitions across the federal government in a consistent and comparable way for reporting purposes. OMB and Treasury told us that they are making policy decisions and developing guidance to help agencies with implementing data standards. They expect to issue this guidance in spring 2016, and we will review it at that time. Until that guidance is issued, many questions remain unanswered regarding the extent to which agencies may need to change their policies, processes, and systems in order to report their financial data in compliance with the act. A senior HHS official told us that they have communicated to OMB and Treasury that in the absence of detailed guidance related to the policy, process, and technology changes that accompany the data element definitions, agencies cannot develop effective implementation plans or appropriately commit the necessary resources toward implementing the DATA Act because implementation efforts and timelines are highly dependent on this information.
Agencies must begin reporting data using the data definitions established under the DATA Act by May 2017. The extent to which these data will be consistent and comparable remains uncertain if OMB and Treasury do not address concerns with the quality of the data definitions. The DATA Act calls for OMB and Treasury to establish government-wide data standards, to the extent reasonable and practicable, that produce consistent and comparable data available in machine-readable formats. Treasury has taken the lead in drafting a technical schema intended to standardize the way financial assistance awards, contracts, and other financial data will be collected and reported under the DATA Act. Toward that end, the technical schema describes the standard format for data elements including their description, type, and length. In July 2015, we identified several potential concerns with version 0.2 of the schema, including that the schema might not prevent inconsistent reporting because it allowed alphabetic characters to be entered into a data field that should only accept numeric data. We also noted that the schema did not identify a computer markup language that agencies can use for communicating financial data standards. Identification of such a language provides standards for annotating or tagging information so that data can be transmitted over the Internet and can be readily interpreted by a variety of computer systems. OMB and Treasury addressed several of the concerns we raised in version 0.6 of the DATA Act schema issued in October 2015. For example, version 0.6 of the schema addressed inconsistencies between machine-readable and human-readable documentation and simplified the schema so that data elements, names, and definitions are consistent across all award types including grants, loans, and contracts.
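The schema concern noted above, such as a numeric field accepting alphabetic characters, is the kind of issue that field-level validation rules can catch. A minimal sketch follows, with invented field names and rules rather than the actual DATA Act schema.

```python
# Minimal sketch of field-level validation of the kind a data exchange
# schema should enforce. Field names and rules are invented for
# illustration and do not reflect the actual DATA Act schema.
import re

FIELD_RULES = {
    # Numeric field: digits only, optional sign and two decimal places.
    "obligation_amount": re.compile(r"^-?\d+(\.\d{1,2})?$"),
    # Date field in ISO 8601 calendar-date form.
    "period_of_performance_start": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
}

def validate(record):
    """Return the names of fields whose values violate their rules."""
    errors = []
    for field, pattern in FIELD_RULES.items():
        value = str(record.get(field, ""))
        if not pattern.fullmatch(value):
            errors.append(field)
    return errors

bad = validate({"obligation_amount": "12O0.00",  # letter O, not zero
                "period_of_performance_start": "2017-05-09"})
```

Here the mistyped amount fails the numeric rule while the date passes, so the error would be caught at submission rather than surfacing later as inconsistent published data.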
According to Treasury officials, subsequent versions of the schema will include additional information about complex data types and introduce extensible business reporting language (XBRL) formats in preparation for version 1.0. Treasury planned to issue version 1.0 by December 31, 2015, which it said would provide a more stable base to help agencies understand how to map their financial and award information to adhere to DATA Act requirements. However, instead of releasing version 1.0 as planned, they released another interim version—version 0.7. According to Treasury, this version incorporates additional financial data elements and attributes that are intended to support more accurate and detailed financial and budgetary accounting information. Given the importance of having a largely stable schema to serve as the foundation for developing subsequent technical processes at the agency level, any significant delay in releasing version 1.0 of the schema will likely have consequences for timely implementation of the act. Treasury officials told us they are not prepared to provide a time frame for completion of version 1.0. As previously mentioned, OMB’s and Treasury’s DATA Act Implementation Playbook outlines eight specific steps and timelines for implementing the DATA Act at the agency level. However, in some cases guidance that would help agencies carry out these steps has not been provided in time to coincide with when the agency was expected to carry out key activities outlined in the DATA Act Implementation Playbook. For example, step 3 of the 8-step plan calls for agencies to inventory agency data and associated business processes from February to September 2015 to identify where there are gaps in the data that are collected. OMB and Treasury provided technical tools including a template to help agencies inventory their financial and awards data to identify any gaps that could impede standardization. 
However, a stable DATA Act schema that specifies the form and content the data should be reported in was not available to agencies to help them fully carry out this step. Corporation for National and Community Service (CNCS) officials told us that because operational details for how data are to be exchanged have not yet been finalized, the agency has not taken steps to map agency financial and awards data to the schema. Treasury officials told us that, because they are using an iterative approach to technical implementation, they have not finalized an architecture for the collection and dissemination of government-wide data that could provide agencies with a description of the various technology layers, interoperability and structures, and reporting languages that they will be expected to use beginning in May 2017. In the absence of a clear and consistent set of technical specifications, agency technical staff, enterprise resource planning (ERP) vendors, and others tasked with adapting Treasury’s schema to work with the financial and award management environment at individual federal agencies may delay plans to carry out key steps until the schema is finalized. Alternatively, if agencies decide to move ahead and then significant changes are subsequently made to the schema, agencies could incur additional costs to revise their systems and processes to conform to a later version. In addition to the draft technical schema, Treasury is developing an intermediary service called a “broker” to standardize data formatting and assist reporting agencies in validating their data submissions before they are submitted to Treasury. As part of this effort, Treasury recently completed a limited-use pilot test of the broker service with the Small Business Administration (SBA) to test agency data submissions. Treasury has future plans to develop and test a broker prototype for contracts.
The pilot demonstrated a broker prototype that could extract data from SBA’s grant and financial systems, perform data validation, and convert data to the DATA Act schema for submission to Treasury’s database. A Treasury official acknowledged, however, that it may be more difficult for larger or more complex agencies to extract their data and perform these necessary functions. In September 2015, Treasury posted the limited-use SBA broker prototype on GitHub, a public online collaboration website, so that agencies and the public could begin reviewing the broker prototype. Treasury also made a set of high-level conceptual models available to agencies on MAX.gov to help them understand how they might extract data from their own financial and award systems. Treasury told us that they plan to build and host a centralized broker service, but have not specified a time frame when it will be available. In addition, Treasury is exploring the option to allow agencies to use the Treasury broker service locally to work within their own operational environments. According to these officials, agencies may also choose to work through ERP vendors who could develop commercial products that would be made available to agencies. However, because the SBA broker prototype was primarily tested on grants, a broker prototype that extracts and validates data from other types of awards, such as contracts and loans, is still not available to agencies. Moreover, little is known about how the prototype would work with other forms of awards, which are often located in different systems and use different definitions. Agencies need this information to begin testing the broker using their own data so they can develop effective strategies for data submission within the time frame—October 2015 to February 2016—prescribed in the DATA Act Implementation Playbook. 
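The extract, validate, and convert stages the pilot demonstrated can be sketched at a high level. Everything below is a hypothetical stand-in for Treasury's actual broker service; the field names, validation rules, and payload format are invented.

```python
# High-level sketch of the extract/validate/convert stages a broker
# service performs. All names and formats are hypothetical stand-ins
# for Treasury's actual broker, used here only to show the flow.
import json

def extract(source_rows):
    """Pull raw rows from an agency financial or award system (stubbed)."""
    return list(source_rows)

def validate(rows, required=("award_id", "obligated_amount")):
    """Separate rows that pass a required-fields check from those that fail."""
    good, errors = [], []
    for row in rows:
        missing = [f for f in required if f not in row]
        (errors if missing else good).append((row, missing))
    return [r for r, _ in good], errors

def convert(rows):
    """Serialize validated rows into a machine-readable submission payload."""
    return json.dumps({"submission": rows}, sort_keys=True)

rows = extract([{"award_id": "GR-1", "obligated_amount": 5000},
                {"award_id": "GR-2"}])  # second row lacks an amount
valid, errors = validate(rows)
payload = convert(valid)
```

Separating validation from conversion is the point of the broker design: errors are reported back to the submitting agency before anything reaches the central database, rather than after publication.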
The prototype tested grants data from SBA’s award system which is already linked to SBA’s financial management system through unique award identifiers. It is not known whether and how the broker prototype would work for a number of agencies that have financial and award systems that are not yet linked. According to a Treasury official, most agencies have not established linkages between their financial and award systems. Our review of three selected agency implementation plans and interviews with agency officials indicates that agencies are waiting for technical guidance on the broker service so that they can begin to develop plans to extract data from their current systems and map it to the DATA Act schema. For example, CNCS’s implementation plan submitted to OMB in September 2015 cites the lack of information about the broker as a significant challenge that could impede effective implementation of the data standards and new reporting requirements. As a result of this uncertainty, USDA officials told us that they decided to move ahead with the development of its own broker to compile and validate its data centrally and then forward it on to Treasury. Moreover, USDA officials noted that since much of Treasury’s technical guidance to date has focused on grants and cooperative agreements, little is known about how the broker service would work with other financial assistance awards such as loans and insurance programs.
The three agencies in our review—the Corporation for National and Community Service (CNCS), the Department of Health and Human Services (HHS), and the Department of Agriculture (USDA)—have begun addressing the requirements of the DATA Act by forming DATA Act teams, participating in government-wide deliberations on data standards, developing an inventory of their data, identifying systems containing pertinent data and the associated business practices, and assessing the policy, process, and technology changes that may be needed for successful implementation. In addition to providing guidance in the DATA Act Implementation Playbook, OMB and Treasury have regularly engaged agency officials to address questions and concerns related to implementing data standards. This outreach has included monthly conference calls with agency senior accountable officials (SAO), posted office hours for agencies to obtain feedback on the implementation process and raise OMB’s and Treasury’s awareness regarding specific implementation challenges, and a biweekly digest that is distributed to SAOs to keep agency staff informed about recent and upcoming DATA Act activities. Table 3 provides additional information regarding the status of DATA Act implementation activities for these three agencies. Once fully and effectively implemented, the DATA Act holds great promise for improving the transparency and accountability of federal spending data by providing consistent, reliable, and complete data on federal spending. In order to fully and effectively implement the DATA Act, the federal government will need to address complex policy and technical issues. Central among these is defining and developing common data elements across multiple reporting areas and standing up the necessary supporting systems and processes to enable reporting of the federal spending data required by the DATA Act. 
Toward that end, OMB and Treasury have made progress since the act was signed into law in May 2014, including issuing definitions for 57 data elements, developing an 8-step plan and timelines for agencies to follow as they move through the implementation process, and using a variety of outreach approaches to address agency questions and obtain feedback from federal and nonfederal stakeholders. These accomplishments exist alongside continued challenges that OMB and Treasury need to address in order to successfully meet the requirements and objectives of the act. Although the majority of the 57 data element definitions generally follow leading practices, we identified limitations with some data element definitions and their documentation that, if not addressed, could lead to inconsistent reporting and limit the ability to meaningfully aggregate or compare data for these elements across the federal government. Moreover, the standards will be of little value if agencies are not prepared to collect and report quality data in conformance with them. It is therefore vital that OMB and Treasury provide federal agencies with timely information and support so that they are in a position to effectively implement these standards. We provided OMB and Treasury with input on identified challenges related to the data element definitions and draft technical schema to help ensure that these challenges are addressed as implementation progresses. In addition, as agencies work through the 8-step implementation process, it will be important for OMB and Treasury to provide them with finalized technical guidance that can serve as a foundation for developing the necessary systems and processes for agency implementation. 
If guidance is not timed to coincide with agencies’ expected milestones for key steps in the implementation process, agencies could incur additional costs as they revise implementation plans to align with later versions of the guidance or could be forced to delay implementation.

1. To help ensure that agencies report consistent and comparable data on federal spending, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, provide agencies with additional guidance to address potential clarity, consistency, or quality issues with the definitions for specific data elements, including Award Description and Primary Place of Performance, and that they clearly document and communicate these actions to the agencies providing these data as well as to end users.

2. To ensure that federal agencies are able to meet their reporting requirements and timelines, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, take steps to align the release of finalized technical guidance, including the DATA Act schema and broker, with the implementation time frames specified in the DATA Act Implementation Playbook.

We provided a draft of this report to the Director of OMB; the Secretaries of the Treasury, HHS, and USDA; and the Chief Executive Officer of CNCS for review and comment. Both OMB and Treasury submitted written comments, which provided additional clarifying information related to our recommendations. OMB’s and Treasury’s written comments are discussed below and reproduced in appendixes IV and V, respectively. In addition, OMB, Treasury, CNCS, and HHS provided technical comments, which we incorporated as appropriate; USDA had no comments. In his written response, the OMB Controller generally concurred with our first recommendation to provide agencies with additional guidance to address potential clarity, consistency, or quality issues with data element definitions. 
However, in discussing OMB’s efforts to date to expand and improve federal spending transparency, the OMB Controller distinguished between the 11 data elements that were standardized in May 2015 and the remaining 46 data elements that were issued in August 2015. OMB interpreted the DATA Act requirement to standardize data elements as applying only to the 11 data elements, and indicated that the remaining 46 elements were standardized pursuant to the overarching policy goal of improving the consistency of federal spending data on USAspending.gov. OMB stated that the additional 46 data elements provided an opportunity to increase comparability and data quality. However, both the statutory language and the purposes of the DATA Act support the interpretation that OMB and Treasury were required to establish data standards for award and awardee information in addition to account-level information. The DATA Act states that the financial data standards OMB and Treasury are required to establish are to include financial and payment information required to be reported by federal agencies and entities receiving federal funds. Such information reported by entities receiving federal funds is information on awards and awardees, not account-level financial data. The act further provides that the data standards are to include, to the extent reasonable and practical, unique identifiers for federal awards and entities receiving federal awards. However, OMB does not interpret Award Identification Number and Awardee/Recipient Unique Identifier to be among the data elements it is required to standardize pursuant to the DATA Act. Lastly, OMB’s interpretation is inconsistent with Congress’s intent when it passed the DATA Act. As described in the legislative history of the act, Congress sought to address the known data quality issues with award and awardee information that had been reported under FFATA. To accomplish this, data standards for those elements were necessary. 
Without data standards for award and awardee information, the inconsistent and non-comparable reporting under FFATA that Congress sought to remedy through the DATA Act would continue. For these reasons, we conclude that the requirement in the DATA Act to establish data standards applies not only to account-level information, but also to award and awardee information. This is an important distinction for ensuring that federal agencies are held appropriately accountable for the completeness, quality, and accuracy of the spending data to be reported in the years to come.

In addition to responding to the recommendations made in this report, OMB also addressed the recommendation made in our July 2015 testimony, which called on OMB to accelerate efforts to determine how best to merge DATA Act purposes and requirements with requirements under the GPRA Modernization Act of 2010 (GPRAMA) to produce a federal program inventory. In response to this recommendation, the OMB Controller noted that OMB promulgated guidance in OMB Circular A-11, Sections 82 and 83, requiring agencies to start submitting object class and program activity information from their accounting systems to OMB. We recognize that requiring agencies to submit data on object class and program activity may be a step toward meeting the DATA Act requirement to report this information and may contribute to the broader effort of developing a federal program inventory as required by GPRAMA. However, much still remains to be done in order to produce such an inventory. We continue to believe, as we previously recommended, that OMB should accelerate those efforts, and we will continue to monitor progress in meeting this statutory requirement. 
Regarding our recommendation to align the release of finalized technical guidance with the implementation timelines specified in the DATA Act Implementation Playbook, OMB deferred matters of technical operationalization to Treasury, which has program responsibility for technical implementation. In their written response, Treasury officials deferred to OMB on our first recommendation to provide agencies with additional guidance to address potential clarity, consistency, or quality issues with data element definitions. Regarding our second recommendation to align the release of finalized technical guidance with the implementation timelines specified in the DATA Act Implementation Playbook, Treasury officials generally concurred, noting that they recognize the importance of providing agencies with timely technical guidance and reporting submission specifications.

We are sending copies of this report to the heads of the Departments of Agriculture, Health and Human Services, and the Treasury; OMB; and the Corporation for National and Community Service, as well as to interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. 
This report (1) identifies steps taken by the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) to establish government-wide data element definitions and the extent to which those definitions are consistent with leading practices or face challenges that could affect data quality; (2) reviews efforts by OMB and Treasury to provide agencies with technical implementation guidance to standardize how data are collected and reported, and related challenges; and (3) examines the status of selected federal agencies’ progress in meeting DATA Act requirements. This review is part of an ongoing effort to provide interim reports on the progress being made in implementing the DATA Act, while also meeting the audit reporting requirements mandated by the act. For the first objective, we reviewed our past work that raised concerns about the quality of federal spending data on USAspending.gov to inform our review of OMB’s and Treasury’s efforts to establish data standards. We analyzed the definitions of the 57 data elements issued from May 8, 2015, through August 31, 2015, and assessed the extent to which the definitions are consistent with DATA Act requirements and leading practices from standards set by the International Organization for Standardization (ISO). To assess the extent to which the data standards are consistent with ISO standards, we had two analysts independently rate each of the 57 data element definitions against all 13 ISO leading practices and determine whether the data element definition (1) met the leading practice, (2) did not meet it, (3) partially met it, or (4) was one to which the leading practice was not applicable. 
When the two raters independently arrived at the same rating for a particular leading practice and data element definition, they were considered to be in concurrence, and the agreed-upon rating was carried forward as the assessment of record. After the first round of assessments, the initial raters were in concurrence on 630 of the 741 necessary assessments (57 definitions rated against 13 leading practices). When the two raters arrived at different ratings for a particular leading practice and data element definition, a third rater independently assessed that pairing in an attempt to reach concurrence; this was necessary in 111 cases. When the third rater independently arrived at the same rating as one of the initial two raters, that rating was carried forward as the assessment of record. After this second round of assessments, the raters were in concurrence on 727 of the 741 necessary assessments. When the third rater arrived at a rating different from both of the initial raters for a particular leading practice and data element definition, the three raters met to discuss their application of the leading practice to the definition and to reach consensus on a final assessment of record. After these discussions, the raters were in concurrence on all 741 necessary assessments. For data element definitions related to federal budget terms, we supplemented our analysis with a legal review to ensure that assessments were both accurate and complete. For purposes of reporting, when the final assessment of record was that a given data element definition met or partially met the ISO leading practice, or that the leading practice was not applicable, the definition was considered to adhere to that leading practice. For the purposes of aggregating our assessments, we considered a “partial” response to be a “yes” because the ISO standards represent leading practices rather than firm requirements for OMB and Treasury to follow. 
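The multi-round reconciliation and aggregation rules described above (57 definitions times 13 leading practices yields 741 assessments; a third rater breaks ties; "partially met" and "not applicable" count as adherence) can be sketched as follows. The function names and rating labels are our own shorthand, not part of the GAO methodology documents.

```python
# Illustrative sketch of the multi-round reconciliation logic described
# above; the rater inputs here are invented for demonstration.

def reconcile(r1: str, r2: str, r3: str = None, consensus: str = None) -> str:
    """Return the assessment of record for one definition/practice pair."""
    if r1 == r2:            # round 1: the two initial raters agree
        return r1
    if r3 in (r1, r2):      # round 2: the third rater matches one of them
        return r3
    return consensus        # round 3: all three raters meet and decide

def adheres(rating: str) -> bool:
    """'met', 'partially met', and 'not applicable' all count as adherence."""
    return rating in ("met", "partially met", "not applicable")

# 57 data elements x 13 leading practices = 741 assessments in total.
assert 57 * 13 == 741

print(reconcile("met", "met"))                      # met
print(reconcile("met", "not met", r3="not met"))    # not met
print(adheres("partially met"))                     # True
```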
Therefore, we erred on the side of giving the agencies credit when the contents of a definition met parts of a leading practice. When the assessment of record was “no,” the data element definition was considered as not adhering to the given leading practice.

For the second objective, assessing OMB’s and Treasury’s development of a technical schema that specifies the format, structure, tagging, and transmission of each data element to allow consistency and comparability, we consulted the U.S. Digital Services Playbook and reviewed and analyzed the differences among versions 0.2, 0.5, and 0.6 of the schema. We reviewed applicable agency guidance and documentation related to the data standards and technical schema on OMB’s and Treasury’s websites. We also interviewed knowledgeable agency officials about their standards-setting and technical schema development processes.

For the third objective, we selected three agencies for review—the Department of Health and Human Services, the Department of Agriculture, and the Corporation for National and Community Service. Using a three-step selection process, we looked for agencies that met varying conditions: (1) compliance with requirements for federal financial management systems; (2) representation across multiple lines of business—grants, loans, and contracts; and (3) status as a Federal Shared Service Provider for financial management. Table 4 shows each selected agency in relation to these criteria. Although the results from our review of these three agencies are not generalizable to all agencies, they are designed to illustrate a range of conditions under which agencies are implementing the act. We assessed whether the selected agencies submitted their implementation plans and identified a senior accountable official (SAO) to report on progress. 
We also reviewed the implementation plans and related project plans and interviewed agency DATA Act team members for their assessment of implementation progress, including what controls are in place to ensure data quality, the challenges they have encountered thus far, and the extent to which identified challenges could impede timely and effective implementation. We conducted this performance audit from May 2015 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions on our audit objectives.

1. To improve the accuracy, completeness, and timeliness of all data submissions to the Office of Management and Budget’s (OMB) USAspending.gov website, the Director of OMB should revise guidance to federal agencies on reporting federal awards to clarify (1) the requirement that award titles describe the award’s purpose; and (2) requirements for validating and documenting agency awards data submitted by federal agencies. Implementation status: Closed—not implemented. Provisions of the Digital Accountability and Transparency Act of 2014 could address this recommendation, but implementation will take several years.

2. To improve the accuracy, completeness, and timeliness of all data submissions to OMB’s USAspending.gov website, the Director of OMB should include information on the city where work is performed in OMB’s public reporting of the completeness of agency data submissions. Implementation status: Closed—not implemented. OMB no longer uses the reporting mechanism discussed in the recommendation.

Implementation status: Open. 
As a result of passage of the Digital Accountability and Transparency Act (DATA Act) in May 2014, OMB is working with the Department of the Treasury (Treasury) and other members of the Government Accountability and Transparency Board to develop a long-term strategy to implement key transparency reforms including government-wide data standards. We will continue to monitor the progress of their efforts to implement key provisions of the act.

Implementation status: Open. OMB and Treasury are working to implement the DATA Act, which includes several provisions that could address our recommendations once fully implemented.

data submissions to the USAspending.gov website, the Director of OMB, in collaboration with Treasury’s Fiscal Service, should clarify guidance on (1) agency responsibilities for reporting awards funded by non-annual appropriations; (2) the applicability of USAspending.gov reporting requirements to non-classified awards associated with intelligence operations; (3) the requirement that award titles describe the award’s purpose (consistent with our prior recommendation); and (4) agency maintenance of authoritative records adequate to verify the accuracy of required data reported for use by USAspending.gov.

Implementation status: Open. As part of their DATA Act implementation efforts, OMB and Treasury have outlined a process for agencies to identify authoritative systems to validate agency spending information. In addition, the inspector general community is working on standard audit methodologies to verify the accuracy and completeness of agency reporting. Implementation of these efforts is planned to begin in fiscal year 2016.

Implementation status: Open. In commenting on a draft of this statement in July 2015, OMB staff stated that they neither agreed nor disagreed with this recommendation. 
Testifying before two subcommittees of the House Oversight and Government Reform Committee on July 29, 2015, OMB’s Acting Deputy Director for Management and Controller stated that the agency planned to address the issue of identifying “programs” for the purposes of DATA Act reporting but that such efforts would likely not start until sometime in fiscal year 2016 and would not be completed until after May 2017.

Implementation status: Open. In an August 31, 2015, whitepaper published on their DATA Act collaboration website, OMB and Treasury stated their intent to address this recommendation by working to establish in fiscal year 2016 a formal, long-term governance process and structure for future data standards maintenance. This governance structure would be the forum to review recommendations for new data elements to be reported to USAspending.gov and for additional data standards to be adopted moving forward.

Implementation status: Open. In commenting on a draft of the statement in July 2015, OMB staff stated that they neither agreed nor disagreed with this recommendation.

addressed as implementation efforts continue, the Director of OMB, in collaboration with the Secretary of the Treasury, should build on existing efforts and put in place policies and procedures to foster ongoing and effective two-way dialogue with stakeholders, including timely and substantive responses to feedback received on the Federal Spending Transparency GitHub website.

Recommendation/matter for congressional consideration:

1. To capitalize on the opportunity created by the DATA Act, the Secretary of the Treasury should reconsider whether certain assets—especially information and documentation such as memoranda of understanding (MOUs) that would help transfer the knowledge gained through the operation of the Recovery Operations Center—could be worth transferring to the Do Not Pay Business Center to assist in its mission to reduce improper payments. 
Additionally, the Secretary should document the decision on whether Treasury transfers additional information and documentation and what factors were considered in this decision. Implementation status: Open. Treasury concurred with our recommendation that it should consider additional knowledge transfers from the Recovery Operations Center to assist in the Do Not Pay Business Center’s mission to reduce improper payments and will document its rationale and final decision in this regard.

Matter for Congressional Consideration: 1. To help preserve a proven resource supporting the oversight community’s analytic capabilities, Congress may wish to consider directing the Council of the Inspectors General on Integrity and Efficiency (CIGIE) to develop a legislative proposal to reconstitute the essential capabilities of the Recovery Operations Center to help ensure federal spending accountability. The proposal should identify a range of options at varying scales for the cost of analytic tools, personnel, and necessary funding, as well as any additional authority CIGIE may need to ensure such an enduring, robust analytical and investigative capability for the oversight community. Implementation status: Open.

This appendix lists the data elements and their definitions, broken out by type, as issued by the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) on May 8, 2015, and August 31, 2015.

Account Level Data Standards: These data elements describe the appropriations accounts from which agencies fund Federal awards.

The basic unit of an appropriation generally reflecting each unnumbered paragraph in an appropriation act. An appropriation account typically encompasses a number of activities or projects and may be subject to restrictions or conditions applicable only to the account, the appropriation act, titles within an appropriation act, other appropriation acts, or the Government as a whole. 
An appropriations account is represented by a TAFS created by Treasury in consultation with OMB (defined in OMB Circular A-11).

A provision of law (not necessarily in an appropriations act) authorizing an account to incur obligations and to make outlays for a given purpose. Usually, but not always, an appropriation provides budget authority (defined in OMB Circular A-11).

Categories in a classification system that presents obligations by the items or services purchased by the Federal Government. Each specific object class is defined in OMB Circular A-11 § 83.6 (defined in OMB Circular A-11).

Obligation means a legally binding agreement that will result in outlays, immediately or in the future. When you place an order, sign a contract, award a grant, purchase a service, or take other actions that require the Government to make payments to the public or from one Government account to another, you incur an obligation. It is a violation of the Antideficiency Act (31 U.S.C. § 1341(a)) to involve the Federal Government in a contract or obligation for payment of money before an appropriation is made, unless authorized by law. This means you cannot incur obligations in a vacuum; you incur an obligation against budget authority in a Treasury account that belongs to your agency. It is a violation of the Antideficiency Act to incur an obligation in an amount greater than the amount available in the Treasury account. This means that the account must have budget authority sufficient to cover the total of such obligations at the time the obligation is incurred. In addition, the obligation you incur must conform to other applicable provisions of law, and you must be able to support the amounts reported by the documentary evidence required by 31 U.S.C. § 1501. Moreover, you are required to maintain certifications and records showing that the amounts have been obligated (31 U.S.C. § 1108). Additional detail is provided in Circular A-11. 
New borrowing authority, contract authority, and spending authority from offsetting collections provided by Congress in an appropriations act or other legislation, or unobligated balances of budgetary resources made available in previous legislation, to incur obligations and to make outlays (defined in OMB Circular A-11).

Payments made to liquidate an obligation (other than the repayment of debt principal or other disbursements that are “means of financing” transactions). Outlays generally are equal to cash disbursements but also are recorded for cash-equivalent transactions, such as the issuance of debentures to pay insurance claims, and in a few cases are recorded on an accrual basis such as interest on public issues of the public debt. Outlays are the measure of Government spending (defined in OMB Circular A-11).

A specific activity or project as listed in the program and financing schedules of the annual budget of the United States Government (defined in OMB Circular A-11).

Treasury Account Symbol: The account identification codes assigned by the Department of the Treasury to individual appropriation, receipt, or other fund accounts. All financial transactions of the Federal Government are classified by TAS for reporting to the Department of the Treasury and the Office of Management and Budget (defined in OMB Circular A-11).

Treasury Appropriation Fund Symbol: The components of a Treasury Account Symbol – allocation agency, agency, main account, period of availability and availability type – that directly correspond to an appropriations account established by Congress (defined in OMB Circular A-11).

Unobligated balance means the cumulative amount of budget authority that remains available for obligation under law in unexpired accounts at a point in time. The term “expired balances available for adjustment only” refers to unobligated amounts in expired accounts. Additional detail is provided in Circular A-11. 
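The Treasury Appropriation Fund Symbol definition above names the components that make up the account symbol. As a simplified, hypothetical illustration of that component structure (the layout and example values below are our own, not Treasury's authoritative format), a TAFS could be modeled as a small data structure:

```python
# Simplified, hypothetical model of the TAFS components named in the
# definition above: allocation agency, agency, main account, period of
# availability, and availability type. Real Treasury formats differ.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TAFS:
    agency: str                         # e.g., "020" (illustrative code)
    main_account: str                   # e.g., "0550" (illustrative code)
    availability_start: Optional[int]   # first fiscal year of availability
    availability_end: Optional[int]     # last fiscal year; None for no-year
    availability_type: str              # "annual", "multi-year", or "no-year"
    allocation_agency: Optional[str] = None

    def label(self) -> str:
        """Render a human-readable label for the account."""
        if self.availability_type == "no-year":
            period = "X"
        elif self.availability_start == self.availability_end:
            period = str(self.availability_start)
        else:
            period = f"{self.availability_start}/{self.availability_end}"
        return f"{self.agency}-{period}-{self.main_account}"

acct = TAFS(agency="020", main_account="0550",
            availability_start=2015, availability_end=2016,
            availability_type="multi-year")
print(acct.label())  # 020-2015/2016-0550
```

The point of the sketch is only that a TAFS is a composite key: every component must be captured consistently for account-level reporting to be comparable across agencies.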
The date the action being reported was issued/signed by the Government or a binding agreement was reached.

A brief description of the purpose of the award.

Award Identification (ID) Number: The unique identifier of the specific award being reported, i.e., the Federal Award Identification Number (FAIN) for financial assistance and the Procurement Instrument Identifier (PIID) for procurement.

The identifier of an action being reported that indicates the specific subsequent change to the initial award.

Description (and corresponding code) that provides information to distinguish the type of contract, grant, or loan and provides the user with more granularity into the method of delivery of the outcomes.

A collection of indicators of different types of recipients based on socio-economic status and organization/business areas.

The number assigned to a Federal area of work in the Catalog of Federal Domestic Assistance.

The title of the area of work under which the Federal award was funded in the Catalog of Federal Domestic Assistance.

The identifier that represents the North American Industrial Classification System (NAICS) Code assigned to the solicitation and resulting award, identifying the industry in which the contract requirements are normally performed.

The title associated with the NAICS Code.

For procurement, the date on which, for the award referred to by the action being reported, no additional orders referring to it may be placed. This date applies only to procurement indefinite delivery vehicles (such as indefinite delivery contracts or blanket purchase agreements). Administrative actions related to this award may continue to occur after this date. The period of performance end dates for procurement orders issued under the indefinite delivery vehicle may extend beyond this date.

The identifier of the procurement award under which the specific award is issued, such as a Federal Supply Schedule. 
This data element currently applies to procurement actions only.

The current date on which, for the award referred to by the action being reported, awardee effort completes or the award is otherwise ended. Administrative actions related to this award may continue to occur after this date. This date does not apply to procurement indefinite delivery vehicles under which definitive orders may be awarded.

For procurement, the date on which, for the award referred to by the action being reported, if all potential pre-determined or pre-negotiated options were exercised, awardee effort is completed or the award is otherwise ended. Administrative actions related to this award may continue to occur after this date. This date does not apply to procurement indefinite delivery vehicles under which definitive orders may be awarded.

Period of Performance Start Date: The date on which, for the award referred to by the action being reported, awardee effort begins or the award is otherwise effective.

The address where the predominant performance of the award will be accomplished. The address is made up of six components: Address Lines 1 and 2, City, County, State Code, and ZIP+4 or Postal Code.

U.S. congressional district where the predominant performance of the award will be accomplished. This data element will be derived from the Primary Place of Performance Address.

Country code where the predominant performance of the award will be accomplished.

Name of the country represented by the country code where the predominant performance of the award will be accomplished.

Code indicating whether an action is an individual transaction or aggregated.

The cumulative amount obligated by the Federal Government for an award, which is calculated by USAspending.gov or a successor site. For procurement and financial assistance awards except loans, this is the sum of Federal Action Obligations. For loans or loan guarantees, this is the Original Subsidy Cost. 
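The cumulative-amount definition above amounts to a simple computation rule: sum the Federal Action Obligations for most awards, but use the Original Subsidy Cost for loans and loan guarantees. A hedged sketch of that rule (the record fields and example figures are invented for illustration):

```python
# Illustrative sketch of the Total Funding Amount rule quoted above:
# sum the Federal Action Obligations for procurement and non-loan
# financial assistance awards; use the Original Subsidy Cost for loans
# and loan guarantees. Record fields are invented for demonstration.

def total_funding_amount(award: dict) -> float:
    if award["type"] in ("loan", "loan guarantee"):
        return award["original_subsidy_cost"]
    return sum(award["federal_action_obligations"])

# Obligations can include de-obligations (negative amounts).
grant = {"type": "grant",
         "federal_action_obligations": [100000.0, -5000.0, 25000.0]}
loan = {"type": "loan", "original_subsidy_cost": 42000.0,
        "federal_action_obligations": [300000.0]}  # face value, not used here

print(total_funding_amount(grant))  # 120000.0
print(total_funding_amount(loan))   # 42000.0
```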
For procurement, the total amount obligated to date on a contract, including the base and exercised options.

Amount of the Federal Government’s obligation, de-obligation, or liability, in dollars, for an award transaction.

For financial assistance, the amount of the award funded by non-Federal source(s), in dollars. Program Income (as defined in 2 C.F.R. § 200.80) is not included until such time as Program Income is generated and credited to the agreement.

For procurement, the total amount that could be obligated on a contract if the base and all options are exercised.

The name of the awardee or recipient that relates to the unique identifier. For U.S. based companies, this name is what the business ordinarily files in formation documents with individual states (when required).

The unique identification number for an awardee or recipient. Currently the identifier is the 9-digit number assigned by Dun & Bradstreet, referred to as the DUNS® number.

First Name: The first name of an individual identified as one of the five most highly compensated “Executives.” “Executive” means officers, managing partners, or any other employees in management positions.

Middle Initial: The middle initial of an individual identified as one of the five most highly compensated “Executives.” “Executive” means officers, managing partners, or any other employees in management positions.

Last Name: The last name of an individual identified as one of the five most highly compensated “Executives.” “Executive” means officers, managing partners, or any other employees in management positions.

The cash and noncash dollar value earned by one of the five most highly compensated “Executives” during the awardee’s preceding fiscal year and includes the following (for more information see 17 C.F.R. 
§ 229.402(c)(2)): salary and bonuses; awards of stock, stock options, and stock appreciation rights; earnings for services under non-equity incentive plans; change in pension value; above-market earnings on deferred compensation which is not tax qualified; and other compensation.

The awardee or recipient’s legal business address where the office represented by the Unique Entity Identifier (as registered in the System for Award Management) is located. In most cases, this should match what the entity has filed with the State in its organizational documents, if required. The address is made up of five components: Address Lines 1 and 2, City, State Code, and ZIP+4 or Postal Code.

Legal Entity Congressional District: The congressional district in which the awardee or recipient is located. This is not a required data element for non-U.S. addresses.

Code for the country in which the awardee or recipient is located, using the ISO 3166-1 Alpha-3 GENC Profile, and not the codes listed for those territories and possessions of the United States already identified as “states.”

The name corresponding to the Country Code.

Ultimate Parent Legal Entity Name: The name of the ultimate parent of the awardee or recipient. Currently, the name is from the global parent DUNS® number.

The unique identification number for the ultimate parent of an awardee or recipient. Currently the identifier is the 9-digit number maintained by Dun & Bradstreet as the global parent DUNS® number.

A department or establishment of the Government as used in the Treasury Account Fund Symbol (TAFS).

The name associated with a department or establishment of the Government as used in the Treasury Account Fund Symbol (TAFS).

Identifier of the level n organization that awarded, executed, or is otherwise responsible for the transaction.

Name of the level n organization that awarded, executed, or is otherwise responsible for the transaction. 
Identifier of the level 2 organization that awarded, executed or is otherwise responsible for the transaction. Name of the level 2 organization that awarded, executed or is otherwise responsible for the transaction. Data Definition The 3-digit CGAC agency code of the department or establishment of the Government that provided the preponderance of the funds for an award and/or individual transactions related to an award. Name of the department or establishment of the Government that provided the preponderance of the funds for an award and/or individual transactions related to an award. Identifier of the level n organization that provided the preponderance of the funds obligated by this transaction. Name of the level n organization that provided the preponderance of the funds obligated by this transaction. Identifier of the level 2 organization that provided the preponderance of the funds obligated by this transaction. Name of the level 2 organization that provided the preponderance of the funds obligated by this transaction. In addition to the contact named above, J. Christopher Mihm (Managing Director), Peter Del Toro (Assistant Director), Kathleen Drennan (analyst- in-charge), Shirley Hwang, Jason Lyuke, Kiran Sreepada and David Watsula made major contributions to this report. Other key contributors include Shari Brewster; Mark Canter; Jenny Chanley; Robert Gebhart; Charles Jones; Lauren Kirkpatrick; Michael LaForge; Donna Miller; Laura Pacheco; Carl Ramirez; Paula Rascona; Andrew J. Stephens; James Sweetman, Jr.; and Carroll Warfield, Jr. Additional members of GAO’s DATA Act Working Group also contributed to the development of this report.
The DATA Act directed OMB and Treasury to establish government-wide data standards by May 2015 to improve the transparency and quality of federal spending data. Agencies must begin reporting spending data in accordance with these standards by May 2017 and must publicly post spending data in machine-readable formats by May 2018. Consistent with GAO’s mandate under the act, this report is part of a series of products that GAO will provide to the Congress as DATA Act implementation proceeds. This report (1) identifies steps taken by OMB and Treasury to standardize data element definitions and the extent to which those definitions are consistent with leading practices or face challenges that could affect data quality; (2) reviews efforts by OMB and Treasury to provide agencies with technical implementation guidance and related challenges; and (3) examines the implementation status of selected federal agencies. GAO analyzed data standards against leading practices; reviewed key implementation documents, technical specifications, and applicable guidance; and interviewed staff at OMB, Treasury, and other selected agencies.

As required by the Digital Accountability and Transparency Act of 2014 (DATA Act), the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) issued definitions for 57 federal spending data elements. GAO found that most definitions adhered to leading practices derived from international standards for formulating data definitions. Specifically, 12 of the 57 definitions met all 13 leading practices and none met fewer than 9. However, GAO found several definitions that could lead to inconsistent reporting. For example, as shown in the figure below, the Primary Place of Performance definitions’ inclusion of the word “predominant” leaves much open to interpretation. Without more interpretive clarification, agencies run the risk of reporting data that cannot be aggregated government-wide.
OMB and Treasury addressed some of GAO’s earlier concerns on draft technical guidance for implementing data standards. However, while OMB and Treasury have released interim versions of technical guidance, they have not yet issued final guidance to provide a stable base for agency implementation, which could impede agencies’ efforts. They are also developing an intermediary service (“broker”) to standardize and validate agency data submissions. GAO’s review of selected implementation plans found that agencies need the technical guidance and the broker service to be finalized before they can develop detailed agency-level plans. If this guidance is not aligned with agency implementation timelines, agencies may delay taking key steps or need to revise existing plans once final technical guidance is released, thereby hindering their ability to meet DATA Act requirements and timelines. GAO found that the three agencies it reviewed—the Departments of Agriculture and Health and Human Services, as well as the Corporation for National and Community Service—have formed internal teams and are inventorying their data and assessing any needed changes to policies, processes, and technology to implement the DATA Act. GAO recommends that OMB and Treasury (1) provide agencies with clarifications to address potential quality issues with the definitions, and (2) take steps to align the release of finalized technical guidance and the broker service with agency implementation time frames. OMB and Treasury generally concurred with GAO’s recommendations.
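The validation role that the planned broker service would play can be illustrated with a short sketch. The element names, formats, and rules below are simplified assumptions for illustration only; they are not Treasury's actual broker logic or the official data element schemas.

```python
# Illustrative sketch of a DATA Act "broker"-style validation step.
# The field names and rules are simplified assumptions, not the actual
# broker implementation or the official schemas.
import re

def validate_record(record):
    """Return a list of validation errors for one submission record."""
    errors = []
    # Awardee unique identifier: currently a 9-digit DUNS number.
    if not re.fullmatch(r"\d{9}", record.get("awardee_unique_identifier", "")):
        errors.append("awardee_unique_identifier must be a 9-digit number")
    # Federal action obligation: a dollar amount (may be negative for de-obligations).
    try:
        float(record.get("federal_action_obligation", ""))
    except ValueError:
        errors.append("federal_action_obligation must be a dollar amount")
    # Country code: a 3-letter code in the style of ISO 3166-1 Alpha-3.
    if not re.fullmatch(r"[A-Z]{3}", record.get("legal_entity_country_code", "")):
        errors.append("legal_entity_country_code must be a 3-letter code")
    return errors

record = {
    "awardee_unique_identifier": "123456789",
    "federal_action_obligation": "-2500.00",
    "legal_entity_country_code": "USA",
}
print(validate_record(record))  # an empty list means the record passes all checks
```

A service along these lines would let agencies catch records that cannot be aggregated government-wide before submission, which is why agencies need its final specification before completing their own implementation plans.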
Anatomic pathology services aid in the diagnosis and treatment of diseases such as cancers and gastroenteritis—a condition that causes irritation and inflammation of the stomach and intestines. Medicare pays providers for performing the services and subsequently interpreting the results. Payment for the performance of the services can be made through different payment systems, depending on where the anatomic pathology service is performed. In 2010, Medicare paid about $1.28 billion under the physician fee schedule for anatomic pathology services across all settings, of which about $945 million was for services performed in physician offices and independent laboratories. Anatomic pathology services involve the examination of tissues and other specimens to diagnose diseases, such as cancers and gastroenteritis, and guide patient care. The services may be performed after a biopsy procedure used to obtain tissue samples. For example, after removing tissue samples during a biopsy procedure on a patient’s prostate, a urologist may refer the samples for examination to determine, on the basis of that analysis, whether the patient has prostate cancer. After collecting these tissue samples, a non-self-referring provider may send them to an independent diagnostic laboratory, a hospital laboratory, or pathology physician group for further preparation and analysis. In contrast, self-referring providers may prepare specimens, evaluate specimens, or both at their practices, rather than involving an external diagnostic laboratory or pathology physician group. For example, the ordering provider’s group practice may have a technician who prepares specimens into slides, include a pathologist who interprets these specimens, or both. Providers have discretion in determining the number and type of tissue samples that become a specimen.
For example, a provider referring anatomic pathology services may include more than one tissue sample in a specimen if the samples are from the same areas of abnormal tissue (see fig. 1). Alternatively, a provider may choose to create multiple specimens, each containing a single tissue sample. Providers differ on whether or to what extent tissue samples can be combined in creating a specimen or whether each tissue sample must become a specimen. For example, urologists differ on whether it is clinically appropriate to combine tissue samples obtained through a prostate biopsy procedure or whether each tissue sample must become a specimen. The resultant number of specimens has implications for payment, as each specimen submitted for analysis can be billed to Medicare separately as an anatomic pathology service. CMS policy states that specimens submitted for individual examination should be medically reasonable and necessary for diagnosis. Finally, a pathologist—a specialty provider trained to interpret specimens—examines the specimen with and without a microscope and prepares written results of this examination for the referring provider. Depending on the level of complexity, biopsy procedures can involve risks for patients. For example, biopsies of the skin to detect cancer are generally considered safe, but complications such as bleeding, bruising, or infection can occur. Additional complications, such as difficulty urinating and infections resulting in hospitalization, can occur from other biopsy procedures. Medicare’s payments for anatomic pathology services are separated into two components—the technical component (TC) and the professional component (PC). The TC payment is intended to cover the cost of preparing a specimen for analysis, including the costs for equipment, supplies, and nonphysician staff. The PC payment is intended to cover the provider’s time examining the specimen and writing a report on the findings.
The PC and TC can be billed together, on what is called a global claim, or alternatively the components can be billed separately. For instance, a global claim could be billed if the same provider prepares and examines the specimen, whereas the TC and PC could be billed separately if the performing and interpreting providers are different. Medicare reimburses providers through different payment systems depending on where the anatomic pathology service is performed. When an anatomic pathology service is performed in a provider’s office or independent clinical laboratory, both the PC and TC are reimbursed under the Medicare physician fee schedule. Alternatively, when the service is performed in an institutional setting such as a hospital inpatient department, the provider is reimbursed under the Medicare physician fee schedule for the PC, while the TC is reimbursed under a different Medicare payment system. For instance, the TC of an anatomic pathology service performed in a hospital inpatient setting is reimbursed through a facility payment made under Medicare Part A. In response to concerns about potential overutilization of anatomic pathology services due to physician self-referral, CMS established rules limiting the reimbursements allowed under certain self-referral arrangements. Specifically, in 2008 CMS imposed an “anti-markup rule” that prohibits providers from billing Medicare for anatomic pathology services for amounts that exceed what the providers themselves pay to subcontract the services from other providers or pathology laboratories. However, in the 2009 physician fee schedule final rule, CMS identified an exception to the anti-markup rule: a service may be marked up when performed by a physician who shares a practice with the billing provider. Since then, arrangements in which a provider group practice includes a pathologist in the practice’s office space have become a common self-referral arrangement.
In the 2009 physician fee schedule, CMS also introduced a payment change for anatomic pathology services related to a specific biopsy procedure because of concerns about overpayment. Specifically, CMS began paying for multiple anatomic pathology services from prostate saturation biopsy procedures through a single payment, rather than paying for each specimen individually. This specific biopsy procedure involves taking numerous tissue samples—typically 30 to 60—to increase the likelihood of detecting prostate cancer in a subgroup of high-risk individuals in whom previous conventional prostate biopsies had been negative. As a result, CMS introduced four new HCPCS codes to pay for these specimens, which were previously paid through HCPCS 88305. The four HCPCS codes pay for 1 to 20 specimens (G0416), 21 to 40 specimens (G0417), 41 to 60 specimens (G0418), and more than 60 specimens (G0419). The payment change resulted in a substantial decrease in payment for anatomic pathology services resulting from prostate saturation biopsy procedures. CMS also reduced its payment for anatomic pathology services in 2013 as part of its efforts to examine the payment for certain high-volume services. Specifically, CMS determined that fewer resources—equipment, supplies, and nonphysician staff—were required to prepare anatomic pathology services, which in turn reduced payment for the TC. Effective January 1, 2013, CMS reduced Medicare’s reimbursement for anatomic pathology services under the physician fee schedule by lowering reimbursement for the TC by approximately half. With this change, a payment for a global claim for an anatomic pathology service was reduced by approximately 30 percent. In 2010, there were about 16.2 million anatomic pathology services performed in all settings, including physician offices and hospitals.
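The bundled prostate saturation biopsy codes described above amount to a tiered lookup over the specimen count. The function below is an illustrative sketch of those code ranges only, not CMS claims-processing software:

```python
def prostate_saturation_biopsy_code(specimen_count):
    """Map a prostate saturation biopsy specimen count to the bundled
    HCPCS code introduced in the 2009 physician fee schedule.
    Illustrative sketch only; real claims processing involves far more logic."""
    if specimen_count < 1:
        raise ValueError("at least one specimen is required")
    if specimen_count <= 20:
        return "G0416"   # 1 to 20 specimens, one bundled payment
    if specimen_count <= 40:
        return "G0417"   # 21 to 40 specimens
    if specimen_count <= 60:
        return "G0418"   # 41 to 60 specimens
    return "G0419"       # more than 60 specimens

print(prostate_saturation_biopsy_code(45))  # G0418
```

Because each tier carries a single payment, submitting 20 specimens rather than 10 from the same procedure does not change the amount paid. That contrasts with per-specimen billing under HCPCS 88305, where each additional specimen is paid separately.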
In 2010, expenditures for anatomic pathology services paid under the physician fee schedule totaled about $1.28 billion across all settings. About $945 million of the $1.28 billion—74 percent—in expenditures for anatomic pathology services in 2010 was for services performed in physician offices and independent laboratories. The number of self-referred anatomic pathology services increased at a faster rate than non-self-referred anatomic pathology services from 2004 through 2010. Similarly, expenditures for self-referred anatomic pathology services increased at a faster rate than expenditures for non-self-referred services. The share of anatomic pathology services that were self-referred increased overall during the period we reviewed. While both the number of self-referred and non-self-referred anatomic pathology services grew overall from 2004 through 2010, self-referred services increased at a faster rate than non-self-referred services. Specifically, the number of self-referred anatomic pathology services more than doubled over the period we reviewed, growing from about 1.06 million services in 2004 to about 2.26 million services in 2010 (see fig. 2). In contrast, the number of non-self-referred anatomic pathology services increased about 38 percent, growing from about 5.64 million services to about 7.77 million services. Because of the faster growth in self-referred anatomic pathology services, the proportion of anatomic pathology services that were self-referred grew from about 15.9 percent in 2004 to about 22.5 percent in 2010. Notably, the number of self-referred anatomic pathology services increased from 2004 through 2010 even after accounting for the decrease in the number of Medicare FFS beneficiaries. Specifically, the number of self-referred anatomic pathology services per 1,000 Medicare FFS beneficiaries grew from about 30 to about 64, an increase of about 113 percent.
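The growth rates above follow directly from the reported counts, as a quick check confirms:

```python
def pct_growth(start, end):
    """Percentage growth from a starting value to an ending value."""
    return (end - start) / start * 100

# Self-referred services grew from about 1.06 million (2004) to 2.26 million (2010).
print(round(pct_growth(1.06, 2.26)))  # 113, i.e., more than doubled
# Non-self-referred services grew from about 5.64 million to 7.77 million.
print(round(pct_growth(5.64, 7.77)))  # 38
# Self-referred services per 1,000 Medicare FFS beneficiaries grew from about 30 to 64.
print(round(pct_growth(30, 64)))      # 113
```

The per-beneficiary rate matters because it shows the growth in self-referred services is not an artifact of changes in the size of the FFS population.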
Although both self-referred and non-self-referred anatomic pathology services increased over the period of our study, the number of self-referred anatomic pathology services decreased slightly from about 1.67 million in 2007 to about 1.65 million in 2008 before increasing about 14 percent to 1.88 million services in 2009. This decrease in 2008 corresponds to the implementation of CMS’s anti-markup rule, which limits reimbursement for anatomic pathology services in certain self-referral arrangements. In contrast, the number of non-self-referred anatomic pathology services increased every year during the period we studied, with the largest annual increase (about 13 percent) in 2008. While Medicare expenditures for self-referred and non-self-referred anatomic pathology services grew from 2004 through 2010, expenditures for the self-referred services increased at a faster rate. Specifically, expenditures for self-referred anatomic pathology services grew about 164 percent from 2004 to 2010, increasing from about $75 million in 2004 to $199 million in 2010 (see fig. 3). In contrast, non-self-referred anatomic pathology expenditures increased about 57 percent, from $473 million to about $741 million. Consistent with the overall trend, the proportion of anatomic pathology services that were self-referred increased for the three provider specialties—dermatology, gastroenterology, and urology—that accounted for over 90 percent of self-referred anatomic pathology services in 2010 (see table 1). For example, the proportion of anatomic pathology services self-referred by dermatologists increased from 24 percent in 2004 to about 29 percent in 2010. Self-referring providers in 2010 generally referred more anatomic pathology services on average than providers who did not self-refer these services, even after accounting for differences in specialty, number of Medicare FFS beneficiaries seen, patient characteristics, or geography.
Providers’ referrals for anatomic pathology services substantially increased the year after they began to self-refer. Across the three provider specialties—dermatology, gastroenterology, and urology—that refer the majority of anatomic pathology services, we found that in 2010, self-referring providers referred more anatomic pathology services, on average, than other providers, regardless of the number of Medicare FFS beneficiaries seen. Specifically, we found this pattern for dermatologists, gastroenterologists, and urologists treating small, medium, and large numbers of Medicare beneficiaries (see table 2). Notably, for all provider specialties, providers who treated a large number of Medicare FFS beneficiaries—more than 500—had the highest relative rate within each specialty. Self-referring providers generally referred more anatomic pathology services on average than non-self-referring providers because they referred more services—specimens to be examined—per biopsy procedure and, in certain cases, performed a greater number of biopsy procedures. Across the three specialties we reviewed, self-referring providers referred more services per biopsy procedure, on average, than non-self-referring providers, regardless of the number of Medicare FFS beneficiaries seen. Specifically, self-referring providers referred from 7 percent to 52 percent more services per biopsy procedure for the provider specialty and size category combinations we examined. Further, we observed a greater number of biopsy procedures performed by self-referring providers in certain cases. Specifically, self-referring dermatologists treating medium and large numbers of Medicare FFS beneficiaries performed about 8 and 38 percent more biopsy procedures on average, respectively, than non-self-referring dermatologists treating similar numbers of Medicare beneficiaries.
In the remaining provider specialty and size category combinations, the rate of biopsy procedures performed by self-referring providers was similar to that for non-self-referring providers. The higher number of referrals for anatomic pathology services among self-referring providers relative to other providers of the same size and specialty cannot, in general, be explained by differences in patient diagnoses, patient health status, other patient characteristics, or geography. Differences in referrals for anatomic pathology services between self-referring and non-self-referring providers of the same specialty treating a similar number of Medicare FFS beneficiaries could not be explained by differences in their patients’ diagnoses. Generally, we found that the types and proportions of patient diagnoses were similar for self-referring and non-self-referring providers of the same specialty. However, we found that self-referring providers referred more anatomic pathology services per biopsy procedure for nearly all—53 of 54—primary diagnoses for which beneficiaries were referred for anatomic pathology services. This pattern is particularly evident for those diagnoses that accounted for a large proportion of anatomic pathology services referred within each specialty (see table 3). For example, self-referring urology providers referred on average about 12.5 anatomic pathology services per biopsy procedure for a diagnosis of elevated prostate specific antigen (790.93), while non-self-referring urology providers referred about 8.5 anatomic pathology services per biopsy procedure for this diagnosis. For further information on the average number of anatomic pathology services referred per biopsy procedure by beneficiary primary diagnosis, see appendix III.
Differences in the number of referrals for anatomic pathology services between self-referring and non-self-referring providers of the same specialty treating similar numbers of Medicare FFS beneficiaries could generally not be explained by differences in patient health status. Specifically, for all three provider specialties we reviewed, the beneficiaries seen by self-referring providers treating a small, medium, or large number of Medicare FFS beneficiaries were of similar health status as patients seen by non-self-referring providers of the same specialty and size category (see table 4), as indicated by similar average risk scores. If self-referring providers saw relatively sicker beneficiaries, that could have explained why these providers referred more anatomic pathology services on average than other providers of the same provider specialty and size categories. Differences in the number of anatomic pathology service referrals between self-referring and non-self-referring providers of the same specialty treating similar numbers of Medicare FFS beneficiaries could not be explained by differences in the age and sex of beneficiaries. In particular, the age and sex of Medicare FFS beneficiaries were generally consistent between those beneficiaries seen by self-referring providers and those seen by non-self-referring providers for all provider specialties and size categories we examined. For further information on the average age and sex of beneficiaries seen by self-referring and non-self-referring providers of the provider specialties we examined, see appendix IV. Differences in the number of anatomic pathology service referrals between self-referring and non-self-referring providers could not generally be explained by whether a provider practiced in an urban or rural area.
Self-referring providers of the same specialty treating a similar number of Medicare beneficiaries generally referred more anatomic pathology services on average than non-self-referring providers, regardless of whether the provider practiced in an urban or rural area. For example, self-referring dermatologists and urologists treating a similar number of Medicare FFS beneficiaries had higher referral rates on average for anatomic pathology services, regardless of whether they practiced in an urban or rural location. Likewise, self-referring gastroenterologists treating a medium or large number of Medicare FFS beneficiaries referred a higher number of anatomic pathology services on average than non-self-referring gastroenterologists treating a similar number of Medicare beneficiaries, regardless of whether they practiced in an urban or rural location. For further information on referral of anatomic pathology services across provider specialties and size categories in urban and rural areas, see appendix V. Our analysis shows that, across the three provider specialties we reviewed, providers’ referrals for anatomic pathology services substantially increased the year after they began to self-refer. In our analysis we examined the number of anatomic pathology referrals made by “switchers”—those providers that did not self-refer in 2007 or 2008 but began to self-refer in 2009 and continued to do so in 2010—and compared these referrals to the number made by providers that did not begin to self-refer during this period. Providers could self-refer by setting up an in-office laboratory, contracting for laboratory services, or joining a group practice that already self-referred. We found that the switchers saw large increases in the number of anatomic pathology referrals they made from 2008 to 2010 when compared with other providers (see table 5).
Specifically, across the three provider specialties we reviewed, the switcher group of providers increased the number of anatomic pathology referrals they made from 2008 to 2010 by at least 14.0 percent and by as much as 58.5 percent. In contrast, providers that self-referred anatomic pathology services during the entire period experienced smaller changes in the number of referrals, ranging from a 1.4 percent increase to an 11.6 percent increase, depending on the provider specialty. Among providers that did not self-refer anatomic pathology services, the number of referrals the providers made for these services ranged from a decrease of 0.2 percent to an increase of 2.8 percent, depending on the provider specialty. Providers in the switcher groups for the three specialties we reviewed had an increase in the number of anatomic pathology referrals they made from 2008 to 2010 due to an increase, on average, in the number of anatomic pathology services referred per biopsy procedure. Across the three specialties we reviewed, the increase in the number of specimens submitted for examination from each biopsy procedure from 2008 to 2010 ranged from 13.3 percent to 48.9 percent. For all three specialties we reviewed, the increase in the number of anatomic pathology services referred per biopsy procedure was greater for providers in the switchers group than for providers who did not self-refer from 2008 through 2010 or those providers who self-referred for all 3 years. The increase in anatomic pathology referrals for providers that began self-referring in 2009 cannot be explained exclusively by factors such as providers joining practices with higher patient volumes, different patient populations, or different practice cultures. 
Specifically, providers that remained in the same practice from 2007 through 2010, but began self-referring in 2009, also had a larger increase in the number of anatomic pathology referrals than did providers that did not change their self-referral status. The increase in the number of anatomic pathology services referred by providers in the switcher group that met this criterion from 2008 to 2010 ranged from 6.8 percent to 38.6 percent, depending on the provider specialty. We estimate that Medicare spent about $69 million more in 2010 than the program would have spent if self-referring providers had performed biopsy procedures at the same rate as, and referred the same number of services per biopsy procedure as, non-self-referring providers of the same provider size and specialty (see fig. 4). This additional spending can be attributed to the fact that self-referring providers in the three provider specialties we examined referred about 918,000 more anatomic pathology services in 2010. In 2013, CMS reduced its payment of anatomic pathology services because it determined that fewer resources—equipment, supplies, and nonphysician staff—were required to prepare anatomic pathology services. If the lower 2013 Medicare reimbursement rates for anatomic pathology services had been in effect in 2010, Medicare would have spent approximately $48 million more than it would have if self-referring providers had performed biopsy procedures at the same rate as, and referred the same number of services per biopsy procedure as, non-self-referring providers of the same provider size and specialty. This calculation likely underestimates the total amount of additional Medicare spending that can be attributed to self-referring providers because we did not include all Medicare providers in our analysis. Specifically, we limited our analysis to anatomic pathology services referred by dermatologists, gastroenterologists, and urologists.
These specialties account for approximately 64 percent of anatomic pathology services referred across all settings and about 90 percent of all self-referred anatomic pathology services in 2010. Anatomic pathology services are vital services that help providers diagnose disease and guide treatment options for patient care. Proponents of self-referral contend that the ability of providers to self-refer anatomic pathology services has the potential benefit of more rapid diagnoses and better coordination of care. Our review indicates that across the major provider specialties that refer beneficiaries for anatomic pathology services, self-referring providers generally referred more anatomic pathology services on average than other providers of the same specialty treating similar numbers of Medicare patients. This increase is due to a greater number of specimens submitted for examination from each biopsy procedure and, in certain cases, a greater number of biopsy procedures performed. This increase raises concerns, in part because biopsy procedures, although generally safe, can result in serious complications for Medicare beneficiaries. Further, our analysis shows that across these provider specialties, providers’ referrals for anatomic pathology services substantially increased the year after they began to self-refer. The relatively higher rate of anatomic pathology services among self-referring providers cannot be explained by patient diagnosis, patient health status, or geographic location. Taken together, this suggests that financial incentives for self-referring providers were likely a major factor driving the increase in anatomic pathology referrals. In 2010, providers who self-referred made an estimated 918,000 more referrals for anatomic pathology services than they likely would have if they were not self-referring. Notably, these additional referrals cost Medicare about $69 million in 2010 alone.
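The $69 million figure rests on a simple counterfactual: the number of services referred above the non-self-referring benchmark, multiplied by Medicare's payment per service. The sketch below is illustrative only; the roughly $75 average payment is inferred from the report's totals rather than a reported figure, and the actual estimate was computed from claims data at the provider level.

```python
def additional_spending(excess_services, avg_payment_per_service):
    """Spending attributable to services referred above the counterfactual level.
    Illustrative sketch; not the actual claims-based methodology."""
    return excess_services * avg_payment_per_service

# About 918,000 additional self-referred services in 2010; an assumed
# average payment of about $75 per service (inferred, not reported)
# reproduces the reported total.
print(additional_spending(918_000, 75))  # 68850000, about $69 million
```

The same structure explains the $48 million figure: applying the lower 2013 reimbursement rates shrinks the average payment per excess service while the service counts stay fixed.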
To the extent that these additional services are unnecessary, avoiding them could result in savings to Medicare and to beneficiaries. Despite the potential safety and financial implications of unnecessary anatomic pathology services, CMS does not have policies to address how self-referral affects the utilization of and expenditures for anatomic pathology services. CMS does not currently have the ability to identify anatomic pathology services that are self-referred, so the agency cannot track the extent to which anatomic pathology services are self-referred or identify services that may be unnecessary. Specifically, Medicare claims do not include an indicator or “flag” that identifies whether services are self-referred or non-self-referred. Thus, CMS does not currently have a method for easily identifying such services and cannot determine the effect of self-referral on utilization and expenditures for anatomic pathology services. Including a self-referral flag on Medicare Part B claims submitted by providers who bill for anatomic pathology services is likely the easiest and most cost-effective approach. If CMS could readily identify self-referred anatomic pathology services, the agency may be better positioned to identify potentially inappropriate utilization of biopsy procedures. CMS could, for example, consider performing targeted audits of providers who perform a higher average number of biopsy procedures compared with providers of the same specialty treating a similar number of Medicare beneficiaries. Given our report findings, CMS may want to initially focus its efforts on self-referring dermatologists who treated a larger number of Medicare beneficiaries. While providers have discretion in determining the number of tissue samples that become specimens, CMS’s current payment system provides a financial incentive for providers to refer a higher number of specimens—or anatomic pathology services—per biopsy procedure.
Providers can double their payment for anatomic pathology services, for example, by submitting four specimens from four tissue samples instead of combining the four tissue samples into two specimens. However, providers differ on whether or to what extent tissue samples can be combined in creating a specimen or if each tissue sample must become a specimen. CMS has already implemented a payment approach for one specific biopsy procedure—prostate saturation biopsy—that pays providers through a single payment rather than paying for each specimen individually within a given range of anatomic pathology services, such as 1 to 20 specimens. However, this policy does not apply to anatomic pathology services from other biopsy procedures. CMS could expand this payment approach to other biopsy procedures and associated anatomic pathology services. In order to improve CMS’s ability to identify self-referred anatomic pathology services and help CMS avoid unnecessary increases in these services, we recommend that the Administrator of CMS take the following three actions: 1. Insert a self-referral flag on Medicare Part B claim forms and require providers to indicate whether the anatomic pathology services for which the provider bills Medicare are self-referred or not. 2. Determine and implement an approach to ensure the appropriateness of biopsy procedures performed by self-referring providers. 3. Develop and implement a payment approach for anatomic pathology services that would limit the financial incentives associated with referring a higher number of specimens—or anatomic pathology services—per biopsy procedure. We provided a draft of this report to HHS, which oversees CMS, for comment. HHS provided written comments, which are reprinted in appendix VI. We also obtained comments from representatives from four professional associations selected because they represent an array of stakeholders with specific involvement in anatomic pathology services. 
Three associations provided oral comments: the College of American Pathologists (CAP), which represents pathologists; the American Academy of Dermatology Association (AADA), which represents dermatologists; and the American Gastroenterological Association (AGA), which represents gastroenterologists. The American Urological Association (AUA), which represents urologists, provided written comments. We summarize and respond to comments from HHS and representatives from the four professional associations in the following sections. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix VI. In its comments, HHS stated that it concurred with, and had addressed, one of our recommendations, but did not concur with our other two recommendations. HHS provided few comments on our findings that self-referring providers referred substantially more anatomic pathology services than non-self-referring providers. HHS stated that it concurred with, and has already addressed, our recommendation that CMS develop and implement a payment approach for anatomic pathology services that would limit the financial incentives associated with referring a higher number of anatomic pathology services per biopsy procedure. According to HHS, the payment revaluation for anatomic pathology services in 2013 decreased payment by approximately 30 percent and significantly reduced the financial incentives associated with self-referral for these services. We are pleased that CMS examines and revalues HCPCS codes to ensure that payment for services matches the resources involved and adjusts payment to the extent needed. However, the payment revaluation that occurred in 2013 does not address the higher referral of anatomic pathology services we found associated with self-referring providers. 
Although no consensus exists on the number and type of tissue samples that become a specimen—an anatomic pathology service—the current payment system pays more if providers create more specimens from the same number of samples. We continue to believe that CMS should develop a payment approach that addresses the incentive to provide more services. HHS did not concur with our recommendation that CMS insert a self-referral flag on the Medicare Part B claims form and require providers to indicate whether the anatomic pathology services for which a provider bills Medicare are self-referred or not. In its response, HHS did not provide reasons for not concurring with this recommendation, but stated that the President's fiscal year 2014 budget proposal includes a provision to exclude certain services from the in-office ancillary services exception. HHS added that anatomic pathology services may share some characteristics with the services mentioned in the proposal. To the extent that self-referral for anatomic pathology services continues to be permitted, we believe that including an indicator or flag on the claims would likely be the easiest and most cost-effective approach to improve CMS's ability to identify self-referred anatomic pathology services. Such a flag would allow CMS to monitor the behavior of self-referring providers and could be helpful to CMS in answering broader policy questions on self-referral. HHS did not concur with our recommendation that CMS determine and implement an approach to ensure the appropriateness of biopsy procedures performed by self-referring providers. In its response, HHS noted that it would be difficult to make recommendations regarding whether services are appropriate without reviewing large numbers of claims, reporting that the 918,000 instances of self-referral that we identified would need to be reviewed.
Further, the agency stated that it does not believe that this recommendation will address overutilization that occurs as a result of self-referral. We do not suggest or intend that CMS review every anatomic pathology service to determine whether it is appropriate. Self-referral, however, could be a factor CMS considers in its ongoing efforts to identify and address inappropriate use of Medicare services. As noted in the report, CMS could, for example, consider performing targeted audits of providers that perform a higher average number of biopsy procedures, compared to providers of the same specialty treating similar numbers of Medicare beneficiaries. In this regard, the claims flag that we also recommended to identify self-referred services would facilitate such audits. On the basis of HHS's written response to our report, we are concerned that HHS does not appear to recognize the need to monitor the self-referral of anatomic pathology services on an ongoing basis and determine those services that may be inappropriate or unnecessary. HHS did not comment on our key finding that providers' referrals for anatomic pathology services across the three specialties we examined substantially increased the year after they began to self-refer. Nor did HHS comment on our estimate that additional referrals for anatomic pathology services from self-referring providers cost Medicare about $69 million in 2010, or $48 million based on the 2013 payment rates. Given these findings, we continue to believe that CMS should take steps to monitor the utilization of anatomic pathology services and ensure that the services for which Medicare pays are appropriate. By not monitoring the appropriateness of these services, CMS is missing an opportunity to save Medicare expenditures. Representatives from CAP expressed concern that our methodology to identify self-referral missed certain self-referral arrangements for anatomic pathology services and that our findings understate effects from self-referral.
According to the CAP representatives, because our methodology did not identify providers who self-refer the PC only and did not include financial relationships that do not share TINs, effects from self-referral are greater than our findings suggest. As noted in the report, we excluded claims with only a PC from our finding on utilization and expenditures trends for anatomic pathology services because we could not reliably determine that they were performed in the physician office or independent laboratory. We identified financial relationships among providers using TINs, which would identify the provider, the provider's employer, or another entity to which the provider reassigns payment. To the extent that providers self-refer only the PCs of anatomic pathology services or self-refer to entities with which they do not share TINs, differences between self-referring and non-self-referring providers would be greater, and our estimate of the differences would be more conservative. CAP representatives also raised questions about self-referral that our report did not address, such as why the report did not examine cancer detection rates or whether anatomic pathology services should be included in the in-office ancillary services exception. These issues were outside the report's objectives. While the representatives from CAP agreed with the recommendation to include a self-referral flag on the Medicare Part B claims form, they disagreed with our other recommendations, stating that they would not sufficiently address the report findings. We believe that our recommendations incorporate actions that address the problems we identified.
Representatives from the AADA stated that dermatologists should continue to be allowed to prepare and review their own anatomic pathology services because they receive considerable training as part of their education, and they offered several possible explanations for the additional anatomic pathology services referred and biopsy procedures performed by self-referring providers. For example, they raised the possibility that the increase in anatomic pathology services referred and biopsy procedures performed by providers in the switcher group was due to increases in patient volume, providers joining a larger group practice, or providers hiring a mid-level practitioner who allowed them to see more patients. They also raised the possibility that providers in the switcher group became further specialized, resulting in a change in the number and type of diagnoses for their patients. Also, the AADA reported that our reliance on TINs to identify self-referral could be problematic because providers working in large, university-based practices would be flagged as self-referring despite lacking a financial incentive to provide more services. As noted in the report, the increase in anatomic pathology referrals for providers that began self-referring in 2009 cannot be explained exclusively by factors such as providers joining practices with higher patient volumes, different patient populations, or different practice cultures. Specifically, providers that remained in the same practice from 2007 through 2010, but began self-referring in 2009, had a bigger increase in the number of anatomic pathology services referred than providers who did not change their self-referral status.
Further, we found that the types and proportions of patient diagnoses were similar for self-referring and non-self-referring providers of the same specialty and that self-referring providers referred more anatomic pathology services per biopsy procedure for nearly all—53 of 54—primary diagnoses for which beneficiaries were referred for anatomic pathology services. To the extent that providers who share a TIN, but do not have a financial incentive to refer more services, are counted as self-referring, our findings would likely underestimate differences between self-referring and non-self-referring providers and would thus provide a conservative estimate of the effects of self-referral. The AADA agreed with our recommendation to determine an approach to ensure the appropriateness of biopsy procedures, but disagreed with our recommendation for a payment approach limiting the financial incentives associated with a higher number of services per biopsy procedure. Specifically, the AADA expressed concern about any disincentives for dermatologists to perform biopsy procedures. As noted in the report, increases in the number of anatomic pathology services per biopsy procedure were primarily responsible for the growth of anatomic pathology services referred by providers in the switcher group from 2008 to 2010 across provider specialties. We continue to believe that CMS should develop and implement a payment approach for anatomic pathology services that would limit the financial incentives associated with referring a higher number of services per biopsy procedure. Representatives from the AGA asked for further information about the providers in our analysis (particularly gastroenterologists in the switcher group) and the diagnoses of the beneficiaries referred for anatomic pathology services, and offered several possible explanations for the additional anatomic pathology services referred and biopsy procedures performed by self-referring providers.
Specifically, the AGA raised the possibility that providers in the switcher group joined larger groups that perform more anatomic pathology services or changed the types of biopsy procedures they performed. They also reported that larger practices, which are more likely to self-refer, are more likely to have formal peer review, which could result in more anatomic pathology services referred. Finally, the AGA noted that an appropriate number of anatomic pathology services is not known. We have included an appendix with additional information on the number of services per biopsy procedure for the most common diagnoses for which beneficiaries were referred for anatomic pathology services in 2010. As noted, the types and proportions of diagnoses for which beneficiaries were referred for anatomic pathology services were similar for self-referring and non-self-referring providers. We found that self-referring providers referred more services per biopsy procedure on average than non-self-referring providers for nearly all of these diagnoses. We acknowledge that the appropriate number of referrals for these services is not known, but the consistent pattern of self-referring providers' higher use suggests that additional scrutiny is warranted. The AGA agreed with our recommendations, but did not think the recommendation on a payment approach for anatomic pathology services limiting the financial incentives associated with referring a higher number of specimens per biopsy procedure was as applicable to providers of their specialty. The AUA agreed with our recommendation to identify self-referred services but did not agree with our other recommendations to examine the appropriateness of biopsy procedures performed by self-referring providers or develop a payment approach that would limit the financial incentives for referring a higher number of anatomic pathology services.
The AUA said that there were problems with the study's design and data-gathering methodology, originating from problems with identifying self-referral, which led it to question the validity of the report findings. The AUA did not provide further detail on its methodological concerns. We believe our methodology to identify self-referred services and classify providers as self-referring using 100 percent of Medicare Part B claims is a reasonable and valid approach. In designing our methodology, we consulted with officials from CMS, specialty societies, and other researchers. Our approach is similar to the one used by MedPAC for its study of the effect of physician self-referral on use of imaging services. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. As part of our work, we also analyzed additional anatomic pathology services—known as special stains—that providers may use in conjunction with anatomic pathology services to enhance their ability to make a diagnosis. We focused our special stains analysis on services with Healthcare Common Procedure Coding System (HCPCS) codes 88312, 88313, and 88342 that were used in conjunction with the anatomic pathology service, HCPCS code 88305.
We considered special stains billed on the same date, by the same provider, for the same beneficiary as those used in conjunction with anatomic pathology services. In 2010, expenditures under the physician fee schedule for these three special stain services totaled approximately $387 million. Providers may utilize special stains with a HCPCS code of 88312 on specimens to detect the presence of infectious organisms such as bacteria and fungus, special stains with a HCPCS code of 88313 to detect the presence of iron, and special stains with a HCPCS code of 88342 to identify the origin of a cancer. Use of special stains is determined by the provider referring anatomic pathology services, the pathologist interpreting the anatomic pathology service, or both. We examined (1) trends in the number of and expenditures for self-referred and non-self-referred special stains, and (2) how the provision of special stains differs for providers who self-refer when compared with other providers. Similar to anatomic pathology services, the number of both self-referred and non-self-referred special stains increased from 2004 through 2010, with self-referred special stains increasing at a faster rate than services that were not self-referred. Specifically, the number of special stains that were self-referred increased from about 60,000 in 2004 to about 340,000 in 2010, an increase of more than 400 percent (see fig. 5). In contrast, non-self-referred special stains grew from about 710,000 services to about 1.80 million services, an increase of about 150 percent. Further, we found that self-referred special stains increased more than non-self-referred special stains for each of the three special stains we studied.
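The pairing rule described above—a special stain counts as used in conjunction with an 88305 service when billed on the same date, by the same provider, for the same beneficiary—can be sketched in a few lines of Python. The claim records and field names are illustrative, not the actual Carrier file layout:

```python
ANATOMIC = {"88305"}
SPECIAL_STAINS = {"88312", "88313", "88342"}

def stains_used_with_88305(claims):
    """Return special-stain claims billed on the same date, by the same
    provider, for the same beneficiary as an 88305 service."""
    keys_with_88305 = {
        (c["provider"], c["beneficiary"], c["date"])
        for c in claims if c["hcpcs"] in ANATOMIC
    }
    return [
        c for c in claims
        if c["hcpcs"] in SPECIAL_STAINS
        and (c["provider"], c["beneficiary"], c["date"]) in keys_with_88305
    ]

# Illustrative records: only the 88312 claim shares a provider,
# beneficiary, and date with an 88305 claim.
claims = [
    {"provider": "P1", "beneficiary": "B1", "date": "2010-03-01", "hcpcs": "88305"},
    {"provider": "P1", "beneficiary": "B1", "date": "2010-03-01", "hcpcs": "88312"},
    {"provider": "P2", "beneficiary": "B2", "date": "2010-03-01", "hcpcs": "88342"},
]
assert [c["hcpcs"] for c in stains_used_with_88305(claims)] == ["88312"]
```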
Similar to expenditures for anatomic pathology services, Medicare's expenditures for both self-referred and non-self-referred special stains used in conjunction with HCPCS code 88305 also grew rapidly during the period we studied, with the greater rate of increase among expenditures for self-referred services. Specifically, expenditures for self-referred special stains grew more than six-fold, increasing from about $4 million in 2004 to about $30 million in 2010 (see fig. 6). In comparison, expenditures for non-self-referred special stains more than tripled during these years, growing from about $46 million in 2004 to about $162 million in 2010. For two of the three specialties we examined, self-referring providers referred a higher number of special stains on average than non-self-referring providers of the same specialty treating similar numbers of Medicare fee-for-service (FFS) beneficiaries. Specifically, self-referring gastroenterologists and urologists referred more special stains on average than non-self-referring gastroenterologists and urologists treating a similar number of Medicare beneficiaries. We also found this pattern for self-referring dermatologists treating small numbers of Medicare FFS beneficiaries. The provider specialty and size category combinations we studied in which self-referring providers referred more special stains on average than non-self-referring providers represented about 86 percent of special stains referred by these specialties. Our analysis shows that, for two of the three specialties we reviewed, providers' referrals for special stains substantially increased the year after they began to self-refer. Specifically, urologists and gastroenterologists that were "switchers"—those providers that did not self-refer in 2007 or 2008 but began to self-refer in 2009 and continued to do so in 2010—saw larger increases in the number of special stain referrals they made relative to other providers (see table 7).
Dermatologists in the switcher group had a 17.6 percent increase in the number of special stains referred from 2008 to 2010, but this was roughly equivalent to the increase for providers in the non-self-referring group and only slightly higher than the increase for providers in the self-referring group. This section describes the scope and methodology used to analyze our three objectives: (1) trends in the number of and expenditures for self-referred and non-self-referred anatomic pathology services from 2004 through 2010, (2) how the provision of anatomic pathology services may differ for providers who self-refer when compared with other providers, and (3) the implications of self-referral for Medicare spending on anatomic pathology services. For all three objectives, we used the Medicare Part B Carrier File, which contains final action Medicare Part B claims for noninstitutional providers, such as physicians. Claims can be for one or more services or for individual service components. Each service or service component is identified on a claim by its Healthcare Common Procedure Coding System (HCPCS) code, which the Centers for Medicare & Medicaid Services (CMS) assigns to products, supplies, and services for billing purposes. For the purposes of this report, "anatomic pathology" services refer to HCPCS 88305 services, and "special stains" refer to HCPCS 88312, 88313, and 88342 services that were used in conjunction with these anatomic pathology services. Under the in-office ancillary services exception, a provider may refer a beneficiary for certain services to an entity with which he or she has a financial relationship without implicating the Stark law. Because there is no indicator or "flag" on the claim that identifies whether services were self-referred or non-self-referred, we developed a claims-based methodology to identify services as either self-referred or non-self-referred.
Specifically, we classified services as self-referred if the provider that referred the beneficiary for an anatomic pathology service and the provider that performed the anatomic pathology service were identical or had a financial relationship. We used the taxpayer identification number (TIN), an identification number used by the Internal Revenue Service, to determine providers' financial relationships. The TIN could be that of the provider, the provider's employer, or another entity to which the provider reassigns payment. To identify the TINs of referring and performing providers, we created a crosswalk of the performing provider's unique physician identification number or national provider identifier (NPI) to the TIN that appeared on the claim and used that to assign TINs to the referring and performing providers. Some providers may be associated with TINs with which they do not have a direct or indirect financial relationship and thus would not have the same incentives as other self-referring providers. We anticipate that relatively few providers in our self-referring group meet this description, but to the extent that they do, it may have limited the differences we found in utilization and expenditure rates between self-referring and non-self-referring providers. We considered global services and separately-billed TCs to be self-referred if one or more of the TINs of the referring and performing provider matched. However, we did not consider separately-billed PCs to be self-referred, even if they met the same criterion. We did not count claims with a PC only as self-referred because we could not reliably determine whether they corresponded to anatomic pathology services that were performed in a provider's office or laboratory. Further, we excluded claims where a HCPCS code of 88305 was billed with another anatomic pathology service and a special stain, because we could not determine which service required the use of the special stain.
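The TIN-matching rule above can be expressed as a small classification function. This is a minimal sketch assuming illustrative field names; the NPI-to-TIN crosswalk step is omitted:

```python
def is_self_referred(claim):
    """Sketch of the claims-based rule: a global service or separately
    billed TC counts as self-referred when the referring and performing
    providers share at least one TIN; a separately billed PC is never
    counted, because its site of service cannot be reliably determined.
    (Field names are illustrative, not the actual claim layout.)"""
    if claim["component"] == "PC":
        return False
    return bool(set(claim["referring_tins"]) & set(claim["performing_tins"]))

# A global claim whose referrer and performer share a TIN is self-referred.
assert is_self_referred({"component": "global",
                         "referring_tins": ["11-111"],
                         "performing_tins": ["11-111", "22-222"]})
# The same TIN overlap on a separately billed PC is not counted.
assert not is_self_referred({"component": "PC",
                             "referring_tins": ["11-111"],
                             "performing_tins": ["11-111"]})
```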
As part of developing this claims-based methodology to identify self-referred services, we interviewed officials from CMS, provider groups, and other researchers. To describe the trends in the number of and expenditures for self-referred anatomic pathology services from 2004 through 2010, we used the Medicare Part B Carrier file to calculate utilization and expenditures for self-referred and non-self-referred anatomic pathology services, both in aggregate and per beneficiary. We limited this portion of our analysis to global claims or claims for a separately-billed TC for anatomic pathology services, which indicate that the performance of the anatomic pathology service was billed under the physician fee schedule. As a result, the universe for this portion of our analysis is those anatomic pathology services performed in a provider's office or in an independent clinical laboratory, settings that both bill for the performance of an anatomic pathology service under the physician fee schedule. We focused on these settings because the financial incentive for providers to self-refer is most direct when the service is performed in a physician office. Further, we limited our analysis to self-referral of the preparation—as opposed to the interpretation—of these services, because we could determine the site of service as a physician's office or laboratory. Accordingly, we did not examine self-referral of the interpretation of these services, as we could not reliably determine their site of service. Approximately two-thirds of all anatomic pathology services billed under the physician fee schedule were performed in a physician's office or clinical laboratory. To calculate the number of Medicare beneficiaries from 2004 through 2010 needed for per beneficiary calculations, we used the Denominator File, a database that contains enrollment information for all Medicare beneficiaries enrolled in a given year.
We also examined the utilization of self-referred anatomic pathology services by provider specialty for 2004 and 2010. To determine the extent to which the provision of anatomic pathology services differs for providers who self-refer when compared with other providers, we first classified providers based on the type of referrals they made. Specifically, we classified providers as self-referring if they self-referred at least one beneficiary for an anatomic pathology service. We classified providers as non-self-referring if they referred a beneficiary for an anatomic pathology service, but did not self-refer any of the services. We assigned to each provider the anatomic pathology services he or she referred, including those for the performance of an anatomic pathology service and those for the interpretation of the anatomic pathology service result. If the TC and PC were billed separately for the same beneficiary, we counted these two components as one referred service. As a result, we counted all services that a provider referred, regardless of whether they were performed in a provider office, independent clinical laboratory, or other setting. We classified anatomic pathology services as being from the same biopsy procedure if the services were referred by the same provider for the same beneficiary on the same day. We then performed two separate analyses. First, we compared the provision—that is, the number of referrals made—of anatomic pathology services by self-referring providers and non-self-referring providers in 2010, disaggregated by the number of Medicare beneficiaries seen by the provider, provider specialty, geography (i.e., urban or rural), and patient characteristics. We used the number of unique Medicare fee-for-service (FFS) beneficiaries to which providers provided services in 2010 as a proxy for practice size, which we identified using 100 percent of providers' claims from the Medicare Part B Carrier file.
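The provider classification and the same-provider, same-beneficiary, same-day grouping into biopsy procedures can be sketched as follows. Record fields are illustrative assumptions, not the actual file layout:

```python
from collections import defaultdict

def classify_providers(referrals):
    """Self-referring: at least one self-referred service;
    non-self-referring: referred services but self-referred none.
    (Field names are illustrative.)"""
    flags = defaultdict(list)
    for r in referrals:
        flags[r["referring_provider"]].append(r["self_referred"])
    return {p: "self-referring" if any(f) else "non-self-referring"
            for p, f in flags.items()}

def services_per_biopsy_procedure(referrals):
    """Services referred by the same provider, for the same beneficiary,
    on the same day count as one biopsy procedure; the value is the
    number of anatomic pathology services from that procedure."""
    counts = defaultdict(int)
    for r in referrals:
        counts[(r["referring_provider"], r["beneficiary"], r["date"])] += 1
    return dict(counts)
```

For example, two services referred by the same provider for the same beneficiary on the same day would be grouped as one biopsy procedure with two anatomic pathology services.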
We defined urban settings as metropolitan statistical areas, a geographic entity defined by the Office of Management and Budget as a core urban area of 50,000 or more population. We used rural-urban commuting area codes—a Census tract-based classification scheme that utilizes the standard Bureau of Census Urbanized Area and Urban Cluster definitions in combination with work commuting information to characterize all of the nation's Census tracts regarding their rural and urban status—to identify providers as practicing in metropolitan statistical areas. We considered all other settings to be rural. We identified providers' specialties on the basis of the specialties listed on the claims. These specialty codes include physician specialties, such as dermatology and urology, and nonphysician provider types, such as nurse practitioners and physician assistants. We also examined the extent to which the characteristics of the patient populations served by self-referring and non-self-referring providers differed. We used CMS's risk score file to identify average risk score, which serves as a proxy for beneficiary health status. Information on additional patient characteristics, such as age and sex, came from the Medicare Part B Carrier file claims. Second, we determined the extent to which the number of anatomic pathology service referrals made by providers changed after they began to self-refer. Specifically, we identified a group of providers that began to self-refer anatomic pathology services in 2009. We refer to this group of providers as "switchers" because it represents providers that did not self-refer in 2007 or 2008, but did self-refer in 2009 and 2010. We then calculated the change in the number of anatomic pathology referrals made from 2008 (i.e., the year before the switchers began self-referring) to 2010 (i.e., the year after they began self-referring).
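The switcher definition can be expressed as a small classification function. The input format—a mapping from year to whether the provider self-referred that year—is an illustrative assumption:

```python
def referral_group(self_referred_by_year):
    """Mirror of the report's comparison groups, given a mapping like
    {2007: False, 2008: False, 2009: True, 2010: True} (illustrative)."""
    y = self_referred_by_year
    years = (2007, 2008, 2009, 2010)
    if not (y[2007] or y[2008]) and y[2009] and y[2010]:
        return "switcher"            # began self-referring in 2009
    if all(y[k] for k in years):
        return "self-referring"      # self-referred in all four years
    if not any(y[k] for k in years):
        return "non-self-referring"  # never self-referred
    return "excluded"                # any other pattern falls outside the groups

assert referral_group({2007: False, 2008: False, 2009: True, 2010: True}) == "switcher"
```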
We compared the change in the number of referrals made by these providers to the change in the number of referrals made over the same time period by providers who did not change whether or not they self-referred anatomic pathology services. Specifically, we compared the change in the number of referrals made by switchers to those made by (1) self-referring providers—providers that self-referred in years 2007 through 2010, and (2) non-self-referring providers—providers that did not self-refer in years 2007 through 2010. For each provider, we also identified the most common TIN to which they referred anatomic pathology services. If the TIN was the same for all 4 years, we assumed that the provider remained part of the same practice for all 4 years. We calculated the number of referrals in 2008 and 2010 separately for providers that met this criterion. To determine the implications of self-referral for Medicare spending on anatomic pathology services, we summed the number of and expenditures for all anatomic pathology services performed in 2010 across the three provider specialties we reviewed. We then calculated the number of and expenditures for anatomic pathology services if self-referring providers had performed biopsy procedures at the same rate as, and referred the same number of services per biopsy procedure as, non-self-referring providers of the same provider size and specialty. We repeated this analysis incorporating an approximation of the payment reduction for anatomic pathology services that became effective in 2013. We took several steps to ensure that the data used to produce this report were sufficiently reliable. Specifically, we assessed the reliability of the CMS data we used by interviewing officials responsible for overseeing these data sources, including CMS and Medicare contractor officials. We also reviewed relevant documentation and examined the data for obvious errors, such as missing values and values outside of expected ranges.
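The counterfactual spending comparison reduces to simple arithmetic: actual services minus the services that would have been referred at non-self-referring rates. All rates below are made-up illustrative values, not figures from the report:

```python
def estimated_excess_services(n_beneficiaries, self_rates, nonself_rates):
    """Each rates tuple is (biopsy procedures per beneficiary,
    anatomic pathology services per biopsy procedure); all numbers
    here are illustrative, not report figures."""
    actual = n_beneficiaries * self_rates[0] * self_rates[1]
    counterfactual = n_beneficiaries * nonself_rates[0] * nonself_rates[1]
    return actual - counterfactual

# If self-referring providers perform 1.2 procedures per beneficiary at
# 2.0 services each, versus 1.0 procedures at 1.5 services each for
# non-self-referring providers, 1,000 beneficiaries yield 900 excess services.
assert estimated_excess_services(1000, (1.2, 2.0), (1.0, 1.5)) == 900.0
```

Applying the 2013 payment rates to the same service counts is what produces the report's lower ($48 million rather than $69 million) expenditure estimate.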
We determined that the data were sufficiently reliable for the purposes of our study, as they are used by the Medicare program as a record of payments to health care providers. As such, they are subject to routine CMS scrutiny. We conducted this performance audit from January 2012 through June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix IV: Age and Sex of Medicare Beneficiaries for Select Provider Specialties by Practice Size in 2010. [Table omitted: age and sex of Medicare beneficiaries, by provider specialty (e.g., dermatology) and number of unique Medicare FFS beneficiaries.] The number of unique Medicare FFS beneficiaries refers to the number of unique beneficiaries that received at least one service from a provider. In addition to the contact named above, Thomas Walke, Assistant Director; Todd D. Anderson; Manuel Buentello; Krister Friday; Gregory Giusto; Brian O'Donnell; and Daniel Ries made key contributions to this report.
Questions have been raised about self-referral's role in Medicare Part B expenditures' rapid growth. Self-referral occurs when providers refer patients to entities in which they or their family members have a financial interest. Services that can be self-referred under certain circumstances include anatomic pathology—the preparation and examination of tissue samples to diagnose disease. GAO was asked to examine the prevalence of anatomic pathology self-referral and its effect on Medicare spending. This report examines (1) trends in the number of and expenditures for self-referred and non-self-referred anatomic pathology services, (2) how provision of these services may differ on the basis of whether providers self-refer, and (3) implications of self-referral for Medicare spending. GAO analyzed Medicare Part B claims data from 2004 through 2010 and interviewed officials from the Centers for Medicare & Medicaid Services (CMS) and other stakeholders. GAO developed a claims-based approach to identify self-referred services because Medicare claims lack such an indicator. Self-referred anatomic pathology services increased at a faster rate than non-self-referred services from 2004 to 2010. During this period, the number of self-referred anatomic pathology services more than doubled, growing from 1.06 million services to about 2.26 million services, while non-self-referred services grew about 38 percent, from about 5.64 million services to about 7.77 million services. Similarly, the growth rate of expenditures for self-referred anatomic pathology services was higher than for non-self-referred services. Three provider specialties—dermatology, gastroenterology, and urology—accounted for 90 percent of referrals for self-referred anatomic pathology services in 2010. Referrals for anatomic pathology services by dermatologists, gastroenterologists, and urologists substantially increased the year after they began to self-refer.
Providers that began self-referring in 2009--referred to as switchers--increased their anatomic pathology referrals in 2010, relative to 2008 (the year before they began self-referring), by an average of 14.0 to 58.5 percent across these provider specialties. In comparison, increases in anatomic pathology referrals over this period were much lower for providers who continued to self-refer or who never self-referred. Thus, the increase in anatomic pathology referrals for switchers was not due to a general increase in the use of these services among all providers. GAO's examination of all providers that referred an anatomic pathology service in 2010 showed that self-referring providers of the specialties we examined referred more services on average than non-self-referring providers. Differences in referrals for these services generally persisted after accounting for geography and patient characteristics such as health status and diagnosis. These analyses suggest that financial incentives for self-referring providers were likely a major factor driving the increase in referrals. GAO estimates that in 2010, self-referring providers likely referred over 918,000 more anatomic pathology services than they would have if they had performed biopsy procedures at the same rate as, and referred the same number of services per biopsy procedure as, non-self-referring providers. These additional referrals for anatomic pathology services cost Medicare about $69 million. To the extent that these additional referrals were unnecessary, avoiding them could result in savings to Medicare and beneficiaries, who share in the cost of services. CMS should identify self-referred anatomic pathology services and address their higher use.
The Department of Health and Human Services, which oversees CMS, agreed with GAO's recommendation that CMS address higher use of self-referral through a payment approach, but disagreed with GAO's other two recommendations to identify self-referred services and address their higher use. GAO believes the recommended actions could result in Medicare savings.
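GAO's counterfactual arithmetic, comparing the services self-referring providers actually referred with what they would have referred at non-self-referring providers' services-per-biopsy rate, can be sketched in a few lines. All input figures below are hypothetical placeholders for illustration, not GAO's actual data.

```python
# Hedged sketch of GAO's counterfactual estimate: how many extra anatomic
# pathology services did self-referring providers generate, relative to the
# non-self-referring benchmark rate? All figures are hypothetical.

def excess_referrals(sr_biopsies, sr_services, nsr_services_per_biopsy):
    """Services referred beyond the non-self-referring benchmark."""
    expected = sr_biopsies * nsr_services_per_biopsy
    return sr_services - expected

# Hypothetical inputs (placeholders, not GAO data)
sr_biopsies = 400_000      # biopsy procedures performed by self-referrers
sr_services = 2_260_000    # anatomic pathology services they referred
nsr_rate = 3.35            # services per biopsy among non-self-referrers

extra = excess_referrals(sr_biopsies, sr_services, nsr_rate)
avg_payment = 75.0         # hypothetical Medicare payment per service
print(f"excess services: {extra:,.0f}, added cost: ${extra * avg_payment:,.0f}")
```

The same benchmark logic extends to GAO's biopsy-rate adjustment; this sketch shows only the services-per-biopsy component.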
The threat of terrorism against the United States has increased, according to the intelligence community. The experts believe that aviation is likely to remain an attractive target for terrorists well into the foreseeable future. Until the early 1990s, the Federal Bureau of Investigation (FBI), the State Department, FAA, the Department of Transportation (DOT), and airline officials had maintained that the threat of terrorism was far greater overseas than in the United States. However, the World Trade Center bombing and the recent convictions of individuals charged with plotting to bomb several landmarks in the New York area revealed that the international terrorist threat in the United States is more serious and more extensive than previously believed. By 1994, reports by several agencies indicated a change in the pattern of terrorism. In 1994, the State Department reported a decline in attacks worldwide by state-sponsored, secular terrorist groups but an increase in attacks by radical fundamentalist groups, who operate more autonomously. The FBI reported in the same year that the most important development in international terrorism inside the United States was the emergence of international radical terrorist groups with an infrastructure that can support terrorists’ activities. These groups are more difficult to infiltrate, and consequently, it is also more difficult to predict and prevent their attacks. As we reported in January 1994, terrorists’ activities are continually evolving and present unique challenges to FAA and law enforcement agencies. We further reported in March 1996 that the bombing of Philippines Airlines Flight 434 in December 1994, which resulted in the death of one passenger and injuries to several others, illustrated the potential extent of terrorists’ motivation and capabilities as well as the attractiveness of aviation as a target for terrorists.
According to information that was accidentally uncovered in early January 1995, this bombing was a rehearsal for multiple attacks on specific U.S. flights in Asia. Officials told us that they rarely have the advantage of a detailed, verifiable plot to target U.S. airlines. They also said that the terrorists were aware both of airports’ vulnerabilities and how existing security measures could be defeated. Even though FAA has changed security procedures as the threat has changed, the domestic and international aviation system continues to have numerous vulnerabilities. Aviation security is a shared responsibility. The intelligence community—the Central Intelligence Agency (CIA), the National Security Agency, the FBI, among others—gathers information to prevent actions by terrorists and provides intelligence information to FAA. On the basis of this information, FAA makes judgments about the threat and establishes procedures to address it. The airlines and airports are responsible for implementing the procedures. For example, the airlines are responsible for screening passengers and property, and the airports are responsible for the security of the airport environment, including security personnel. FAA and the aviation community rely on a multifaceted approach that includes information from various intelligence and law enforcement agencies; contingency plans to meet a variety of threat levels; and the use of screening equipment, such as conventional X-ray devices and metal detectors. However, many of these measures, such as walk-through metal detectors, were primarily designed to avert hijackings during the 1970s and 1980s, as opposed to the more current threat of sophisticated attacks by terrorists that involve explosive devices. 
For flights within the United States, basic security measures include the use of walk-through metal detectors for passengers and X-ray screening of carry-on baggage; these measures are augmented by additional procedures that are based on an assessment of risk. These additional procedures are contained in the contingency plans developed by FAA in coordination with the aviation industry. FAA’s plans describe a wide range of procedures that can be invoked, depending on the nature and degree of the threat. Among these procedures are (1) passenger profiling, a method of identifying potentially threatening passengers who are then subjected to additional security measures, and (2) passenger-bag matching, a procedure to ensure that a passenger who checks a bag also boards the flight; if the passenger does not board, the bag is removed. FAA mandated higher levels of temporary security measures several times in 1995 because of the increased threat of terrorism, and the current measures in place are at the highest level invoked since the Gulf War. Because the threat of terrorism had been considered greater overseas, FAA has mandated more stringent security measures for international flights. Currently, for all international flights, FAA requires U.S. carriers to implement the International Civil Aviation Organization standards at a minimum, including the inspection of carry-on bags and passenger-bag matching. FAA also requires additional, more stringent measures—including interviewing passengers who meet certain criteria, screening every checked bag, and screening supplementary carry-on baggage—at all airports in Europe and the Middle East and many airports elsewhere. In the aftermath of the 1988 bombing of Pan Am 103, a Presidential Commission on Aviation Security and Terrorism was established to examine the nation’s aviation security system. This Commission reported that the system was seriously flawed and failed to provide adequate protection for the traveling public.
In spite of the Commission’s finding and the Congress’s enactment of the Aviation Security Improvement Act of 1990, our work illustrates that many vulnerabilities persist. Providing effective security is a complex problem because of the size of the U.S. aviation system, differences among airlines and airports, and the unpredictable nature of terrorism. In our January and May 1994 reports on aviation security, we highlighted a number of vulnerabilities in the overall security framework, such as the screening of checked baggage, mail, and cargo. We also raised concerns about unauthorized individuals gaining access to critical parts of an airport and the potential use of sophisticated weapons, such as surface-to-air missiles, that could be deployed against commercial aircraft. More recent security concerns include smuggling bombs aboard aircraft in carry-on bags or on passengers themselves. Specific information on the vulnerabilities of the nation’s aviation security system is classified and cannot be detailed here, but we can provide some information. We have a classified report in process that discusses the system’s vulnerabilities in greater detail. FAA believes the greatest threat to aviation is explosives in checked baggage. For those bags that are screened, we reported in March 1996 that conventional X-ray screening systems (comprising the machine and operator who reads the X-ray screen) have performance limitations and offer little protection against a moderately sophisticated explosive device. There are also vulnerabilities in screening passengers because the walk-through devices that currently screen for metal objects are unable to detect explosives carried by passengers. Aviation security rests on a careful mix of intelligence information, procedures, technology, and security personnel. New explosives detection technology will play an important part in improving security, but it is not a panacea.
In response to the Aviation Security Improvement Act of 1990, FAA accelerated its efforts to develop explosives detection technology, and devices are now commercially available to address some vulnerabilities. Since October 1, 1990, FAA has invested about $150 million in developing technologies specifically designed to detect concealed explosives. FAA relies primarily on contracts and grants with private companies and research institutions to develop these technologies. The act specifically directed FAA to develop and deploy explosives detection systems by November 1993. However, this goal has not been met. In September 1993, FAA published a general certification standard that explosives detection systems must meet before they are deployed. The standard sets certain minimum performance criteria, such as what kinds of explosives must be detected and how many bags per hour the device processes. However, the specifics of the standard are classified. To minimize human error, the standard also requires that the devices automatically sound an alarm when explosives are suspected; this feature is in contrast to currently used conventional X-ray devices, where the operator has to look at the X-ray screen for each bag. In 1994, we reported that FAA had made little progress in meeting the law’s requirement because of technical problems, such as slow baggage processing. Since then, one system has passed FAA’s certification standard and is being operationally tested at two U.S. airports in Atlanta and San Francisco. Explosives detection devices can substantially improve airlines’ ability to detect concealed explosives before they are brought aboard aircraft. While most of these technologies are still in development, a number of devices are now commercially available. For example, some devices are in use in foreign countries, such as the United Kingdom, Belgium, and Israel. None of the commercially available devices, however, is without shortcomings. 
On the basis of our analysis, we have three overall observations about detection technologies: First, these devices vary in their ability to detect the types, quantities, and shapes of explosives. For example, one device excels in its ability to detect certain explosive substances but not others. Other devices can detect explosives but not in certain shapes. Second, explosives detection devices typically produce a number of false alarms that must be resolved either by human intervention or other technical means. These false alarms occur because devices use various technologies to identify characteristics, such as shapes, densities, and properties, that could potentially indicate an explosive. Given the huge numbers of passengers, bags, and cargo processed by the average major U.S. airport, even relatively modest false alarm rates translate into several hundred, or even thousands, of items per day needing additional scrutiny. Third, and most important, these devices ultimately depend upon human beings to resolve alarms. This activity can range from closer inspection of a computer image and a judgment call to a hand search of the item in question. The ultimate detection of explosives depends on security personnel taking extra steps—or arriving at the correct judgment—to determine whether or not an explosive is present. Because many of the devices’ alarms signify only the potential for explosives being present, the true detection of explosives requires human intervention. The higher the false alarm rate, the more a system needs to rely on human judgment. As we noted in our January and May 1994 reports, this reliance could be a weak link in the explosives detection process. This fact has implications for the selection and training of operators for new equipment. A number of explosives detection devices are currently available or under development to determine whether explosives are present in checked and carry-on baggage or on passengers, but they are costly.
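The scale problem that false alarms pose is simple arithmetic; the sketch below multiplies an assumed daily bag volume (an illustrative figure, not data for any actual airport) by a range of false alarm rates.

```python
# How a modest false alarm rate scales with daily screening volume.
# The bag count is an assumed figure for illustration only.

def daily_false_alarms(items_per_day, false_alarm_rate):
    """Items flagged per day that security personnel must resolve."""
    return items_per_day * false_alarm_rate

bags_per_day = 50_000  # assumed checked-bag volume at a major hub
for rate in (0.01, 0.05, 0.10):
    alarms = daily_false_alarms(bags_per_day, rate)
    print(f"{rate:.0%} false alarms -> about {alarms:,.0f} items/day to resolve")
```

Even a 1 percent rate on an assumed 50,000 bags a day yields hundreds of items requiring human resolution, which is the weak link the reports cited above identified.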
FAA is still developing systems to screen cargo and mail at airports. Four explosives detection devices with automatic alarms are commercially available for checked bags, but only one has met FAA’s certification standard (the CTX 5000). FAA’s preliminary estimates are that the one-time acquisition and installation costs of the certified system for the 75 busiest airports in the United States could range from $400 million to $2.2 billion, depending on the number of machines installed. A computerized tomography (CT) device, which is based on advances made in the medical field, offers the best overall detection ability but is relatively slow in processing bags and has the highest price, costing approximately $1 million each. This device was certified by FAA in December 1994. Two advanced X-ray devices have lower detection capability but are faster and cheaper, costing approximately $350,000 to $400,000 each. The last device, which uses electromagnetic radiation, offers chemical-specific detection ability but only for some of the explosives specified in FAA’s standard. The current price is about $340,000 each. All of these devices require additional steps by security personnel when there are indications that an explosive is present. FAA is funding the development of next-generation CT devices from two different manufacturers. These devices are being designed to meet FAA’s standard for detecting explosives and processing speeds; they could sell for about $500,000 each. Advanced X-ray devices with improved capabilities are also in development. Explosives detection devices are commercially available for carry-on bags, electronics, and other items but not yet for screening bottles or containers that could hold liquid explosives. Devices for liquids, however, may be commercially available within 2 years. Carry-on bags and electronics. 
At least five manufacturers sell devices that can detect the residue or vapor from explosives on the exterior of carry-on bags and on electronic items, such as computers or radios. These devices, also known as “sniffers,” are commonly referred to as “trace” detectors and range in price from about $45,000 to $170,000 each. They have very specific detection capability as well as low false alarm rates. The main drawbacks are (1) the possibility of insufficient residue on the exterior of the item concealing the bomb and (2) nuisance alarms, where the device accurately detects explosive material—for example, a heart patient’s nitroglycerin medication—but the source is not a bomb. An electromagnetic device is also available that offers a high probability of chemical-specific detection, but only for some explosives. The price is about $65,000. Detecting liquid explosives. FAA is developing two different electromagnetic systems for screening bottles and other containers, likely to sell for $25,000 and $125,000 per device. A development issue is processing speed. These devices may be available within 2 years. Although a number of commercially available trace devices could be used on passengers if deemed necessary, passengers might find their physical intrusiveness unacceptable. In June 1996, the National Research Council, for example, reported that there may be a number of health, legal, operational, privacy, and convenience concerns about passenger screening devices. Accordingly, FAA and the Department of Defense (DOD) are developing devices that passengers may find more acceptable. FAA estimates that it would cost $1.9 billion to provide about 3,000 of these devices to screen passengers. A number of trace devices in development will detect residue or vapor from explosives on passengers’ hands. Two devices screen either documents or tokens that have been handled by passengers. 
These devices should be available in 1997 or 1998 and sell for approximately $65,000 to $85,000 each. Five devices under development use a walk-through screening checkpoint similar to the current metal detectors. Three will use trace technology to detect particles and vapor from explosives on passengers’ clothing or in the air surrounding their bodies. Ranging in expected selling prices from approximately $170,000 to $300,000, one of these devices will be tested at an airport as early as this month, and another device may undergo airport testing next year. Two other devices, based on electromagnetic technology, are in development. Rather than detecting particles or vapor, these devices will provide images of items concealed under passengers’ clothing. Prices are expected to be approximately $100,000 to $200,000. Cargo and mail continue to represent vulnerabilities in the system. Screening cargo and mail at airports is difficult because individual packages or pieces of mail are usually batched into larger shipments that are more difficult to screen. Although not yet commercially available, two different systems for detecting explosives in large containers are being developed by FAA and DOD. Each system draws vapor and particle samples and uses trace technology to analyze them. One system is scheduled for testing in 1997. In addition, FAA is considering for further development three nuclear-based technologies, originally planned for checked-bag screening, for use on cargo and mail. These technologies use large, heavy apparatus to generate gamma rays or neutrons to penetrate larger items. However, they require shielding for safety reasons. These technologies are not as far along in the development process as many other devices. They are still in the laboratory development stage rather than the prototype development stage. If fully developed, these devices could cost as much as $2 million to $5 million each. 
To reduce the effects of an in-flight explosion, FAA is conducting research on, among other things, blast-resistant containers. FAA’s tests have demonstrated that it is feasible to contain the effects—blast and fragments—of an internal explosion. However, because of their size, blast-resistant containers can be used only on wide-body aircraft that typically fly international routes. FAA is working with a joint industry-government consortium to address concerns about the cost, weight, and durability of the new containers and is planning to blast test several prototype containers later this year. Also this year, FAA will place about 20 of these containers into airline operations to see how well they function in actual use. In addition to technology-based security, FAA has several procedures, such as random hand searches, that it uses, and can expand upon, to augment domestic aviation security or to use in combination with technology to reduce the workload required by detection devices. On July 25, the President announced additional measures for international and domestic flights that include, among other things, stricter controls over checked baggage and cargo as well as additional inspections of aircraft. Two procedures that are routinely used on many international flights and could be implemented in the short term for domestic flights are passenger profiling and passenger-bag matching. FAA officials have said that profiling can reduce the number of passengers and bags that require additional security measures by as much as 80 percent. Profiling and bag matching are unable to address certain types of threats. However, in the absence of sufficient or effective technology, these procedures are a valuable part of the overall security framework. These methods may also be expensive. FAA has estimated that incorporating bag matching in everyday security measures could cost up to $2 billion in startup costs and lost revenue.
The direct costs to airlines include, among other things, equipment, staffing, and training. The airlines’ revenues and operations could be affected differently because the airlines currently have different capabilities to implement bag matching, different route structures, and different periods of time allowed for connecting flights. Aviation security has become an issue of national importance, but no agreement currently exists among the Congress, the administration—including FAA and the intelligence community, among others—and the aviation industry on the steps necessary to meet the threat and improve security in the short and long terms or who will pay for new security initiatives. While FAA has increased security at domestic airports on a temporary basis, FAA and DOT officials believe that more permanent changes are needed. The cost of these new security initiatives will be significant and may require changes in how airlines and airports operate and will likely have an impact on the traveling public. The law makes airlines responsible for screening passengers and property. In November 1995, senior FAA officials stated that they planned to recommend a high-level national policy review of civil aviation to develop a consensus in government and industry on the nature and extent of the threat, appropriate types of responses, and who would pay for those responses. FAA officials told us that standard cost-benefit analyses would likely reject many initiatives and that a consensus was needed among the Congress, industry, and the executive branch before any regulatory action is taken. There has been considerable debate about how to fund the deployment and operational costs for new security initiatives. Several options have been discussed: (1) government funding, if viewed as a national security issue, (2) industry financing as a cost of doing business, and (3) a fee assessed on air travelers. 
In January 1996, FAA briefed the National Security Council (NSC) on the threat to civil aviation and the need for a high-level national policy review on ways of increasing aviation security. FAA recommended the establishment of a presidential commission as a means of obtaining the essential elements of consensus and a legislative mandate. At that briefing, FAA provided preliminary estimates on the cost of various options, including the deployment of new explosives detection technology for passengers and baggage and other new security procedures. Depending on the option selected, FAA estimated that costs would range from $1 billion to more than $6 billion over a 10-year period. While no agreement was reached on how to finance these improvements, FAA estimated that it would cost the traveling public between $0.20 and $1.30 per one-way ticket. As a result of this meeting and two others, FAA and NSC agreed to submit a proposal to FAA’s Aviation Security Advisory Committee to establish a working group to review the threat against aviation and recommend options for improving security. In addition to FAA’s effort, on July 15, 1996, the President established a Commission on Critical Infrastructure Protection, whose mission includes assessing the threat and vulnerabilities and making recommendations on how to protect telecommunications, electrical power, banking and finance, water supply, gas and oil storage, emergency services, and transportation. Senior DOT officials told us that they intend to provide several staff to this effort but that it is uncertain how much attention will be placed on transportation and, specifically, aviation security. However, recent events will likely influence the focus of this effort and place greater emphasis on aviation security. On July 17, 1996, the same day that TWA Flight 800 exploded, FAA proposed a joint government-industry working group to its security advisory committee. 
The committee agreed to establish a working group that will include representatives from FAA, the aviation community, the NSC, the CIA, the FBI, the Departments of Defense and State, and the Office of Management and Budget. This group will (1) review the threat to aviation, (2) examine vulnerabilities, (3) develop options for improving security, (4) identify and analyze funding options, and (5) identify the legislative, executive, and regulatory actions needed. The working group established a goal of submitting a final report to the FAA Administrator by October 16, 1996. Any national policy issues would then be referred to the President by the FAA Administrator through the Secretary of Transportation. Recognizing the importance of aviation security as a national policy issue, the President established a commission on July 25, 1996, headed by the Vice-President, to review aviation safety and airport security. This commission is to report back to the President within 45 days. The international aviation community may need to be involved in developing new procedures to improve security. The administration is working with the Group of Seven industrial nations on additional ways to cooperate on countering terrorism. In summary, Mr. Chairman, we face an urgent national problem that needs to be addressed at the highest levels of government now. The threat of terrorism has been an international issue for some time, with events such as the bombing in Saudi Arabia of U.S. barracks. But other incidents such as the bombings of the World Trade Center in New York, the federal building in Oklahoma City, possibly at the Olympics in Atlanta, and perhaps of TWA 800—if in fact this is determined to be an act of terrorism—have made terrorism a domestic as well as an international issue. Public concern about aviation safety, in particular, has already been heightened as a result of the ValuJet crash, and the recent TWA 800 crash has increased that concern. 
If further incidents occur, public fear and anxiety will escalate and the economic well-being of the nation will suffer because of reductions in travel and the shipment of goods. Three separate initiatives are under way that may address the concerns about aviation security. In our view, a unified and concentrated effort is needed to address this national issue. The commission that the Vice-President heads could be the focal point to build a consensus on the actions that need to be taken to address a number of long-standing vulnerabilities. As we noted, procedures and technology can be used to improve aviation security but will require substantial resources. We believe several steps need to be taken immediately: (1) conduct a comprehensive review of the safety and security of all major domestic and international airports and airlines to identify the strengths and weaknesses of their procedures to protect the traveling public, (2) identify vulnerabilities in the system, (3) establish priorities to address the system’s identified vulnerabilities, (4) develop a short-term approach with immediate actions to correct significant security weaknesses, and (5) develop a long-term and comprehensive national strategy that combines new technology, procedures, and better training for security personnel. Because terrorism is an international problem, close cooperation with foreign governments is also required. In addition, the time has come to inform and involve the American public in this effort. If there was ever a time when the public would accept new security measures, it is now. This concludes my prepared statement. I would be glad to respond to any questions.
Pursuant to a congressional request, GAO discussed aviation security, focusing on the measures needed to reduce potential security threats. GAO noted that: (1) the threat of terrorism is increasing in the United States; (2) aviation security responsibilities are shared by the Federal Aviation Administration (FAA), airlines, and airports; (3) FAA and the aviation community rely on information from various intelligence and law enforcement agencies, depend on contingency plans to meet a variety of threats, and use screening equipment to detect bombs and explosives; (4) basic security measures for domestic flights include the use of walk-through metal detectors and x-ray screening equipment; (5) FAA is considering passenger profiling and bag matching to ensure that passengers who check baggage actually board the flight; (6) FAA has mandated additional security measures for international flights; (7) conventional x-ray screening is limited and offers little protection against sophisticated explosive devices; (8) new explosive detectors are being developed and could be available within the next 2 years; (9) adopting these new technologies will cost at least $6 billion over the next 10 years; (10) recent events underscore the need for improved airline security; and (11) Congress and the aviation and intelligence communities need to agree on a strategy for combating terrorism and funding new security measures.
CSRS and FERS are the two largest retirement programs for federal civilian employees. At the beginning of fiscal year 1995, these programs covered about 2.8 million federal employees, or 90 percent of the current civilian workforce. OPM administers CSRS and FERS. CSRS and FERS pension benefits are financed partly by federal agency and employee contributions and partly by other government payments to the Civil Service Retirement and Disability Fund. Although CSRS and FERS both provide pensions, the programs are designed differently. CSRS was established in 1920 and predates the Social Security system by 15 years. When the Social Security system was established, Congress decided that employees in CSRS would not be covered by Social Security through their federal employment. CSRS is a stand-alone pension program that provides an annuity determined by a formula as well as disability and survivor benefits. The program was closed to new entrants after December 31, 1983, and, according to OPM actuaries, is estimated to end in about 2070, when all covered employees and survivor annuitants are expected to have died. FERS was implemented in 1987 and generally covers those employees who first entered federal service after 1983 as well as those who transferred from CSRS to FERS. The primary impetus for the new program was the Social Security Amendments of 1983, which required that all federal employees hired after December 1983 be covered by Social Security. FERS is a three-tiered retirement program that includes Social Security and a Thrift Savings Plan, in addition to a basic pension. Like CSRS, FERS provides disability and survivor benefits. A distinctive feature of CSRS and FERS pensions is the annual COLAs they are to provide. COLAs are post-retirement increases in pension amounts that generally are given on either an ad hoc or automatic basis to offset increases in living costs due to inflation.
Congress enacted the first automatic COLA for CSRS annuitants in 1962 (effective January 1963). At that time, the automatic adjustment was viewed as a way of controlling pension costs, because prior ad hoc adjustments had been criticized as being unrelated to price increases and subject to political manipulation. Although COLAs generally have been provided on an automatic basis since 1962, COLA policies have been modified numerous times over the years. As shown in table 1, the changes made during the 1960s and 1970s were intended to enhance pension purchasing power with respect to inflation as measured by the consumer price index (CPI), but some of the changes made during the 1980s had the effect of reducing purchasing power. Table 1 is based on information in the Congressional Research Service (CRS) Report for Congress, 94-834 EPW, updated March 13, 1996. One of these changes provides especially relevant background for considering the relationship between current pensions and final salaries and requires a more complete discussion. As noted in table 1, P.L. 97-253 (the Omnibus Budget Reconciliation Act of 1982) restricted COLAs in relation to final salaries in certain cases. Under this restriction, a pension may not be increased by a COLA to an amount that exceeds the greater of the current maximum pay for a GS-15 federal employee or the final pay of the employee (or high-3 average pay, if greater), increased by the overall annual average percentage adjustments (compounded) in rates of pay of the general schedule for the period beginning on the retiree’s annuity starting date and ending on the effective date of the adjustment. In effect, the statute requires that a retiree’s pension is to be capped at an amount not to exceed the maximum pay of a general schedule employee (i.e., GS-15) or an amount that represents the value of the retiree’s final or average pay, adjusted for the general schedule pay adjustments that had been provided since the annuitant retired. 
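The P.L. 97-253 cap described above can be expressed as a small calculation. The sketch below is illustrative only; the function names and sample figures are ours, not the statute's or OPM's, and it assumes the general schedule pay adjustments are supplied as simple annual percentages.

```python
def cola_cap(gs15_max_pay, final_or_high3_pay, annual_gs_adjustments):
    """Cap under P.L. 97-253: the greater of current GS-15 maximum pay or
    the retiree's final (or high-3, if greater) pay, grown by the compounded
    average general schedule pay adjustments since retirement."""
    adjusted_pay = final_or_high3_pay
    for pct in annual_gs_adjustments:  # average GS pay raises, in percent
        adjusted_pay *= 1 + pct / 100
    return max(gs15_max_pay, adjusted_pay)

def apply_capped_cola(pension, cola_pct, cap):
    """Apply a COLA but do not let the increase carry the pension past the
    cap; a pension already above the cap is left unchanged, not reduced."""
    increased = min(pension * (1 + cola_pct / 100), cap)
    return max(pension, increased)
```

For example, with a hypothetical $100,000 GS-15 maximum and an $80,000 final salary grown by two 2-percent GS raises, the cap is $100,000, so a 3-percent COLA on a $98,000 pension would be limited to $100,000 rather than reaching $100,940.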
According to OPM’s policy handbook, because the cap applies to COLA increases to pensions, in no instance would a pension already exceeding the cap be reduced. As noted earlier, under current policy—enacted in 1984—COLAs for CSRS and FERS retirees are based on increases in living costs as measured by the CPI-W between the third quarter (July through September) of the current calendar year and the third quarter of the previous year. Although the COLA formula and schedule are the same for FERS and CSRS, FERS COLAs are limited when inflation exceeds 2 percent. If inflation is less than 2 percent, FERS COLAs are to be fully adjusted for inflation; if inflation is between 2.0 and 3.0 percent, the FERS COLA is 2.0 percent; and if inflation is 3.0 percent or more, the COLA is the CPI increase minus 1 percentage point. Also, CSRS benefits are to be fully indexed from the time of retirement, and FERS pensions are to be indexed beginning at age 62 for regular retirees. To respond to your request, we used a computerized personnel database of CSRS and FERS retirees and case file information maintained by OPM. At the time of our analysis, the latest available data were for living CSRS and FERS annuitants who were retired as of October 1, 1995. The database and case files provided much of the information that we needed for our analysis, including the retirees’ initial and 1995 pensions, retirement dates, high-3 average salaries, service histories, survivor benefits, and other retirement-related information. However, the database did not have information on retirees’ final salaries, which we needed in order to compare their final salaries to their 1995 annuities. The database did have information on “high-3” average salaries, which are used in calculating initial pensions. Thus, we compared the retirees’ high-3 average salaries to their 1995 pensions to identify a set of retirees whose pensions were most likely to have exceeded their final salaries. 
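The CSRS and FERS COLA rules described above reduce to a simple piecewise formula. This sketch assumes the third-quarter CPI-W increase has already been computed as a percentage; the function names are ours.

```python
def csrs_cola(cpi_increase):
    """CSRS COLAs fully match the measured CPI-W increase (in percent)."""
    return cpi_increase

def fers_cola(cpi_increase):
    """FERS COLAs are limited when inflation exceeds 2 percent."""
    if cpi_increase < 2.0:
        return cpi_increase       # fully adjusted for inflation
    if cpi_increase < 3.0:
        return 2.0                # held to 2.0 percent
    return cpi_increase - 1.0     # CPI increase minus 1 percentage point
```

So, for instance, 1.5 percent inflation yields a 1.5 percent FERS COLA, 2.5 percent inflation yields 2.0 percent, and 4.0 percent inflation yields 3.0 percent, while the CSRS COLA tracks inflation in every case.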
From this group, we selected a random sample of 400 from among the 524,435 CSRS retired general employees whose annuities exceeded their high-3 average salaries and all 105 FERS retired general employees for whom the database reported annuities exceeding their high-3 average salaries. We reviewed the selected retirees’ case files to verify that those we had selected had 1995 pensions that, in fact, exceeded their unadjusted final salaries. From our review of the sample of 400 CSRS annuitants, we identified 348 whose 1995 pensions exceeded their final salaries. We identified and removed from our sample 50 with pensions below their final salaries, 1 whose case file did not have the data we needed for our analysis, and another whose case file was not available for our review. From our case file review of the 105 FERS annuitants, we identified and removed 104 that did not match our criterion (i.e., did not have a 1995 annuity that exceeded the retiree’s final salary). The remaining case had a pension that exceeded the final salary. However, the pension combined both FERS and CSRS benefits. This retiree had transferred from CSRS to FERS and thus was receiving benefits that were neither wholly FERS nor wholly CSRS. Consequently, we included this individual in our estimates of the number of retirees who had annuities that exceed their final salaries, but excluded this individual from our regression analysis. We weighted the CSRS sample results to estimate the number of retired general employees in the population whose pensions had come to exceed both their final salaries and high-3 average salaries. In making these estimates, we assumed that the small number of FERS and CSRS cases for which data were not available were similar to the cases that we had reviewed. The sample results thus estimate the total number of general employees whose pensions exceed both their final salaries and their high-3 average salaries. 
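A rough back-of-the-envelope version of the weighting step, using the sample counts reported above, looks like the following; the report's actual weighting may have been more refined, so treat this as a plausibility check rather than a reproduction of GAO's estimate.

```python
population = 524_435    # CSRS retirees whose annuities exceeded high-3 salaries
sample_reviewed = 398   # 400 sampled, minus 2 cases with unusable files
met_criterion = 348     # sampled 1995 pensions that exceeded final salaries

# Weight the sample proportion up to the population
csrs_estimate = population * met_criterion / sample_reviewed
# On the order of 458,000-459,000 CSRS retirees; the single qualifying
# FERS case is added on top of this figure
```

This simple proportion is consistent with the "about 459,000" estimate reported later in the letter.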
As the final salary is generally included in the three highest salaries that are averaged, these employees are described as having pensions that exceed their “final salaries” in the remainder of the report. We also adjusted the retirees’ final salaries for inflation, using the 1995 CPI-W, and made a second estimate of the number of retirees whose 1995 pensions exceeded their final salaries, expressed in constant dollar terms. To understand why retiree pensions could come to exceed unadjusted final salaries as much as they did, we used regression analysis to model the relationship between key retirement policy variables and the extent to which the pensions of the sample retirees exceeded their unadjusted final salaries. Regression is a statistical technique that can be used to measure the relationship between a dependent variable and a set of independent (i.e., explanatory) variables and isolate their independent effects. This analysis was based on the subsample of 348 CSRS employees whose 1995 pensions exceeded their final salaries. This subsample did not include the single FERS annuitant whose pension exceeded the final salary, the two sampled cases with missing information, nor the 50 sampled cases whose 1995 pensions did not exceed their final salaries. We used the percentage by which the retirees’ pensions exceeded final salaries as the dependent variable in the model, because our sample did not include retirees whose pensions were below their high-3 average salaries. We selected retirement variables to use as independent variables because they were (1) required to be used for computing pension benefits (e.g., years of service); or (2) known to affect pension amounts for some or all retirees (e.g., COLAs and the selection of spousal survivor benefits). 
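As an illustration of the regression technique just described, ordinary least squares fits a line relating an explanatory variable to the dependent variable. This sketch uses one predictor and entirely made-up data points; the GAO model itself used multiple independent variables.

```python
def simple_ols(x, y):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x, slope

# Hypothetical data: total COLA value received (percent) versus the percent
# by which the pension exceeds the unadjusted final salary
intercept, slope = simple_ols([100.0, 150.0, 200.0], [20.0, 45.0, 70.0])
```

Here the fitted slope is 0.5, meaning each additional percentage point of COLA value is associated with a half-point larger excess over final salary in these invented data; it is this kind of coefficient, with other factors held equal, that the multivariable model estimates.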
Although variables representing changes in a retiree’s personal circumstances (e.g., marriage, death of a spouse, or divorce) that would have changed his or her pension over the period of retirement were not included in the final regression model, we reviewed the retirees’ case files to determine what effects these changes may have had on individual sample retirees. We found that these changes in personal circumstances could cause an individual retiree’s pension to fluctuate (e.g., increase and/or decrease) during his or her retirement depending on whether survivor’s benefits were being deducted. To compare the effects of current and historical COLA policy on retirees’ pensions, we reviewed federal retirement-related documents and identified the historical changes in COLA policy since the inception of automatic COLAs in 1962. Using this information, we calculated the pensions that the sample of 398 retirees would have received each year from 1962 through 1995 had current COLA policy been in effect without interruption. We compared these results to the pensions that they would have received under actual COLA policy, absent other changes that might have affected their pensions (e.g., adjustments due to death of a spouse when survivor benefits had been chosen). We then compared the resulting numbers to assess the probability that the change, if any, in the number of retirees whose 1995 pensions had exceeded their unadjusted final salaries was statistically significant, that is, unlikely to be due to sampling error. To illustrate the effects that the different COLA policies could have had on pensions during the sample annuitants’ retirements, we simulated the effects of current and actual policy on pension amounts for three different retirement periods. 
To simplify the analysis, our simulation of the impacts of current COLA policy implemented without interruption since 1984 was not adjusted to reflect the actual effective dates of COLAs, the actual pay dates, “lookback” payments or adjustments, or prorated to reflect the month an employee retired. We selected 1961 to 1995, 1968 to 1995, and 1981 to 1995 to show the cumulative effects that the COLAs of the 1960s and 1970s, which overcompensated for inflation, and the suspensions of COLAs in the 1980s could have had for different periods of retirement. We used the average initial pension for the sample annuitants who had retired in the first year of each of the three periods for our starting pension amounts (e.g., the average initial pension of those annuitants who retired in 1961). Our analysis had several limitations. As agreed with your office, we did not independently verify the accuracy of OPM’s database. However, we did verify the accuracy of the data for the cases used in our analysis. Also, the number of retirees whose pensions had come to exceed their final unadjusted salaries could be somewhat higher than we estimated for two reasons. As noted, we used high-3 average salary to identify a population that we believed would be most likely to have pensions that had come to exceed final salaries, because OPM’s computerized database did not include final salary information. Thus, our estimates do not include those retirees whose pensions were lower than their high-3 salaries but whose pensions were higher than their final salaries. Also, the annuity amounts contained in the case files already had survivor benefit reductions, if any, taken. Thus, retirees who selected survivor benefits would have had higher initial pensions than the pensions reported in OPM’s files. However, we could not take this reduction into account, because the automated data file did not identify those retirees who had selected this benefit. 
On the basis of our examination of the data and our knowledge of the key retirement policy variables used in our analysis, we believe that any such underestimate would have been small. We requested comments on a draft of this report from the Director of OPM, and those comments are discussed at the end of this letter. We did our review from December 1995 to July 1997 in Washington, D.C., in accordance with generally accepted government auditing standards. As of 1995, 1.7 million retirees who were covered by the CSRS and/or FERS pension plans were on the federal retirement rolls. Our estimate of the number of these retirees whose 1995 pensions exceeded their final salaries differed, depending on whether we adjusted the retirees’ final salaries for inflation. When we did not adjust the salaries for inflation, about 459,000, or 27 percent, of the total general employee retirees received pensions that in nominal dollars exceeded their final salaries. However, when we adjusted the final salaries for inflation, no retiree received a pension that exceeded his or her final salary. As a general rule, using constant—rather than nominal—dollars is more meaningful for examining dollar values across time, because constant dollars correct for the effects of inflation or deflation. Constant dollars are especially appropriate for comparing current pensions and final salaries, because the number of years that the annuitants in our sample had been retired averaged 22 years and ranged from 8 to 42 years. Table 2 compares the 1995 pensions and the nominal and inflation-adjusted final salaries for three illustrative retirees in our sample. The illustrative pensions shown in the table are the average amounts received by those sample annuitants who had retired in the years 1961, 1968, or 1981. 
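The constant-dollar comparison works by scaling a past salary up by the ratio of price indexes. The CPI-W values below are invented purely for illustration; the report used actual CPI-W data through 1995.

```python
def to_1995_dollars(amount, cpi_w_then, cpi_w_1995):
    """Express a past-year dollar amount in constant 1995 dollars."""
    return amount * cpi_w_1995 / cpi_w_then

# Illustrative: a $12,000 final salary earned when the CPI-W stood at 30,
# restated in a 1995 when the (hypothetical) CPI-W stands at 150
real_salary_1995 = to_1995_dollars(12_000, 30.0, 150.0)  # 60,000.0
```

On these invented numbers, a 1995 pension of, say, $20,000 exceeds the $12,000 nominal final salary but falls well short of the $60,000 constant-dollar figure, which is the pattern the report describes for every retiree in the population.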
Three factors help to explain why some retirees’ pensions came to exceed their final salaries when their salaries were not adjusted for the effects of inflation—the number and size of COLAs that retirees received, the number of years that they had been retired, and their number of years of federal service. Two factors—the number and size of the COLAs that the retirees had received and the number of years that they had been retired—contributed because they helped to cause the retirees’ pension amounts to increase over time. The third factor—years of federal service—contributed because years of service was used in computing the retirees’ initial pensions. Our regression model showed that the value of the COLAs that the sample retirees received, as determined by the number and size of COLAs and the length of employees’ retirement, together with their years of federal service, explained about 82 percent of the variation in the percentage by which the retirees’ pensions exceeded their unadjusted final salaries. The important role that COLAs and length of service played is a predictable consequence of pension policies that are designed to reward employee service and maintain the purchasing power of pensions. During retirement, the retirees’ pensions increased because the COLAs that the retirees were to receive increased in number. The amount of the increase each year fluctuated according to changes in the CPI-W. In contrast, unadjusted final salaries remained unchanged. Thus, the longer the annuitants had been retired, the more COLAs they received and the more likely it was that their pensions exceeded their unadjusted final salaries. In fact, the average annuitant in our sample had been retired about 22 years and had received 26 COLAs. The 4 percent who had retired before 1963 had received 36 COLAs. 
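The mechanism is straightforward compounding: each COLA multiplies the prior year's pension while the nominal final salary never moves. A sketch with invented numbers:

```python
def pension_after_colas(initial_pension, colas):
    """Compound a starting pension through a sequence of annual COLAs (percent)."""
    pension = initial_pension
    for cola in colas:
        pension *= 1 + cola / 100
    return pension

# A retiree starting at 56.25 percent of a $20,000 final salary needs the
# compounded COLAs to grow the pension by roughly 78 percent before it
# passes the unadjusted salary; 26 COLAs averaging 4 percent (hypothetical
# figures) are more than enough
final_salary = 20_000.0
pension_1995 = pension_after_colas(0.5625 * final_salary, [4.0] * 26)
```

The unadjusted salary stays at $20,000 throughout, so the longer the retirement and the larger the COLAs, the sooner the compounded pension overtakes it.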
Generally, the likelihood that a retiree’s pension exceeded his or her unadjusted final salary increased when the annuitant had been retired during periods of high inflation, because larger COLAs were given during these periods. Our model showed that, on average, a 1 percentage point increase in the total value of the COLAs that a retiree had received would result in a 0.5 percentage point increase in the amount by which the retiree’s pension exceeded his or her final salary, other factors being equal. In particular, more than 90 percent of the retirees in our sample had been retired during all or part of the 1969 through 1980 period when the most frequent and largest COLAs were given. Over this 12-year period, pensions increased by 166 percent in nominal terms. Appendix I provides a summary of COLA history since automatic COLAs were enacted in 1962. The number of years of federal service also contributed to the explanation of why some retirees’ pensions exceeded their unadjusted final salaries, because years of service is included in determining the percentage of high-3 average salary that a retiree ultimately will receive as his or her initial pension. For example, under CSRS, an employee who had 41 years, 11 months of service at retirement would have been entitled to receive 80 percent of his or her high-3 average salary—the maximum percentage allowed—while an employee who had worked 30 years would have been entitled to receive 56.25 percent. As a result, the longer a retiree had worked for the federal government, the closer the retiree’s initial pension would have been to his or her unadjusted final salary. Nineteen (5 percent) of the retirees in our sample had worked 40 years or more for the federal government, and another 288 (83 percent) had worked 20 to 39 years. The remaining 41 (12 percent) worked 5 to 19 years. 
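The service-based percentages cited above follow the statutory CSRS accrual schedule (1.5 percent of high-3 pay per year for the first 5 years, 1.75 percent for the next 5, and 2.0 percent for each year after 10, capped at 80 percent); the schedule itself is not spelled out in this report, and the sketch below is ours, not OPM's code.

```python
def csrs_percentage(years_of_service):
    """Initial CSRS annuity as a percentage of high-3 average salary."""
    first_5 = 1.5 * min(years_of_service, 5)
    next_5 = 1.75 * min(max(years_of_service - 5, 0), 5)
    remainder = 2.0 * max(years_of_service - 10, 0)
    return min(first_5 + next_5 + remainder, 80.0)  # statutory 80 percent cap
```

This reproduces both figures in the text: 30 years of service yields 56.25 percent, and 41 years, 11 months (41 + 11/12 years) reaches the 80 percent maximum.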
Our model showed that on average, a 1-year increase in a retiree’s federal service time would result in about a 3.7 percentage point increase in the percentage by which the retiree’s pension had exceeded his or her final salary, other factors being equal. A final factor—whether a retiree had chosen a survivor’s annuity benefit—helped to explain why some retirees’ pensions had come to exceed their unadjusted final salaries as much as they did. As noted in the background section of this report, an employee who chooses a survivor annuity benefit can have his or her basic annuity reduced by as much as 10 percent. As a consequence, if two retirees retired in the same year and had the same final salaries and years of service, but only one had chosen a survivor annuity benefit, the retiree who elected not to take the benefit would have had a pension that exceeded his or her unadjusted final salary sooner than the retiree who had chosen the survivor benefit. An employee who chose a survivor annuity benefit would have reduced the initial pension and thus increased the gap between the initial annuity and the final salary. Of the CSRS retirees in our sample, 48 percent were not having survivor benefits deducted from their pensions. Had current COLA policy—that is, the COLA policy enacted in 1984, which established the formula and schedule used today by OPM—been in effect without interruption since 1962, some sample retirees’ pensions would have been smaller than the pensions that they actually received, and other retirees’ pensions would have been larger. Our simulations suggest that other factors being equal, the majority of those who retired before 1970 would have received smaller pensions, while about 90 percent of those who retired after 1970 would have received larger ones. 
If current policy had been in effect for all retirees in the sample, the number of retirees whose pensions would have exceeded their unadjusted final salaries would have increased by about 3 percentage points. The following examples compare the pensions that retirees would have received under current versus actual COLA policy by simulating the effects that changes in COLA policy would have had on pension amounts, other factors being equal. The examples cover three different periods—1961 to 1995, 1968 to 1995, and 1981 to 1995—and show how the impacts would have varied, depending on the period of retirement. In considering the meaning of the figures, it is important to recognize that the trend lines refer to current versus historical CSRS COLA policy. FERS lines were not presented because, as stated earlier in this report, none of the FERS retirees received an annuity that was based solely on his or her FERS participation. Figure 1 shows the relative effects of current and actual policy for a CSRS participant who retired in 1961. As the figure shows, if the current policy had been in effect without interruption, the retiree’s pension would have been smaller over the period. Our analysis showed that by 1995 the retiree’s pension would have been 6.3 percent smaller than it was under the actual COLA policy. However, as the gap shown between the 1995 pension and the unadjusted final salary amount makes clear, such a reduction would not have been nearly enough to have caused the retiree’s pension to fall below his or her final unadjusted salary. Figure 2 shows similar results for an annuitant who retired in 1968. In this example, our analysis showed that the retiree’s pension would have been 3.5 percent smaller if current policy had been in effect without interruption. The reduction in this annuitant’s pension is less proportionally than the reduction in the pension of the annuitant who had been retired since 1961 (shown in fig. 
1), primarily because of the difference in the number of the COLAs that were received and, to a lesser extent, the shorter period of compounding. Again, the reduction would not have been large enough to cause the retiree’s 1995 pension to fall below his or her unadjusted final salary. The third example (fig. 3) shows the results for an annuitant who retired in 1981. The retiree’s pension would have been larger if current policy had been in effect without interruption. As the figure shows, under actual policy, the retiree did not receive a COLA in 1984 or 1986, which caused this retiree’s pension to fall somewhat short of the pension that he or she would have received had current policy been in effect. Because the effects of these suspensions continued to be reflected in the pension amounts that the retiree received in subsequent years, by 1995 the retiree’s pension would have been 1.4 percent larger under current, compared to historical, COLA policy. The increases in the pensions of some sample retirees, if current policy had been in effect the entire time, would have been enough to cause an increase of 3.0 percentage points in the number of retirees whose pensions exceeded their unadjusted final salaries. When we estimated what the sample retirees’ pensions would have been if current policy had been in effect without interruption, we found that about 29 percent of retirees would have had annuities that exceeded their unadjusted final salaries, compared to about 26 percent under the actual policy simulation. Although the difference was quite small, it was statistically significant. The two estimates differed by about 3 percentage points in part because the effects of COLAs on pension amounts are cumulative and compound. In particular, the suspensions of COLAs during the 1980s tended to offset the COLA policies of the 1960s and 1970s that overcompensated for inflation. 
Our analysis of the effects that COLA policies have had on retiree pensions shows that the policies have played an important role in maintaining the purchasing power of retiree pensions since automatic COLAs began. Although COLA policies of the 1960s and 1970s overcompensated for the effects of inflation as measured by the CPI, COLA policies of the 1980s sometimes undercompensated. And, although current COLA policy would have tracked the CPI more closely than some past COLA policies had it been applied over the period we reviewed, the numerous changes that have been made in COLA policies over the past 35 years did not cause any retiree’s pension to exceed his or her final salary when the salaries were adjusted for inflation. Our analysis also shows that the effects that COLA policies actually have on retiree pension amounts cannot be summarized easily. Generalization is difficult, in part because no one COLA policy has ever been implemented for a sustained period. For example, although the current underlying policy has been in effect since 1984, Congress has modified this policy several times for limited periods to help reduce the deficit. Also, the effects of many individual COLAs and COLA policy changes are cumulative and compound over time. As a consequence, COLA policy changes have affected individual retirees differently, depending on when they retired. In particular, the effects of the COLA policies of the 1960s and 1970s that overcompensated for inflation will continue to have an effect on retiree pensions for as long as those who received them are alive, just as not receiving scheduled COLAs in 1984 and the suspension of COLAs in 1986 will continue to be reflected in the pensions of anyone who retired before these years. We received oral comments on a draft of this report from OPM on July 16, 1997. 
OPM officials who provided comments included Federal Retirement Benefits Specialists from the Retirement Policy Division and a Program Analyst from the Retirement and Insurance Service. These officials generally concurred with the information and conclusions presented in our report. In particular, they agreed that using constant dollars, rather than nominal dollars, is a more meaningful way to compare retiree pensions to final salaries and that the statutory factors that are designed to maintain pension purchasing power and reward employees with longer service play a major role in determining whether pensions come to exceed nominal final salaries. These officials also provided a number of technical and clarifying comments, which we incorporated into this report where appropriate. We are sending copies of this report to the Ranking Minority Member of your Committee and the Chairmen and Ranking Minority Members of the Subcommittee on International Security, Proliferation, and Federal Services, Senate Committee on Governmental Affairs; and to the Subcommittee on Civil Service, House Committee on Government Reform and Oversight. Copies of this report are also being sent to the Director of OPM and other parties interested in federal retirement matters and will be made available to others upon request. Major contributors to this report are listed in appendix II. If you have any questions, please call me at (202) 512-9039. [Appendix I table: the COLA for each year from 1984 through 1995 was measured over the change in the CPI from the third quarter of the preceding year to the third quarter of the current year (e.g., 3rd qtr. 1983 to 3rd qtr. 1984).] * = Adjustments made whenever the CPI in a year exceeded the CPI in the base year by 3 percent or more. 
** = Adjustments made whenever the CPI in a month rose by at least 3 percent over the month of the last adjustment and remained at or above that level for 3 consecutive months. In addition to those named above, Jerry T. Sandau, Social Science Analyst, GGD, contributed through his development of the regression analysis results presented in this report.
Pursuant to a congressional request, GAO responded to a series of questions about federal pension costs and retirement policy, focusing on: (1) the number of federal retirees, if any, whose pensions have come to exceed the final salaries that they earned while working; (2) why these retirees' pensions came to exceed their final salaries; (3) the difference, if any, in these retirees' pension amounts if current cost-of-living-adjustment (COLA) policy, that is, the COLA policy enacted in 1984, which established the formula and schedule used today by the Office of Personnel Management (OPM), had been in effect without interruption since 1962; and (4) any difference in the number of retirees whose pensions would have exceeded their final salaries. GAO noted that: (1) an estimated 459,000 (or about 27 percent) of the 1.7 million retirees who were on the federal pension rolls as of October 1, 1995, were receiving pensions that had come to exceed their final salaries when these salaries were not adjusted for inflation; (2) however, when their salaries were adjusted for inflation (i.e., expressed in constant dollars), no retiree was receiving a pension that was larger than his or her final salary; (3) as a general rule, using constant dollars provides a more meaningful way to compare monetary values across time, because the use of constant dollars corrects for the effects of inflation or deflation; (4) although no retiree's pension exceeded his or her final salary in constant dollar terms, GAO's analysis confirmed that three factors played an important role in explaining why the retirees' pensions came to exceed their unadjusted final salaries: the number and size of COLAs that retirees received, the number of years that they had been retired, and the number of years of their federal service; (5) GAO's analysis of the effects that COLA policies have had on retiree pensions suggests that the policies have played an important role in maintaining the purchasing power of
retiree pensions since automatic COLAs began; (6) it also suggests that the effects COLA policies actually have had on retiree pension amounts cannot be summarized easily because of numerous changes that have been made in COLA policies over the past 35 years; (7) COLA policy changes have affected individual retirees differently, depending on when their retirements began; (8) if current COLA policy, that is, the policy that was enacted in 1984, had been in effect without interruption since automatic COLAs began in 1962, the pensions of some of the sample retirees would have been smaller than the pensions that they actually received, and the pensions of other retirees would have been larger; (9) GAO's comparison of the effects of current and historical COLA policy on pension amounts suggests that other factors being equal, a majority of those who retired before 1970 would have received smaller pensions had current COLA policy been continuously in effect during their retirement, and about 90 percent of those who retired after 1970 would have received larger pensions; and (10) the changes that would have occurred in the sample retirees' pension amounts under current policy were enough to cause about a 3 percentage point (3.0) increase in the number of retirees whose pensions would have come to exceed their unadjusted final salaries.
Federal agencies can choose from among several different contract types, including T&M contracts, to acquire products and services. This choice is the principal means that agencies have for allocating cost risk between the government and the contractor. The government’s basis for payments, the contractor’s obligations, and the party assuming more risk for cost overruns all change depending upon the type of contract used—fixed-price, T&M, or cost-reimbursement. T&M contracts constitute a high risk to the government. The contractor provides its best efforts to accomplish the objectives of the contract up to the maximum number of hours authorized under the contract. Each hour of work authorizes the contractor to charge the government an established labor rate, which includes profit. These contracts are considered high risk for the government because the contractor’s profit is tied to the number of hours worked. Thus, the government bears the risk of cost overruns. Therefore, the FAR provides that appropriate government monitoring of contractor performance is required to give reasonable assurance that efficient methods and effective cost controls are being used. Further, because of the risks involved, the FAR directs that T&M contracts may only be used when it is not possible at the time of award to estimate accurately the extent or duration of the work or to anticipate costs with any reasonable degree of confidence. For many years, federal regulations have required contracting officers to justify in writing that no other contract type (such as fixed-price) is suitable before using a T&M contract. Commercial services comprise services for support of commercial items and services of a type offered and sold competitively in substantial quantities in the commercial marketplace based on established catalog or market prices. During the 1990s, Congress enacted a number of laws to increase the government’s use of commercial practices to make government buying more efficient. 
The benefits of using commercial practices were seen as creating greater access to commercial markets (products and service types) with increased competition, better prices, and new market entrants and/or technologies. Commercial acquisition practices also present several advantages to contractors when doing business with the government, such as generally not being required to submit cost or pricing data. While the acquisition procedures in FAR Part 12 for purchasing commercial services allow for a streamlined process, prices are accepted based on competition and availability in the marketplace rather than the government’s review of a contractor’s cost and pricing data. Improperly classifying an acquisition as commercial can leave the government vulnerable to accepting prices that may not have been established by the marketplace. FASA authorized the use of fixed-price contracts for the acquisition of commercial items, but it did not explicitly authorize the use of T&M contracts for such acquisitions. SARA specifically authorized the use of T&M contracts for the acquisition of commercial services with certain safeguards to ensure proper use of these contracts. The implementing regulations included additional requirements as safeguards under FAR Part 12. Table 2 summarizes the FAR safeguards when using T&M contracts under FAR Part 12 acquisition procedures for commercial items; under FAR Part 16, acquisition procedures for noncommercial services; and under FAR Subpart 8.4, GSA schedule contracts. The FAR Part 12 revisions also added safeguards for agencies using T&M pricing on indefinite-delivery contracts for commercial services. Specifically, indefinite-delivery contracts for commercial services awarded using Part 12 procedures may allow for the use of fixed-price or T&M orders, and contracting officers are required to execute the Part 12 D&F for each order placed on a T&M basis. 
If the contract only allows for the issuance of orders on a T&M basis, the Part 12 D&F is required to be executed to support the basic contract and also explain why using an alternative fixed-price structure is not practicable. The D&F for this type of contract is required to be approved one level above the contracting officer. By contrast, the section of FAR Part 16 pertaining to T&M services does not explicitly address the D&F requirement for indefinite-delivery contracts. Concerns by DOD and Congress over the increased use of T&M contracts have sparked some actions to curb DOD’s use of T&M in general and for the acquisition of commercial services in particular. In June 2007, we reported that DOD’s use of T&M contracts had steadily increased and that contracting officials frequently failed to ensure that this contract type was used only when no other contract type was suitable. Little effort had been made to convert follow-on work to a less risky contract type when historical pricing data existed, despite guidance to do so. Based on our recommendations for improved oversight, DOD’s Defense Procurement and Acquisition Policy office, in March 2008, began requiring military departments and defense agencies to establish procedures for analyzing whether T&M contracts and orders under indefinite-delivery contracts are used when other contract types are suitable. Each department or agency was to provide an assessment of the appropriate use of T&M contracts for any contracting activity that obligated more than 10 percent of its total fiscal year 2007 obligations for services using T&M contracts or orders. The assessment was to include actions that will be taken to reduce the use of T&M contracts whenever possible. 
Further, the Acquisition Improvement and Accountability Act of 2007 required DOD to revise its acquisition regulation to require contracting officers to determine in writing that the offeror has submitted sufficient information to evaluate price reasonableness for commercial services that are not offered and sold competitively in substantial quantities in the commercial marketplace but are “of a type” offered and sold competitively in substantial quantities in the commercial marketplace. The act also specifies that DOD’s revised regulation shall ensure that the procedures applicable to T&M contracts for commercial services may be used only for services procured for support of a commercial item; emergency repair services; or any other commercial services, but only to the extent that the head of the agency approves a written determination by the contracting officer that the services to be acquired are commercial services; that the offeror has submitted sufficient information to evaluate the price reasonableness of the services, if they are not offered and sold competitively in substantial quantities in the commercial marketplace; that such services are commonly sold to the general public through use of T&M or labor-hour contracts; and that the use of a T&M or labor-hour contract type is in the best interest of the government. We did not assess DOD’s compliance with these provisions because they have not yet been implemented. Federal agencies have reported relatively limited use of T&M contracts and GSA schedule T&M orders to purchase commercial services, based on those obligations coded in FPDS-NG as using T&M contracts and orders under commercial item procedures. From February 12, 2007, when the FAR change that allowed T&M acquisitions for commercial services was implemented, to December 31, 2008, $4.4 billion—less than 1 percent of total federal obligations for services—was reported.
Figure 1 presents information on the total reported obligations for services (i.e., commercial and noncommercial) compared to obligations coded as (1) having acquired commercial services, (2) T&M contracts for services, and (3) T&M contracts for commercial services from February 12, 2007, to December 31, 2008. (Obligations coded as T&M contracts for commercial services totaled $4.4 billion, or about 0.5 percent of total service obligations.) The vast majority of the $4.4 billion in obligations coded as T&M for commercial services were for services actually acquired under GSA schedule contracts ($3.1 billion). The FPDS-NG user manual defines commercial item procedures as those that use FAR Part 12 acquisition procedures, but our analysis of FPDS-NG data showed that these orders had been issued through FAR Subpart 8.4, pertaining to GSA schedule contracts, and thus had been miscoded based on the definition in the user manual. Although our overall focus was on nonschedule T&M orders for commercial services, we identified additional obligations under T&M orders placed on GSA schedule contracts. From February 2007 to December 2008, approximately $6 billion of the $47.6 billion in obligations coded as T&M contracts were through the GSA schedule program, in addition to the $3.1 billion that had been miscoded as having used FAR Part 12. Thus, the full picture of the government’s use of T&M for commercial services for this time period was approximately $10.4 billion—about 90 percent of which was under GSA schedule contracts. Agencies reported purchasing a variety of commercial services using T&M contracts and orders during this time period. The top 10 types of commercial services reported as purchased using T&M contracts are shown in table 3. Our sample of 149 contracts and orders provides additional details on the variety of commercial services procured under T&M contracts. For example: The Army purchased patent legal services for inventions resulting from biomedical, chemical, and other research.
The Indian Health Service, within HHS, entered into a contract for emergency nursing services and inpatient nursing services for a healthcare center. The Navy contracted for repair services for a Navy vessel undergoing overhaul at the Norfolk Naval Shipyard. The VA purchased project management services for its MyHealtheVet Web site, which provides access to health information, tools, and services. The Federal Bureau of Investigation (FBI) purchased certified gunsmith services to repair and perform preventative maintenance on firearms. NASA entered into a contract for translation, interpretation, visa processing, and logistical support services. Maintaining accurate data is an essential component of good oversight and helps lead to informed decisions. In our sample of T&M contracts for commercial services, we found that the quality of the data reported in FPDS-NG was compromised in several ways. First, 28 of the 149 contracts and orders in our sample from October 1, 2001, to June 30, 2008, were incorrectly coded in FPDS-NG. Our review of the contract files revealed that 19 were coded as having acquired commercial services when they did not, and 10 were coded as T&M contracts when they were fixed-price, as shown in table 4. Several of the contracting officers we interviewed attributed these miscodings to errors made during input of data into the federal procurement data system. For example, the Air Force had planned to establish indefinite-delivery/indefinite-quantity contracts for advisory and assistance services using FAR Part 12 acquisition procedures for commercial services. However, because cost-reimbursement orders were contemplated under the contracts—which the FAR prohibits for commercial services—the Air Force decided not to award the contracts using FAR Part 12 acquisition procedures. Agency officials stated that the contracts were then mistakenly coded as having used acquisition of commercial item procedures. 
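The obligation totals above can be reconciled with simple arithmetic. This sketch uses the dollar figures reported in this section (in billions); the variable names are illustrative:

```python
# Figures (in billions of dollars) as reported in this section.
coded_tm_commercial = 4.4     # obligations coded as T&M under commercial item procedures
miscoded_schedule = 3.1       # portion of the 4.4 that was actually GSA schedule orders
additional_schedule_tm = 6.0  # further T&M schedule obligations identified separately

# Full picture of T&M use for commercial services, Feb. 2007-Dec. 2008:
# the 4.4 already includes the miscoded 3.1, so only the 6.0 is added.
total = coded_tm_commercial + additional_schedule_tm        # 10.4

# Share of that total flowing through GSA schedule contracts.
schedule_share = (miscoded_schedule + additional_schedule_tm) / total
# schedule_share is roughly 0.875 -- "about 90 percent" under GSA schedules
```

Note that the $3.1 billion is not added twice: it is a miscoded subset of the $4.4 billion, which is why the full picture is $10.4 billion rather than $13.5 billion.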
In addition, we found that T&M contracts for commercial services may be underreported based on a misunderstanding about contract type among contracting officials in most of the government agencies in our review. Some contracting officers incorrectly believed that the fixed labor rate component of T&M contracts renders them fixed-price. In fact, some contracts in our sample were referred to in the contract file as “firm fixed price labor hour,” a contract type that does not exist. Although labor rates are fixed under T&M contracts, the overall ceiling price is not a firm, fixed price because the contractor will be paid based on the number of hours worked (up to the ceiling price). Some contracting officers acknowledged having coded other similar contracts outside of those in our sample as fixed-price, thus potentially understating the use of T&M contracts and the associated risk to the government. Following are some examples that highlight contracting officials’ confusion about fixed-price versus labor-hour contracts (even though these contracts in our sample had been correctly coded as labor-hour). Contracting officers at HHS’s Indian Health Services stated that although a few of their contracts for medical professionals had been coded as labor-hour, these contracts were typical of the contracts they usually code as fixed-price. One contracting officer explained that if the hours are reasonably well known in advance—“shift labor,” for example—then the estimated hours written into the contract are considered fixed-price. However, another contracting officer explained that Indian Health Services pays contractors for actual hours worked, regardless of the estimate written into the contract. A contracting officer at HHS’s Program Support Center told us that a contract in our sample, for maintenance and repair services, had mistakenly been entered as a labor-hour contract in FPDS-NG.
He believed it should have been coded as fixed-price because the dollars obligated reflected a fixed hourly rate multiplied by the hours worked, but later conceded that the contract was actually a labor-hour contract. An FBI contracting officer maintained that a labor-hour contract in our sample, for gunsmith services, should have been coded as fixed-price because the labor rate was fixed. The contract purchases the services of one person to repair and maintain firearms for FBI training teams. Although the contract requires these services during “normal business hours” 5 days a week, it also allows the contractor to bill for preapproved overtime when necessary and includes a maximum number of hours to be billed on the contract. When we raised this confusion about contract type with officials from the Office of Federal Procurement Policy (OFPP), they agreed that clarification to the contracting community on what constitutes a fixed-price versus a labor-hour contract would be beneficial. We also spoke with contracting officers about how they generally define a service as commercial and found that individuals had different opinions about whether certain services are commercial, which may be contributing to issues with data reliability in FPDS-NG. Many contracting officers defined a commercial service as being readily available in the commercial marketplace. However, several officials told us that in certain cases, a service could reasonably be considered either commercial or noncommercial. For example, a DOD official stated that a contract for aircraft repair services could be considered either a commercial or noncommercial purchase depending on the contracting officer’s interpretation. On the other hand, Air Force officials we spoke with view aircraft maintenance—even on military aircraft—as predominantly commercial since aircraft mechanics are broadly available commercially.
Some contracting officers stated they would consider services that require specific knowledge of government requirements to be noncommercial. For example, a DOJ procurement policy official told us that although a contracting officer used FAR Part 12 commercial acquisition procedures to award a contract for technical services, including the installation of modules for DOJ’s financial management system (one of the T&M contracts in our sample), he did not consider the service to be commercial because it was specific to DOJ’s needs. He cited a contract for trash pick-up as an example of a commercial contract. In another example, a Navy contracting officer explained that although the majority of her purchases are for commercial items or services, if a purchase is completely exclusive to the Navy—such as for equipment used on submarines or Navy ships—she would consider it noncommercial. In addition, although all services available on the GSA schedule are described as commercial in the FAR, we found cases where agencies ordering these services did not consider them to be commercial. GSA officials confirmed that they consider everything under the schedules program to be commercial, even if items or services are slightly modified to meet specific requirements. However, they acknowledged that if significant modifications are made, the items ordered may be out of scope of the underlying GSA contract. The following are some examples from our review where agency officials used the GSA schedules program but considered the procurement to be noncommercial. At one Air Force location, contracting officers told us that they did not consider any of their seven GSA orders in our sample, such as an order for program management and technical support for the Air Force’s telecommunications monitoring and assessment program, to be commercial. 
They only discovered that these orders were being automatically coded in FPDS-NG as having used commercial procedures when we identified them in our sample for review. NASA had purchased environmental management and safety support services under a GSA schedule contract, but, according to NASA contracting officers, the actual services ordered were so technical and specialized that they did not consider them to be commercial services. They had used the GSA schedule primarily to identify qualified commercial vendors who could perform this specialized work. The Centers for Medicare and Medicaid Services (CMS) at HHS issued an order under a GSA schedule contract for the design and build of a knowledge management system for CMS’s Center for Beneficiary Services. According to the contracting officer, because the system was custom-designed for CMS, it is not commercial. Under FAR Part 12, T&M contracts or orders may be used to acquire commercial services if the contracting officer executes a D&F which sets forth sufficient facts and rationale to justify that no other contract type is suitable. At a minimum, the D&F must: 1. include a description of the market research conducted; 2. establish that it is not possible at the time of placing the contract or order to accurately estimate the extent or duration of the work or to anticipate costs with any reasonable degree of certainty; 3. establish that the requirement has been structured to maximize the use of fixed-price on future acquisitions for the same or similar requirements; and 4. describe actions planned to maximize the use of fixed-price contracts on future acquisitions for the same requirements. Of the 149 contracts and orders in our sample, 82 were subject to this D&F requirement. Of these 82 contracts and orders, only 5 had a FAR Part 12 D&F that addressed each required element. No D&F had been prepared for many of the contracts and orders. 
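A file review like the one described above amounts to checking each D&F against the four required elements. This hypothetical sketch (the element key names are illustrative paraphrases, not FAR terminology) classifies a D&F as complete, partial, or missing:

```python
# The four FAR Part 12 D&F elements, paraphrased from the requirement above.
# Key names are illustrative assumptions, not FAR language.
REQUIRED_ELEMENTS = frozenset({
    "market_research_description",      # 1. market research conducted
    "no_accurate_estimate_rationale",   # 2. extent/duration/cost not estimable
    "structured_for_fixed_price",       # 3. structured to maximize fixed price
    "future_fixed_price_actions",       # 4. planned actions for future buys
})

def dandf_status(elements_present):
    """Classify a contract file's Part 12 D&F by the elements it documents."""
    present = set(elements_present) & REQUIRED_ELEMENTS
    if present == REQUIRED_ELEMENTS:
        return "complete"
    if present:
        return "partial"
    return "missing"

dandf_status(REQUIRED_ELEMENTS)                # "complete"
dandf_status({"market_research_description"})  # "partial"
dandf_status(set())                            # "missing"
```

Under this framing, only 5 of the 82 files in the sample would have returned "complete"; the partial D&Fs discussed next typically lacked the first or fourth element.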
Further, for almost half of the contracts and orders, contracting officials had improperly used the less rigorous Part 16 D&F instead of the Part 12 D&F for commercial services. We found a general lack of awareness of the Part 12 D&F requirement at the agencies in our review. Many contracting officials, including some policy officials, across the agencies in our review were unfamiliar with this Part 12 safeguard. We raised this issue with officials from OFPP, who were concerned about the general lack of compliance with this key safeguard pertaining to T&M contracts for commercial services. Table 5 sets forth the breakdown of D&Fs for the 82 contracts and orders in our sample that were subject to the Part 12 D&F. In some cases, contracting officers had incorrectly concluded that a D&F was not necessary. For example, two contracting officers at the Navy told us that they did not complete a D&F because they did not believe contracts below the simplified acquisition threshold required a D&F—which is inconsistent with the FAR. In another instance, an Air Force contracting officer who had included Part 12 D&Fs in two contracts in our sample executed only a Part 16 D&F for a third contract because he believed that a Part 12 D&F was not required for a simplified acquisition. The nine D&Fs in our sample that had some but not all of the discrete elements required by FAR Part 12 typically omitted a description of the market research conducted or the actions planned to maximize use of fixed-price contracts for future acquisitions for the same or similar services. For example, one D&F for a DOJ contract for consulting services for the National Prison Rape Elimination Commission, awarded on a sole-source basis, included information on the services needed but did not describe the market research conducted.
The D&F states that neither the scope of work nor the contractor’s level of effort can be determined with a degree of accuracy necessary to develop a reliable cost estimate on which to base a fixed-price award. It further states that the work entails professional and other administrative services for which no reliable specifications exist, and the precise method of accomplishment cannot be established in advance. However, the D&F does not describe actions planned to maximize the use of fixed-price contracts on future acquisitions for the same requirements. The five FAR Part 12 D&Fs we found that addressed all the required elements included the rationale for a T&M contract and discussed how future requirements could potentially shift to a fixed-price contract. For example, in preparing a D&F for a Navy contract for the overhaul and repair of naval vessels, contracting officials not only described the market research, but thoroughly documented the market survey performed, including a description of applicable services provided by potential bidders in the marketplace. They also described how they would employ fixed pricing for stable labor expenses and monitor the volatility of other labor categories to determine if the services could be purchased on a fixed-price basis in the future. In another example, at HHS, a contracting officer completed a Part 12 D&F for a contract for less than 6 months of network administrative support services. The D&F stated that the market research had identified an 8(a) company to provide the services. It also explained that the requirement had been structured to maximize fixed pricing by limiting the period of performance and that there was no anticipated need for this service to continue in the future. In yet another example, at the Air Force, the contracting officer prepared a complete Part 12 D&F for a contract for intelligence support services that addressed all of the required elements. 
The D&F explained that a small business was identified as the best option for the procurement and described the outcome of the market research conducted. Further, the D&F stated that information obtained from the procurement would be used to develop fixed pricing for future procurements, which would be better defined and more concise. In addition to a more detailed D&F, the FAR requires the contracting officer to document that each change to the ceiling price of a T&M contract for commercial services is in the best interest of the procuring agency. In general, the contracts in our sample that were subject to the FAR Part 12 requirements did not have increases in the ceiling price. However, in the instances where an increase did occur, contracting officers did not always follow the FAR requirement. A contract at HHS for financial services management more than doubled in value over the original “estimated not-to-exceed” cost. No written justification was provided for why this increase was in the best interest of the procuring agency. The contracting officer stated that the not-to-exceed amount on the contract was only an estimate and had not identified a separate ceiling price, which FAR Part 12 requires. On the other hand, some contracts with ceiling price increases did include a description of why the increase was necessary. For example, we reviewed three orders at the Army for patent legal services that documented why ceiling price increases were necessary—essentially due to a change in the acquisition strategy for obtaining these services. After establishing a multiple award contract with 23 vendors, contractors were asked to submit proposals to complete ongoing work that, according to contracting officials, was previously purchased on government credit cards.
In one case, a task order increased from approximately $100,000 to $500,000 because the contractor had initially misunderstood the request for proposals and submitted a proposal for only a limited scope of work; it subsequently revised its proposal to address all of the Army’s stated requirements. In another example at the U.S. Marshals Service, the ceiling price on a contract for aircraft maintenance services increased from $250,000 to $400,000 through three successive modifications, and all the modifications included a detailed description of the need for additional funds. Clear guidance and training are needed to successfully introduce and implement changes to regulations. The DOD offices we visited were the only locations in our review that provided general training seminars or guidance on the changes to FAR Part 12 permitting the use of T&M contracts for commercial services, but none provided written guidance or training on the more detailed D&F requirement. Navy contracting officials recognized this omission during our visit and subsequently provided additional training to their contracting officials. Army officials told us that they had discussed the new D&F requirement in a meeting with contracting officers but had not issued any written guidance. None of the civilian agencies in our review had provided formal guidance or training to their contracting officers on the safeguards. Officials who were aware of the Part 12 safeguards frequently found out through their own initiative. For example, in our sample of 17 HHS contracts subject to the FAR Part 12 D&F requirement, 2 contained partial D&Fs and 1, issued by the Program Support Center, contained all of the D&F elements. The contracting officer responsible for the complete D&F indicated that he became aware of the D&F requirement through his own FAR research and had not received guidance from headquarters. 
The 2 partial Part 12 D&Fs were issued by another HHS component, the Food and Drug Administration. The head of contracting who signed these D&Fs said that she had also learned of the Part 12 D&F requirement by researching the FAR. At DOJ, officials at the Office of Justice Programs explained that they became aware of the FAR Part 12 D&F requirement through a paid subscription for updates to a contract checklist from an outside vendor. When awarding a contract for consulting services, a contracting officer from that office prepared a Part 12 D&F in the file, but it did not address all of the required elements. Several contracting officials at different agencies noted that their contracting staffs are overworked or inexperienced, which may have contributed to the general lack of awareness of the new D&F requirement. Internal controls, such as contract reviews, administered by informed agency personnel can also help ensure that policies and processes are translated into practice. In some cases, the contracts in our sample had been reviewed by staff, including legal officials, who did not detect that the required Part 12 D&Fs were missing. For example, while six of the eight contracts at the Air Force were reviewed by attorneys or contract management officials, five contract files still contained the incorrect Part 16 D&F rather than the Part 12 D&F for commercial acquisitions. At the Navy, one attorney reviewing a contract file identified the need to include the Part 12 D&F, but another attorney reviewing a different Navy contract failed to do so. In another example at NASA, an attorney and associate division chief had reviewed the contract and did not identify that the Part 12 D&F was missing, but the associate division chief did inquire as to whether part of the work could be fixed-price.
In other cases, contract reviews either failed to ensure that any D&F was included in the contract file or there was no evidence that reviews of the acquisition approach had occurred. Four of the five VA contracts we reviewed were subject to internal reviews by VA technical and legal staff based on factors such as value and contract type, yet none contained a D&F of any type. At the Army location we visited, there was no indication that the contracts’ acquisition approach had been reviewed, and most of the contracts in our sample contained the Part 16 D&F or had no D&F at all. However, this Army contracting activity updated its internal contract review checklist in December 2008, after our visit, to include a reference to the Part 12 D&F requirement. Our review of contract files and interviews with agency officials further revealed that awareness of the new D&F requirement even varied among the staff of a single contracting office. For example, three T&M contracts for commercial services were issued during a 6-month period by U.S. Marshals Service contracting officials for aircraft maintenance and pilot services in Puerto Rico. One contract file contained a partial Part 12 D&F, one contained a Part 16 D&F, which is less rigorous, and the third had no D&F. The vast majority of reported obligations for commercial services acquired through T&M contracts went through GSA’s schedules program from February 2007 to December 2008, but the FAR Part 12 D&F requirement has not been applied to the use of schedule contracts. The February 2007 revisions to FAR Part 12 did not specifically address the applicability of the D&F provisions to GSA schedule contracts or orders issued under them. Further, the section of the FAR that governs ordering procedures for GSA schedules contracts does not refer to the Part 12 D&F requirement to either make it explicitly applicable or inapplicable as it does with other FAR provisions. 
GSA has not incorporated the D&F requirement in its own acquisition manual, for use by its contracting officers, and has not instructed ordering agencies to comply with the Part 12 D&F requirement when issuing T&M orders under its schedule contracts. For example, the Part 12 D&F is not discussed in GSA’s ordering guidance for schedule contracts or in the frequently asked questions on the schedules program Web site. Accordingly, there is uncertainty in the contracting community about the extent to which the Part 12 D&F is required for schedule orders. Our file review revealed that only 2 of the 19 GSA orders awarded after the February 2007 FAR changes contained the Part 12 D&F. Eleven of the orders contained the less rigorous FAR Part 16 version, which would properly be used in conjunction with the purchase of noncommercial services using T&M contracts, and 6 had no D&F, as shown in table 6. Further, the FAR Part 12 requirement to document ceiling price changes on T&M contracts is not included in FAR Subpart 8.4, which pertains to schedule purchases. We found a few GSA orders at the VA location we visited that had ceiling price increases with no documentation on why the increase was in the best interest of the VA. For example, one order for information technology support services increased from $3.5 million to almost $4.8 million with minimal explanation as to why this increase occurred. GSA policy officials told us that the statutory authority that created the schedules program is unique and allows the administrator the flexibility to decide what procedures to apply to the schedules program. They noted, however, that they were planning to issue a procurement information notice in the spring of 2009 to put in place a Part 12 D&F for the entire GSA schedules program.
It is not clear how this D&F will address the specific elements required by Part 12 of the FAR, or how it will act as a safeguard to ensure that each agency using GSA’s schedule contracts has made the necessary determination that no other contract type is suitable. On March 6, 2009, we requested a legal opinion from GSA on the applicability of FASA section 8002(d), as amended by section 1432 of SARA, and the implementing FAR section 12.207(b) D&F requirement to the GSA schedules program. In its April 15, 2009, response, GSA stated that the statutory language of FASA is not explicit and is unclear regarding applicability of the FASA provisions to the GSA schedules program, and therefore concluded that applicability is uncertain with regard to T&M commercial services contracts and orders under the program. In this regard, GSA recognized congressional concerns expressed regarding the use of T&M contracts for commercial services, which in some cases have led to inefficient and costly procurements. Specifically, GSA recognized the concern of the Senate Armed Services Committee that T&M commercial services contracts “are potentially subject to abuse because . . . it [is] very difficult to ensure that prices are fair and reasonable.” GSA stated, however, that it “has exercised the agency’s authority over the Schedules program to create safeguards so as to mitigate the issues presented by T&M commercial services contracts” and that existing provisions in the GSA Acquisition Regulation (GSAR) and FAR Subpart 8.4 “satisfy any concerns about the use of T&M orders in the Schedules program.” It is not apparent to us that the regulations cited by GSA provide the government with risk mitigation equivalent to that provided by the Part 12 D&F requirement that T&M contracts will only be used when no other contract type is suitable.
For example, GSA points to the FAR Subpart 8.4 requirement for the ordering activity to document the rationale for using other than a firm-fixed price order for services. This documentation requirement is minimal, requiring only the “rationale” for using other than a firm-fixed price order rather than the more detailed rationale required in FAR Part 12 to demonstrate that there is no other suitable contract type. GSA also points to two existing price reasonableness requirements as safeguards: (1) the GSAR requirement that before a schedule contract is awarded, the GSA contracting officer must determine that the prices offered are fair and reasonable and (2) the FAR requirement that the ordering activity contracting officer must consider the level and mix of labor proposed and determine that the total price of the schedule order is reasonable. Again, these provisions do not address the more detailed rationale required in FAR Part 12. We see no reason why the concerns which led Congress to require the Part 12 safeguards for the use of T&M contracts would be any less compelling in those instances in which an agency proposes to use a GSA schedule to obtain commercial services on a T&M basis. GSA did not explain why T&M contracts and orders for commercial services should be treated differently under the GSA schedules program, or be subject to fewer safeguards than those purchased outside of the GSA schedules program, where the more heightened FAR section 12.207 requirements would apply. Further, we note that in section 8002(d) of FASA, as amended, there is no indication that the D&F requirement cannot apply to the purchase of any commercial item or service, including items or services available for purchase under the GSA schedules program. 
The FAR Part 12 D&F requirement for the use of T&M contracts to acquire commercial services helps to ensure that this contract type is used only when no other contract type is suitable and to instill discipline in the determination of contract type with a view toward managing the risk to the government. The general lack of awareness of this requirement among contracting officers across all agencies in our review—more than 2 years after its implementation—coupled with the failure of management to detect the lack of compliance with this key safeguard suggests that further actions are necessary. In addition, miscoding of labor-hour contracts as fixed-price, when based on a misunderstanding about this contract type, potentially understates the risk to the government. Further, the fact that the safeguards put in place by Congress are not applied to GSA schedule contracts or orders raises concerns that the safeguards are not being used for the vast majority of T&M contracts for commercial services. When these safeguards are not used, the government may be assuming more risk than necessary. To help ensure that the risks associated with T&M contracts are understood and that safeguards are followed, and to ensure consistency in the use of T&M contracts regardless of which part of the FAR authorizes their use, we recommend that the Administrator of the Office of Federal Procurement Policy take the following three actions: (1) amend FAR Subpart 16.6 (T&M, Labor-Hour and Letter Contracts) and FAR Subpart 16.2 (Fixed-Price Contracts) to make it clear that contracts with a fixed hourly rate and an estimated ceiling price are T&M or labor-hour contracts, not fixed-price-type contracts; (2) amend FAR Subpart 8.4 (pertaining to the GSA schedules program) to explicitly require the same safeguards for commercial T&M services—i.e., the FAR Part 12 D&F and the justification for changes to the ceiling price—that are required in FAR section 12.207; and (3) provide guidance to contracting officials on the requirements in FAR section 12.207 for the detailed D&F for T&M or labor-hour contracts for commercial services and encourage agencies to provide training regarding the D&F requirement.

We requested comments on a draft of this report from OFPP, NASA, HHS, GSA, DOD, VA, and DOJ. In oral comments on a draft of this report, OFPP’s Acting Administrator concurred with our recommendations. In written comments, included in appendix II, NASA stated that the report provides a balanced view of the issues. HHS also provided written comments. Although our recommendations were directed at OFPP, HHS stated that it agrees with them and outlined several steps it is taking to reinforce the need for its acquisition community to comply with requirements for T&M and other contract types. HHS’s comments are included in appendix III. In comments provided via e-mail, DOD’s Director, Defense Procurement and Acquisition Policy, concurred with our findings related to DOD contracts. The Director stated that DOD fully supports the objectives of promoting awareness and compliance with existing requirements related to the safeguards employed to ensure that T&M contracts are used only when justified. GSA, DOJ, and VA provided no comments. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, Veterans Affairs, and Health and Human Services; the Attorney General; and the Administrators of the General Services Administration, Office of Federal Procurement Policy, and NASA. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix IV. 
The objectives of this review were to assess (1) the extent to which agencies have reported using time-and-materials (T&M) contracts and General Services Administration (GSA) schedule T&M orders for commercial services and what they are acquiring using this contract type, (2) the degree to which agencies complied with the FAR Part 12 safeguards, and (3) the applicability of these safeguards to the GSA schedules program. To address these objectives, we identified through the Federal Procurement Data System-Next Generation (FPDS-NG) all reported T&M contracts and orders—including GSA schedule orders—that were coded as using commercial item acquisition procedures from October 1, 2001, to June 30, 2008. We then selected five federal departments and agencies to review—based primarily on their high-dollar obligations and high numbers of contract actions—which together represent 97 percent of total obligations coded as T&M contracts awarded using commercial item procedures for this time period: the Department of Defense (DOD), the Department of Health and Human Services (HHS), the Department of Justice (DOJ), the National Aeronautics and Space Administration (NASA), and the Department of Veterans Affairs (VA). While the focus of our engagement was non-GSA contracts awarded after the February 2007 changes to the FAR, we also reviewed some GSA orders and contracts awarded by selected defense and civilian agencies prior to the FAR changes to get a better understanding of the circumstances of those procurements—such as whether the contracts were miscoded. We corroborated contract file information by interviewing over 100 contracting and policy officials at all of the selected agencies. 
Within DOD, the locations we reviewed included the Air Force Intelligence, Surveillance, and Reconnaissance Agency, Lackland Air Force Base, San Antonio, Texas; the Air Combat Command Acquisition Management and Integration Center, Langley Air Force Base, Virginia (contracts at this location were awarded prior to the FAR change); and the Fleet and Industrial Supply Center, Norfolk, with sites in Norfolk, Virginia; Philadelphia, Pennsylvania; Portsmouth, New Hampshire; Millington, Tennessee; and Great Lakes, Illinois. At the Army’s Medical Research Acquisition Activity, we randomly selected 20 non-GSA schedule contracts awarded after the February 2007 FAR Part 12 change, 5 non-GSA contracts awarded prior to the FAR change, and 5 GSA schedule orders issued after the FAR change. Ten of the 20 non-GSA contracts were indefinite-delivery contracts and 2 were blanket purchase agreements. For these, we reviewed 13 T&M orders under the indefinite-delivery contracts and 3 orders that had been placed under 1 of the blanket purchase agreements. At the Navy, we reviewed all of the non-GSA schedule contracts awarded during our selected time period of October 1, 2001, to June 30, 2008, which included 20 contracts awarded after the FAR change and 5 awarded prior to the FAR change. We also reviewed 5 randomly selected GSA schedule orders that were awarded after the FAR change. At Lackland Air Force Base, we reviewed all non-GSA schedule contracts reported as T&M using commercial item acquisition procedures, including 1 awarded prior to the FAR change. We reviewed all 7 GSA schedule orders awarded after the FAR change that were reported as using T&M contracts for commercial services. At Langley Air Force Base, which had the largest obligations reported as T&M contracts for commercial services prior to the enactment of the Services Acquisition Reform Act in November 2003, we selected and reviewed 2 non-GSA T&M orders awarded prior to November 2003 that had been recently modified, to better understand the circumstances of these contracts. 
These 2 orders turned out to have been miscoded in FPDS-NG as having used commercial item acquisition procedures. At VA, we reviewed contracts and orders at the Cleveland Business Center, Cleveland, Ohio, and the Acquisition Management Section, Austin, Texas. Table 7 contains details about the distribution of our contract sample across the agencies in our review. To identify the extent to which agencies have reported using T&M contracts and GSA schedule orders for commercial services, we used FPDS-NG data to determine the obligations reported as T&M awarded using FAR Part 12 commercial item acquisition procedures between February 12, 2007, when the FAR change authorizing T&M contracts for commercial services went into effect, and December 31, 2008. We compared this figure to total reported federal obligations for services, obligations coded as having acquired commercial services, and obligations coded as T&M contracts and orders during the same time period in order to demonstrate the relative magnitude of T&M contracts for commercial services. We discovered that many GSA schedule orders for T&M services had been miscoded as having used FAR Part 12 procedures (when they had actually used procedures under FAR Subpart 8.4) and brought this issue to the attention of Office of Federal Procurement Policy officials. To determine the full picture of T&M obligations for commercial services, we identified GSA schedule T&M orders that had not been coded as having used commercial item procedures. We also used FPDS-NG data to assess what proportion of the total reported T&M contracts for commercial services was purchased through the GSA schedules program. To test the reliability of FPDS-NG data, we used information from the contract file and discussions with contracting officials. 
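The aggregation described above amounts to a filtering-and-summing pass over FPDS-NG contract actions, split by whether an order went through the GSA schedules program. The sketch below is illustrative only; the field names (`contract_type`, `commercial_procedures`, `is_gsa_schedule`, `obligation`) are hypothetical stand-ins for the actual FPDS-NG data elements, not their real names.

```python
from dataclasses import dataclass

@dataclass
class ContractAction:
    # Hypothetical stand-ins for FPDS-NG data elements
    contract_type: str           # e.g., "T&M", "FFP"
    commercial_procedures: bool  # coded as using FAR Part 12 procedures
    is_gsa_schedule: bool        # order placed under a GSA schedule contract
    obligation: float            # dollars obligated

def tm_commercial_obligations(actions):
    """Total obligations reported as T&M using commercial item procedures,
    split by whether the order went through the GSA schedules program."""
    gsa = non_gsa = 0.0
    for a in actions:
        if a.contract_type == "T&M" and a.commercial_procedures:
            if a.is_gsa_schedule:
                gsa += a.obligation
            else:
                non_gsa += a.obligation
    return gsa, non_gsa

def uncoded_gsa_tm_obligations(actions):
    """GSA schedule T&M orders not coded as commercial; these are tallied
    separately because of the miscoding issue noted above (orders that
    actually used FAR Subpart 8.4 procedures)."""
    return sum(a.obligation for a in actions
               if a.contract_type == "T&M"
               and a.is_gsa_schedule
               and not a.commercial_procedures)
```

The separate tally of uncoded GSA schedule orders mirrors the step in which the additional roughly $6 billion in T&M obligations for commercial services was identified beyond what the commercial-procedures code alone would show.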
We confirmed that a contract was used to acquire commercial services by reviewing the contract for relevant commercial clauses (52.212-4—Contract Terms and Conditions—Commercial Items) and other contract file documentation—such as the acquisition plan or the standard contract form for commercial item acquisitions (SF 1449)—that indicated that commercial services were purchased. In some cases, in which the evidence in the files was not sufficient to make this determination, we confirmed that commercial services were acquired by speaking with the contracting officer. To confirm that a contract was T&M, we reviewed relevant contract documentation such as contract line item numbers (CLINs) and invoices, spoke with contracting officers, and applied FAR descriptions of T&M or labor-hour contracts. To identify the types of services agencies are acquiring using T&M contracts for commercial services, we used FPDS-NG data to identify the top 10 commercial services purchased under T&M contracts from February 12, 2007, to December 31, 2008. We also analyzed the statements of work from selected contracts in our sample to provide more detailed examples of the types of services agencies are acquiring using these contracts. When we discovered that some contracting officers had mistakenly interpreted the fixed labor rate component of T&M contracts to mean that these contracts are fixed-price-type contracts, we decided to review a nonrepresentative sample of contracts labeled as fixed-price in FPDS-NG that were coded for the same types of services as the T&M contracts for commercial services identified in our sample. Using DOD’s electronic database, we conducted a preliminary review of 60 DOD contracts that had been coded as fixed-price contracts and selected 16 that possibly could have been T&M, based primarily on our interpretation of language in the contract that suggested that the contract was not fixed-price. 
To confirm whether these contracts were T&M, we spoke with contracting officials and requested additional contract documentation for 10 contracts at Lackland Air Force Base and 6 managed by the Fleet and Industrial Supply Center at Norfolk Naval Base. Of these 16 contracts, 3 were confirmed to be incorrectly coded as fixed-price in FPDS-NG due to data entry errors and should have been coded as T&M contracts. To determine the degree to which agencies’ use of FAR Part 12 to acquire T&M services complies with the safeguards as incorporated in the FAR, we reviewed the contract files for our sample contracts. Specifically, we assessed (1) whether the files contained a determination and findings (D&F) stating that no other contract type is suitable; (2) if applicable, the extent to which the D&F included FAR Part 12 or Part 16 requirements for T&M contracts; and (3) whether ceiling price increases included written documentation from the contracting officer that they were in the best interest of the procuring agency. We determined that a D&F met all the criteria if it made reference to FAR Section 12.207 and at least mentioned all of the required elements. For example, if a D&F stated the outcomes of the market research conducted but did not describe the research itself, we still gave credit for having addressed the requirement in FAR Section 12.207 to describe the market research conducted. A partial Part 12 D&F included some but not all of the four required elements. We also reviewed federal and agency-specific acquisition guidance and regulations. To determine the applicability of these safeguards to the GSA schedules program, we reviewed GSA’s ordering guidance to agencies and to its own contracting officers and interviewed GSA policy and legal officials. We also sent a letter on March 6, 2009, to GSA’s General Counsel seeking an opinion on the applicability of section 8002(d) of FASA, as amended, and FAR section 12.207 to GSA schedule contracts. 
We received a response on April 15, 2009. Finally, we reviewed relevant past GAO and Inspectors General reports on T&M contracts and commercial contracts for context. We conducted this performance audit from September 2008 to June 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Michele Mackin, Assistant Director; Nicholas Alexander; Keya Chateauneuf; and Tatiana Winger made key contributions to this report. Marie Ahearn, Arthur James, Jr., Julia Kennon, and Kenneth Patton also made contributions.
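The D&F compliance scoring used in the file review above (a full, partial, or missing Part 12 D&F) reduces to a set comparison against the four required elements. The following sketch illustrates that logic; the element labels are paraphrased placeholders, not the actual FAR 12.207 text.

```python
# The four elements a FAR 12.207 D&F must address, paraphrased here as
# illustrative labels (not the regulation's actual wording).
FAR_12_207_ELEMENTS = [
    "market research description",
    "work scope cannot be accurately estimated",
    "structured to maximize fixed-price use",
    "actions planned to maximize fixed-price use",
]

def classify_dnf(elements_addressed, required=FAR_12_207_ELEMENTS):
    """Mirror the review logic described above: a D&F that at least
    mentions every required element is 'full'; one that addresses some
    but not all is 'partial'; otherwise 'none'."""
    addressed = set(elements_addressed) & set(required)
    if addressed == set(required):
        return "full"
    if addressed:
        return "partial"
    return "none"
```

Under this scheme, merely stating the outcome of market research still counts as having addressed that element, consistent with the credit given during the review.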
Federal agencies have used time-and-materials (T&M) contracts to purchase billions of dollars in services. These contracts are risky because the government bears the risk of cost overruns. Effective February 2007, the Federal Acquisition Regulation (FAR) was revised, pursuant to a statutory change, to allow T&M contracts to be used to acquire commercial services under FAR Part 12, which uses a streamlined procurement process. Certain safeguards were included in FAR Part 12, including a requirement that contracting officers prepare a detailed determination and findings (D&F) that no other contract type is suitable. Based on a mandate to review the use of T&M contracts for commercial services, we assessed (1) agencies’ reported use of such contracts and what they acquired, (2) the degree to which agencies complied with the new safeguards, and (3) the applicability of the safeguards to General Services Administration (GSA) schedule contracts. GAO reviewed contracts and orders at DOD and civilian agencies and spoke with contracting officials. From February 2007 to December 2008, agencies reported using commercial item procedures under FAR Part 12 to buy a variety of services through T&M contracts; examples include emergency nursing services on Indian reservations and gunsmith services for the FBI. The reported value of these contracts was $4.4 billion—or less than 1 percent of the total federal dollars obligated for services during this period. Of the $4.4 billion, $3.1 billion had gone through GSA’s schedules program. GAO identified about another $6 billion, in addition to the $3.1 billion, in T&M obligations for commercial services under GSA schedule contracts. The reliability of the data reported as T&M contracts using FAR Part 12 procedures is questionable. Of the 149 contracts GAO reviewed, 28 had been miscoded as acquiring commercial services or as T&M contracts. 
Another issue that indicates a potential underreporting of T&M contracts for commercial services is that contracting officials across the agencies had the mistaken impression that the fixed labor rate in T&M contracts makes these contracts fixed-price. GAO raised this issue with officials from the Office of Federal Procurement Policy (OFPP)—chair of the Federal Acquisition Regulatory Council—who agreed that clarification on what constitutes a fixed-price versus a labor-hour contract would be beneficial. Further, GAO found that contracting officials had different opinions of what generally constitutes a commercial service. Some viewed services intended to meet a specific government requirement as noncommercial, while others viewed similar services as commercial. The Part 12 D&F was rarely used for the contracts GAO reviewed. The D&F must incorporate four elements, such as a description of the market research conducted. Of 82 contracts reviewed that were explicitly subject to this D&F requirement, only 5 included all the required elements, and 9 partially met the requirement. Of the remaining contracts, 33 had no D&F at all and 35 included the less stringent D&F applicable to noncommercial T&M services. GAO found a general lack of awareness of the Part 12 D&F requirement at the agencies in this review. Agencies’ internal management and legal reviews generally did not detect the failure to include the D&F. OFPP officials expressed concern about the lack of compliance with the D&F requirement. The Part 12 D&F requirement has not been applied to the GSA schedules program. GSA officials stated that the GSA Administrator has discretion about what procedures apply to the program. In a legal opinion to GAO on whether the statutory changes regarding T&M contracts for commercial services apply to the schedules program, GSA concluded that the applicability is uncertain but stated that existing regulations satisfy concerns about use of T&M under the schedules program. 
GAO notes that these regulations do not require the same level of detailed analysis as does the Part 12 D&F. Further, there is no indication that the statutory requirements cannot apply to items or services under the schedules program. GSA officials said they are in the process of developing a Part 12 D&F for the entire schedules program, but it is not clear how this D&F will act as a safeguard when T&M orders are used.
Over the past decade, DOD has increasingly relied on private sector contractors to provide a range of services, including management and information technology support. For example, DOD’s obligations on service contracts rose from $82.3 billion in fiscal year 1996 to $141.2 billion in fiscal year 2005 (see table 1). DOD committed 20 percent of its total service obligations in fiscal year 2005 for professional, administrative, and management support contracts. Overall, according to DOD, the amount obligated on service contracts exceeded the amount the department spent on supplies and equipment, including major weapon systems. The growth in service acquisition spending results, in part, from recent trends and changes within DOD’s acquisition environment, including the increased use of contracted services. For example, while spending on services has increased, DOD’s civilian workforce shrank by about 38 percent between fiscal years 1989 and 2002. DOD performed this downsizing without proactively shaping the civilian workforce to ensure that it had the specific skills and competencies needed to accomplish future DOD missions. In June 2006, DOD issued a human capital strategy that acknowledged that DOD’s civilian workforce is not balanced by age or experience. DOD further noted that a proposed reduction of an additional 55,000 personnel through fiscal year 2007, continuing increases in the number of retirement age employees, and the loss of experienced personnel and institutional knowledge could make it difficult to mentor its developing workforce. DOD’s strategy identified a number of steps planned over the next 2 years to more fully develop a long-term approach to managing its acquisition workforce. The increased use of service contracts is also partly attributable to DOD acquiring capabilities through different acquisition approaches, as well as needing to meet new requirements and demands. 
For example, DOD historically bought space launch vehicles, such as the Delta and Titan rockets, as products. Now, under the Evolved Expendable Launch Vehicle program, the Air Force purchases launch services using contractor-owned launch vehicles. Similarly, after the terrorist attacks on September 11, 2001, increased security requirements and the deployment of active duty and reserve personnel resulted in DOD having fewer military personnel to protect domestic installations. Consequently, the U.S. Army awarded contracts worth nearly $733 million to acquire contract guards at 57 installations. DOD has traditionally approached the acquisition of services differently than the acquisition of products. DOD and military department officials we interviewed noted that DOD generally views service acquisition as less risky than the acquisition of weapon systems, in part because many services are not tied directly to mission accomplishment and tend to be composed of far more numerous and lower dollar value contracts. DOD has long focused its attention, policies, and procedures on managing major weapon systems and typically does so using the cost of the weapon system as a proxy for risk. For example, DOD classifies its acquisition programs, including research and development efforts related to weapon systems and major automated information systems, in categories based upon estimated dollar value or designation as a special interest. The largest programs generally fall under the responsibility of the Under Secretary of Defense (Acquisition, Technology, and Logistics), while less complex and risky programs are overseen by the service or component acquisition executive. Overall, more than 25 percent of DOD’s annual budget is managed under this framework. For example, as of December 2005, DOD managed 85 major defense acquisition programs currently estimated to cost about $1.6 trillion combined over their program life. 
Conversely, we previously reported that DOD’s approach to buying services is largely fragmented and uncoordinated, as responsibility for acquiring services is spread among individual military commands, weapon system program offices, or functional units on military bases, with little visibility or control at the DOD or military department level. For example, we noted that DOD’s information systems could provide data on the amount spent on services, but the reliability of the information was questionable and the systems were seldom used as a tool to manage or identify opportunities for managing DOD’s supplier base. Procurement processes within DOD were not always carried out efficiently and effectively. There were few service contracting-related enterprisewide annual performance metrics, none of which measured the cost-effectiveness or quality of services obtained. Services differ from products in several respects and can be challenging when it comes to defining requirements, establishing measurable and performance-based outcomes, and assessing contractor performance. For example, it can easily take over 10 years to define requirements and develop a product like a weapon system before it can actually be delivered for field use. Individual service acquisitions generally proceed through requirements, solution, and delivery more rapidly. Further, delivery of services generally begins immediately or very shortly after the contract is finalized. In response to the National Defense Authorization Act for Fiscal Year 2002, DOD and the military departments established a service acquisition management structure, including processes at the headquarters level for reviewing individual, high-dollar acquisitions. 
In September 2003, we reported that this approach did not provide a departmentwide assessment of how spending for services could be more effective and recommended that DOD give greater attention to promoting a strategic orientation by setting performance goals for improvements and ensuring accountability for achieving those results. In its response, DOD concurred in principle and agreed that additional actions could strengthen the management structure as implemented, but also identified challenges for doing so based on organizational size, complexity, and acquisition environment. In January 2006, Congress again enacted legislation with specific requirements for managing the acquisition of services. Among other things, the legislation required DOD to identify the critical skills and competencies needed to carry out the procurement of services; develop a comprehensive strategy for recruitment, training, and deploying employees to meet the requirements for skills and competencies; establish contract services acquisition categories, based on dollar thresholds, for the purpose of establishing the level of review, decision authority, and applicable procedures; dedicate full-time commodity managers to coordinate the procurement of key categories of services; ensure that contract services are procured by means of procurement actions that are in the best interests of DOD and entered into and managed in compliance with applicable laws, regulations, directives, and requirements; ensure that competitive procedures and performance-based contracting are used to the maximum extent practicable; and monitor data and periodically collect spend analyses to ensure that funds allotted for the procurement of services are expended in the most rational and economical manner practicable. 
The requirements pertaining to establishing contract service acquisition categories were to be phased in over a period of 3 years, with the first categories, for acquisitions with an estimated value of $250 million or more, to be established by October 2006. At the conclusion of our review, DOD issued a policy memorandum aimed at strengthening service acquisition management in response to the legislation. DOD is to report on its implementation by January 2007. Several key factors are necessary to improve DOD’s service acquisition outcomes—that is, obtaining the right service, at the right price, in the right manner. Our work found that to do this, an organization must understand the volume, sources, portfolios, and trends related to what it is buying, then ensure that requirements are valid and understood, services are purchased properly, and performance is delivered with minimum risk and maximum efficiency. Success factors to achieve these goals can be defined at both the strategic and the transactional level, as shown in figure 1. The strategic level is where the enterprise sets the direction or vision for what it needs, captures the knowledge to enable more informed management decisions, ensures departmentwide goals and objectives are achieved, determines how to go about meeting those needs, and assesses the resources it has to achieve desired outcomes. The strategic level also sets the context for the transactional level, where the focus is on making sound decisions on individual transactions. Our work found that officials need to ensure that individual service transactions have valid and well-defined requirements and appropriate business arrangements, and that performance is being managed—again, while minimizing related risks and maximizing efficiency. A comprehensive approach would use the strategic and transactional factors in a complementary manner to tailor management activity to ensure preferred outcomes. 
Without this management attention, risks exist within each level that can impair an organization’s ability to get desired service acquisition outcomes. Our prior work with leading commercial firms found that a successful organization proactively identifies and manages outcomes of the services it acquires at a strategic, or enterprisewide, level. Effective service acquisition requires the leadership, processes, and information necessary for mitigating risks, leveraging buying power, and managing outcomes. Several factors are needed to implement a strategic approach, including (1) strong leadership to define and articulate a corporate vision, including specific goals and outcomes; (2) results-oriented communication and metrics; (3) defined responsibilities and associated support structures; and (4) increased knowledge of and focus on spending data and trends. See figure 2 for key factors to achieve a strategic approach to acquiring services. Our work found that organizations seeking to significantly improve service acquisition outcomes must begin with an established vision and commitment from senior management. This commitment can come in various ways, ranging from restructuring the corporate procurement function, to providing greater insight into and authority over the company’s service spending, to signaling support for a new way of doing business. With an articulated vision, leaders then have a basis for making commitments to factors important for realizing the desired end state, such as practices, procedures, structures, information, and human capital planning. Our work has shown that when corporate goals and expected outcomes are not defined, employees become less likely to accept new roles or understand the importance of upcoming changes that are necessary to reduce risks in service acquisition. 
These include allowing the sum total of individual transactions to define the strategy and not providing a context within which managers of individual transactions can make sound judgments about the risk and sensitivity of a particular service acquisition. Being able to define a strategic vision presupposes that leaders can determine and articulate a normative position for the future. A normative position would entail defining what end state or goals they want to achieve at a specified time. This position can then be translated into specifics, both in the aggregate and by type, such as the current volume, type, location, and trends of service acquisitions; the results the organization wants to achieve in a specified time frame; the definition of a good service acquisition outcome; and the characteristics of a service acquisition that make it desirable, undesirable, or sensitive. Critical to establishing a normative position is knowledge of current service acquisition expenditures, management priorities, and expected outcomes. The vision could also dictate which services represent risks to the organization. For example, acquisitions could be deemed low risk based on minimal cost exposure, high availability of service providers, or limited criticality for meeting mission requirements. Conversely, high-risk acquisitions may be those of higher dollar value, mission-critical requirements, services that are new or being acquired using a different approach, or any other services determined to need additional corporate-level involvement or oversight based on management priorities. Once a vision and desired end state for service acquisition have been defined, senior management must be both active and persistent in supporting ongoing efforts, adjusting the strategy to reflect new information, and moving toward the established normative position. 
Communication and metrics are important management ingredients for overcoming resistance, cultural barriers, and other impediments to achieving identified goals. Senior leaders also have the responsibility to communicate and demonstrate a commitment to sound practices deemed acceptable for the acquisition function. We have previously reported that DOD faces vulnerabilities in aspects of its senior leadership because of certain disconnects, including senior positions that have remained unfilled for long periods of time, the acquisition culture fostered by management’s tone at the top, and the management approach used in new industry partnering relationships. We have also noted the importance of leadership by senior agency officials to successfully transform other aspects of DOD’s business operations and those of other federal agencies. For example, our prior work has shown that DOD’s substantial financial and business management weaknesses adversely affect not only its ability to produce auditable financial information, but also its ability to provide accurate, complete, and timely information for DOD management and Congress to use in making informed decisions. We indicated that overcoming these weaknesses required sustained leadership at the highest level and a strategic and integrated plan. Metrics defining specified outcomes are vital to increasing the likelihood that changes to practices will successfully contribute to the organizational vision. While they can differ in nature and be used to varying degrees, metrics can be used to (1) evaluate and understand performance levels, (2) identify critical processes that require attention, (3) document results over time, and (4) report information to senior officials for decision-making purposes. To illustrate this, DOD spends 20 percent of its service dollars on professional and administrative management support contracts. 
If senior DOD officials believe that such volume poses risks, then DOD can use this information to establish targets to control and monitor the use of these services. For example, in March 2006 the Secretary of the Air Force issued a memorandum directing increased visibility and management of contract services in support of command functions, in an attempt to save over $6 billion that would then be used for other transformation initiatives. If the Air Force follows up by collecting timely data on the individual service transactions made in this area, it can see whether it is making progress toward its desired end state. For DOD, risks of not doing this at a strategic level entail losing momentum and failing to sustain positive change, and such failures can then be manifested in quick fixes, fire drills, or changes in policy statements that do not have a material effect on actual operations. Successful service acquisition management also requires attention to the organization’s ability to move from a fragmented manner of doing business to one that is more coordinated and strategically oriented. Primarily, this involves changing how services are acquired in terms of business processes, organizational structures, and roles and responsibilities. Our work with leading commercial firms found that typical changes in this area include restructuring acquisition organizations and elevating the procurement function to improve coordination with other internal organizations and optimize available resources; establishing new processes for routine tasks and using cross-functional teams made up of individuals with various skills to ensure the right mix of knowledge, technical expertise, and credibility; and establishing full-time, dedicated commodity managers to provide more effective management over key services. 
We reported in March 2005 that the Department of Homeland Security was pursuing similar approaches as it attempted to integrate the various acquisition functions it inherited upon its establishment in 2003. For example, the department designated a Chief Procurement Officer with broad responsibility for its acquisition function and established commodity councils composed of representatives from across the department that were assigned responsibility for assessing future purchasing strategies. We noted, however, that senior agency leadership needed to address a number of challenges before fully integrating its procurement function, such as clearly defining the roles and responsibilities of key offices, and establishing a structure to ensure continued support for commodity councils—such as appointing full-time commodity managers. In essence, this move toward a more strategic orientation can be compared to a franchise model of business versus that of individually owned stores or units. While franchises, like individually owned businesses, operate at the local level and adapt to the specific needs and demands of a community, they still must adhere to the consistent set of standards and processes of the parent organization. For service acquisition, this translates into recognition that while unique local requirements need to be understood and met, individual acquisitions should also be viewed in the context of organizational goals, objectives, and strategies. In this regard, company officials indicated they can tailor delivery of services to meet local needs while helping to achieve organizational cost savings or quality improvement objectives. Risks here are twofold. First, if a single, monolithic process is used for every service acquisition regardless of size, sensitivity, or type, it could be overkill for some transactions and insufficient for others. 
Second, allowing local buying activities to operate independent of organizational standards and processes would impair or defeat an organization’s ability to achieve desired aggregate goals or outcomes. Organizations also need basic, reliable data on how service dollars are being spent and the capabilities of the workforce in place to acquire and manage those services. Company officials who were successful in improving service acquisition management informed us that it was critical to define the relevant types of information that were required and then develop the appropriate data systems to collect and provide reliable spending data. Such data enable senior managers to know not only the current state of service acquisition, but also how far it is from the desired end state. While the type of information may vary depending on the organization and the types of services acquired, basic spend analysis data should include information and trends related to the type of services being acquired; the number of suppliers for a specific service the organization is using; the amount the organization is spending for that service, in total and with each supplier; and the units in the organization that are acquiring the services. We have previously reported that several civilian agencies have used this approach to leverage their buying power, reduce costs, and better manage suppliers of goods and services. For example, we reported in September 2004 that the Departments of Agriculture and Veterans Affairs, among others, had launched or expanded spend analysis efforts and in turn realized savings ranging from $1.8 million to $394 million on related acquisitions. Similarly, we noted in 2005 that the Department of Homeland Security identified 15 commodity areas as having the potential to leverage the department’s buying power. In fiscal year 2004, four commodity councils reported approximately $14.1 million in cost savings and avoidances. 
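The basic spend analysis elements described above (service type, suppliers, dollar amounts, and buying units) amount to a simple aggregation over transaction records. The sketch below is illustrative only: the field names, sample records, and dollar figures are hypothetical and are not drawn from any DOD or agency data system.

```python
from collections import defaultdict

def summarize_spend(transactions):
    """Aggregate hypothetical transaction records into the basic spend
    analysis views described in the report: total dollars per service
    type, distinct suppliers per type, and the buying units involved."""
    by_type = defaultdict(float)    # total dollars per service type
    suppliers = defaultdict(set)    # distinct suppliers per service type
    units = defaultdict(set)        # buying units acquiring each type
    for t in transactions:
        by_type[t["service_type"]] += t["amount"]
        suppliers[t["service_type"]].add(t["supplier"])
        units[t["service_type"]].add(t["unit"])
    return {
        stype: {
            "total": total,
            "supplier_count": len(suppliers[stype]),
            "buying_units": sorted(units[stype]),
        }
        for stype, total in by_type.items()
    }

# Hypothetical records (amounts in millions of dollars)
sample = [
    {"service_type": "IT support", "supplier": "A", "amount": 2.5, "unit": "Cmd1"},
    {"service_type": "IT support", "supplier": "B", "amount": 1.0, "unit": "Cmd2"},
    {"service_type": "Guard services", "supplier": "C", "amount": 4.0, "unit": "Cmd1"},
]
summary = summarize_spend(sample)
```

A summary of this kind would let a senior manager see, for example, that two buying units are acquiring IT support from two different suppliers, which is the type of visibility the agencies cited above used to leverage buying power.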
Some councils encountered difficulties due to a shortage of comprehensive data from which to draw an accurate and detailed picture of what was being spent on certain commodities over time, thereby preventing them from taking full advantage of their strategic sourcing and spend analysis efforts. Equally important is for an organization to have a workforce that is staffed at the appropriate levels and equipped with the right skills and abilities. To do this, a comprehensive, data-driven workforce analysis must be performed in conjunction with establishing the corporate vision and goals. An organization cannot fully understand what skills and staffing commitments are necessary at each organizational level to meet service acquisition requirements until it understands where it wants to go and how it plans to get there. Once information on spending and workforce capabilities is known and understood, organizations can be more strategic in planning and managing service acquisition. The absence of such data creates several risks, including not knowing how and where money is being spent on service acquisition or not having the appropriate workforce skills or staffing levels to ensure it is using sound buying practices. While the strategic level defines the direction and manner in which an organization pursues improvements in service acquisition, it is through individual service transactions that the strategy is implemented. Key factors at this transactional level include (1) clearly defined and valid requirements; (2) appropriate business arrangements; and (3) effective contractor management and oversight. In short, an organization needs to assure itself that on individual service transactions it is buying the right thing in the right way and that doing so results in the desired outcome. See figure 3 for key factors for effectively managing service acquisitions at the transaction level. 
Establishing a valid need and translating that into a service acquisition requirement is essential for obtaining the right outcome. Without this, an organization increases the risk that it will pay too much for the services provided, acquire services that do not meet its needs, or enter too quickly into a sensitive arrangement that exposes the organization to financial, performance, or other risks. Moreover, to establish accurate requirements, the customer organization would benefit by involving stakeholders that have knowledge about past transactions, current market capabilities and the potential supplier base, and budgetary and financial management issues. The makeup of stakeholders may vary across transactions depending on their nature, complexity, and risk. In the end, the purpose of involving stakeholders with varied knowledge and skills is to ensure at the earliest point possible that all aspects of the acquisition are necessary, executable, and tailored to the level of risk commensurate with the individual transaction. We have found that when DOD uses similar teaming concepts to develop and deliver products, the results have included superior outcomes within predicted time frames and budgets. For example, we reported in April 2001 that the Advanced Amphibious Assault Vehicle program used teams to reduce the time needed to make a design decision from 6 months to about a week. Because the nature of service contracts can vary, they naturally require different approaches in describing requirements. For example, the time, discipline, and sophistication of a team developing a requirement for repetitive building maintenance would be considerably less than that of a team developing a requirement for the first purchase of a space launch service. Observing these factors, tailored to the individual requirement at hand, can help to ensure that risks associated with a requirement for a service acquisition are fully considered before entering into a business arrangement. 
This is especially important for service acquisitions, because once requirements are developed, most transactions move very quickly into the business arrangement and contracting stages. Once a requirement has been validated and defined, it becomes necessary to develop an appropriate business arrangement to meet that need while protecting the government’s interests. Of course, without a sound requirement, the business arrangement could be relegated to buying the wrong service the right way. At a basic level, this includes defining a clear scope of expected contractor performance, developing an objective means to assess the contractor’s performance, ensuring effective contractor selection based on competition and sound pricing, and selecting an appropriate contracting vehicle. Here again, while these are performed with respect to the individual transaction, they must be done in the context of the organization’s strategic vision. As an organization undergoes the process for selecting those contractors that will provide services, there should be clearly established relationships among what tasks the contractor is expected to perform, the contract terms and conditions, and performance evaluation factors and incentives. This is especially true as federal agencies make adjustments to their acquisition practices. For example, in recent years, federal agencies have made a major shift in the way they buy services, turning increasingly to interagency contracts as a way to streamline the procurement process. In these cases, an agency can use an existing contract that has already been awarded by another agency, or turn to another agency to issue and administer task orders on its behalf, often for a fee. Requirements, roles, and responsibilities need to be clear to reduce risks. For example, we reported in July 2005 that DOD customers did not provide the awarding agency with detailed information about their needs. 
Without this information, these agencies did not translate DOD’s needs into well-defined contract requirements that contained criteria to determine whether the contractor had performed successfully. In the absence of well-defined outcomes, DOD and the agencies lacked criteria to provide effective contractor oversight. Similarly, competition during the acquisition process is also important in getting reasonable prices, as offerors put forth their best bid and solution for meeting the proposed requirements and the government receives the benefit of market forces on pricing. We have noted, however, that DOD has, at times, sacrificed the benefits of competition for expediency. For example, we noted in April 2006 that DOD awarded contracts for security guard services supporting 57 domestic bases, 46 of which were awarded on an authorized, sole-source basis. DOD awarded the sole-source contracts supporting the last 37 installations despite recognizing that it was paying about 25 percent more than it had previously paid for contracts awarded competitively. When proper management controls are not in place, particularly in an interagency fee-for-service contracting environment, too much emphasis can be placed on customer satisfaction and revenue generation rather than on compliance with sound contracting policy and required procedures, such as competition. Significant problems in the way contracting offices carry out responsibilities in issuing the orders for services may not be detected or addressed by management. 
For example, in April 2005 we reported that a lack of effective management controls—in particular insufficient management oversight and a lack of adequate training—led to the breakdowns in the issuance and administration of task orders for interrogation and other services in Iraq, including: issuing 10 out of 11 task orders that were beyond the scope of underlying contracts, in violation of competition rules; not complying with additional DOD competition requirements when issuing task orders for services on existing contracts; not properly justifying the decision to use interagency contracting; not complying with ordering procedures meant to ensure best value for the government; and inadequately monitoring contractor performance. Without appropriate attention, there is an increased risk that the government will pay too much for the purchased service, will be limited in its access to new and innovative alternatives, or will not be in the proper position to effectively manage the contractor after an arrangement is established. At the transactional level it is also important to implement a post-contract award process to effectively manage and assess contractor performance to ensure that the business arrangement is properly executed. Managing and assessing post-award performance entails various activities performed by government officials to ensure that the delivery of services meets the terms of the contract, including adequate surveillance resources, proper incentives, and a capable workforce for overseeing contractor activities. Each of these requires metrics and tools to encourage contractors to provide superior performance and to manage and document that the contractor’s performance was acceptable. For example, one important element of this phase is having a plan for assessing performance that outlines how services will be delivered. 
In addition, the plan should provide a mechanism for capturing and documenting performance information so it can serve as past performance information on future contracts. Effective use of such a plan can allow the government to evaluate the contractor’s success in meeting the specified contract requirements. Further, organizations can use monetary incentives, such as those provided through award and incentive fee contracts, to promote desired acquisition outcomes. Finally, quality assurance surveillance—oversight of the services being performed by the contractor—is important to ensure that contractors are providing timely and high-quality services and to help mitigate any contractor performance problems. In an environment that demands increased interaction between DOD and the contractor to ensure expected outcomes, acquisition personnel must be adequately trained to understand each of these elements and have the skills to manage service contractors accordingly. Without appropriate attention through contract completion, we have found that risks exist that could result in poor contractor performance, services not being delivered as expected, or payment to contractors for more than the value of the services they performed. For example, our March 2005 review of 90 contracts showed wide variance in the level of surveillance, including 15 contracts that had no personnel assigned at all for these responsibilities. According to DOD officials, this condition existed because surveillance was not as important to contracting officials as awarding contracts and contracting oversight personnel were not properly assigned, evaluated on the performance of their duties, or provided enough time to complete surveillance tasks. In the same way that the development of requirements for services must be different from the development of requirements for products, so is the case for overseeing contractor performance. 
Given that performance thresholds may vary greatly, management and oversight of individual service acquisitions may need to be tailored to meet specific requirements. In some cases, dollar value may not be a good proxy for determining risk. For example, some high-dollar contracts could pose relatively little risk to achieving the agency’s mission. Conversely, certain lower-dollar contracts, such as those used to obtain interrogation services in Iraq, may pose higher risk and, therefore, require greater management attention. DOD and the military departments have not yet fully addressed the key elements for managing service acquisition at a strategic or a transactional level. At the strategic level, DOD has not formed a normative position of where service acquisition needs to be and does not have the data necessary to know the state of service acquisition today. As a result, DOD is not in a position to determine whether investments in services are achieving their desired outcomes. These are precursors to defining and promoting improved outcomes. At the transactional level, most of DOD’s efforts have been aimed at improving business arrangements, without commensurate focus on how requirements are established and communicated or how service contracts are executed. Despite the implementation of a senior-level review process, buying commands and activities have not made significant changes to how they manage individual service acquisitions. DOD’s overall approach to managing service acquisition suffers from the absence of several key elements. DOD has not developed a strategic vision and lacks sustained commitment to manage service acquisition risks and foster more efficient outcomes. 
As a result, DOD is not in a position to communicate to its workforce how it intends to improve its acquisition of services; determine needed changes to structures and processes to better identify and prioritize risks; or understand the current state of service spending and the skills of its current workforce. While DOD’s current approach to managing service acquisition at the strategic level provides some additional insight into high-dollar value service acquisitions, it lacks an overall road map for managing risk and integrating key service acquisition initiatives. DOD has not yet identified the types and quantities of services it purchases; the outcomes needed in service acquisition so that necessary changes can be understood and evaluated; or metrics that can be used to assess whether those changes have actually achieved the expected outcomes. DOD and military department officials have acknowledged that DOD has not developed a comprehensive plan that targets areas needing improvements, coordinates ongoing and planned initiatives, and provides an overall road map to improve DOD’s management of services. In the absence of such a vision, DOD’s strategic level efforts do not position the department to proactively manage service acquisition outcomes, but rather relegate DOD to a reactive role in which the billions of dollars spent acquiring services simply reflect the sum total of individual actions. Further, DOD’s efforts to transform its enterprisewide business operations may not translate into improved knowledge on how services are acquired. For example, DOD established the Business Transformation Agency in October 2005 to lead and coordinate business transformation efforts across the department. 
The Business Transformation Agency is tasked primarily with modernizing key information technology systems and business processes intended to make reliable data more readily available while at the same time consolidating the overall number of information technology systems and ensuring consistency across the department. However, the Business Transformation Agency has few ongoing activities directly related to the acquisition of services. In addition, DOD has pursued few opportunities to leverage its buying power to acquire services through the use of strategic sourcing concepts. While DOD has undertaken a number of pilot efforts, only a limited number of these focused specifically on services. In 2006, DOD appointed the Assistant Deputy Under Secretary of Defense for Strategic Sourcing and Acquisition Processes to coordinate efforts and assist other DOD components, including the military departments and the Defense Acquisition University (DAU), as they develop strategic sourcing plans and training processes. The Assistant Deputy Under Secretary stated that initial efforts were focused on developing a concept of operations to facilitate this requirement, but so far had been limited by a lack of staff and resources. Further, he acknowledged that his office does not play a role in DOD’s service acquisition review process. In September 2006, a senior DOD official indicated that DOD was considering transferring this responsibility to the Office of the Director, Defense Procurement and Acquisition Policy. It is uncertain how this change, if implemented, would affect the roles and responsibilities previously assigned to the office. Because it lacks a strategic vision, DOD is not in a position to communicate how it intends to improve its approach to service acquisition. DOD’s primary policy for managing service acquisition came in the form of a memorandum issued in response to sections 801 and 802 of the Fiscal Year 2002 National Defense Authorization Act. 
That memorandum, issued in May 2002, noted DOD’s intent to move to a more strategic and integrated approach to the acquisition of services and the need to treat this area as seriously as it does that of hardware. Similarly, DOD and senior military department officials have testified on the need to improve service acquisition management within their departments. Nevertheless, our discussions with command and buying activity officials found that, while they recognize this need, their acquisition practices remain unchanged in the absence of specific guidance from DOD. As a result, senior DOD leadership’s call for change has had limited impact on acquisition practices at lower levels within the department. Further, one of the biggest obstacles to a more strategic approach to service acquisition is breaking down cultural barriers at different levels and across various functions of the acquisition process. In that regard, officials noted that the acquisition and contracting communities often do not have a shared vision for improving service acquisition or of their role in such a vision. For example, DOD has acknowledged that the use of performance-based service contracting techniques is generally perceived as a “contracting” initiative, with the rest of the acquisition community generally not fully participating or embracing the initiative. Consequently, DOD and military department officials indicate that without senior leadership and commitment, it is difficult to get support for changes in business practices within the acquisition community. As part of its May 2002 policy, DOD required the development of a review process for individual service acquisitions, established oversight thresholds, and specified which service acquisitions are to be reviewed. In addition, it required the military departments to establish a similar management review process. 
DOD officials noted in 2003 that this approach, combined with several other initiatives, was expected to have significant impact on the acquisition of services. The new management structure DOD implemented to address identified deficiencies associated with the management of services established three levels: (1) review by the Under Secretary of Defense (Acquisition, Technology, and Logistics) for service acquisitions valued over $2 billion; (2) review by the component or designated acquisition executive for service acquisitions valued between $500 million and $2 billion; and (3) review by a component-designated official for the acquisition of services valued at less than $500 million. In response to this guidance, the Air Force, Army, and Navy each developed individual service acquisition review processes and authorities to support the DOD review requirements and identified respective decision authorities responsible for conducting execution reviews to assess progress against metrics. DOD and military department officials with whom we spoke indicated that the review structure has provided the reviewing office with additional insight on high-dollar value service acquisitions. However, the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) lacked complete information on the number and scope of acquisitions of which it was notified and therefore could not give us a definitive response as to how many transactions were formally reviewed. Officials from that office provided a list of 19 service acquisitions that had been notified for review—9 Army and 2 Air Force acquisitions, in addition to 8 acquisitions from the Office of the Assistant Secretary of Defense (Networks and Information Integration), which are subject to review under the guidance for major automated information systems—but provided no additional information on the results of those reviews. 
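The three-tier review structure described above is, in effect, a simple threshold rule for routing an acquisition to a review authority. The following sketch is illustrative only: the dollar thresholds come from the policy as described, but the function name, tier labels, and the handling of exact boundary values are assumptions.

```python
def review_level(estimated_value):
    """Route a service acquisition's estimated dollar value to the review
    authority described in DOD's May 2002 policy (illustrative sketch)."""
    if estimated_value > 2_000_000_000:
        # Tier 1: valued over $2 billion
        return "Under Secretary of Defense (Acquisition, Technology, and Logistics)"
    if estimated_value >= 500_000_000:
        # Tier 2: between $500 million and $2 billion
        return "Component or designated acquisition executive"
    # Tier 3: less than $500 million
    return "Component-designated official"
```

Note that, as the report goes on to describe, an acquisition exceeding the top threshold is first reviewed at the military department level and only notified to the Under Secretary; this sketch captures the routing of review authority, not that notification workflow.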
Data provided by officials at the military department level indicated that through September 2005, 69 acquisitions—representing just under 3 percent of service obligations—had been reviewed by the Air Force, Army, and Navy under the new process (see table 2). While the DOD reviews to date under this process have provided some additional visibility over high-dollar value service acquisitions, the reviews tend to focus more on ensuring compliance with applicable statutes, regulations, and other requirements, rather than on imparting a vision or tailored method for strategically managing service acquisition. Senior DOD and military department officials noted that the process is generally not intended to review program or customer decisions made at lower levels within the department as to the need for the particular services or to have a post-contract award follow-up assessment to ensure expected outcomes. Also, the reviews have not positioned DOD to regularly identify opportunities to leverage buying power. Further, they noted that the reviews are largely perceived as a function and responsibility of the DOD contracting organization, rather than a shared responsibility of the entire acquisition community, to include the program office and other customers for services. Moreover, DOD’s policy does not require the Under Secretary of Defense (Acquisition, Technology, and Logistics) to actually review those acquisitions that exceed the $2 billion threshold. Service acquisitions that meet this threshold are first reviewed and approved at the military department level. In turn, the military departments notify the Under Secretary that a service acquisition exceeding the threshold is available for review. If there is no response within 10 days, the acquisition is allowed to proceed without further review. Additionally, at the military department level, most of the service acquisitions reviewed to date have been indefinite delivery/indefinite quantity contracts. 
There is no requirement to review individual task orders that are subsequently issued even if the value of the task order exceeds the review thresholds. We spoke to many officials at buying activities that had proposed service acquisitions for review under this process. For the most part, they did not believe the review significantly improved those acquisitions and noted very few examples of occasions when, as a result of review feedback, acquisition strategies were changed in a meaningful way. For example, the reviews tended to focus on compliance with applicable laws, regulations, and socioeconomic goals, such as small business participation or, in other words, that the business arrangements are proper—all of which are covered in the development of the acquisition strategy prior to the review. These officials indicated that the timing of the review process—which generally occurs well into the planning cycle—is too late to provide opportunities to influence the acquisition strategy. These officials told us that the reviews would be more beneficial if they were conducted earlier in the process, in conjunction with the program office or customer, and in the context of a more strategic approach to how best to meet the requirement, rather than as a secondary or tertiary review of the contract in an area where buying activities already have considerable experience and expertise. In addition, contracting officials at one buying command stated that reviewing officials often lack the resources or technical expertise to provide useful and valuable feedback. DOD’s ability to effectively manage service acquisition at either the strategic or the transactional level is hindered by the absence of reliable data on which to make informed decisions. 
DOD and military department officials acknowledge that the DOD contracting information systems available to the locations we visited do not provide information on forecasted demands for services; current and reliable information on what services are currently being procured; or data to assess whether these services are being acquired in line with cost, schedule and performance goals, or otherwise meeting customer needs. For its part, DOD has not identified the specific data it needs to better manage service acquisition outcomes or developed appropriate data systems that are essential for providing the information necessary for improving results. According to DOD documentation, there are thousands of individual information systems that have been implemented over decades to meet various mission needs, and rather than providing usable information, these systems can hinder collecting information needed by decision makers. Because these systems were developed independently—often not designed to be interoperable with other such systems—it is a challenge to share data with other locations or higher organizational levels in support of broader planning and decision making. Even collecting basic information on high-dollar services often proves time-consuming. For example, in April 2005, DOD initiated a formal review of its service acquisition policy. According to DOD officials, after determining that its data systems were inadequate to identify and assess the status of service acquisition policy compliance, the department initiated a data call to each of the military components asking for a status review of the top 20 service acquisitions in each military department since inception of the policy. This data call took more than 6 months to collect, review, and report basic contracting data. Further, the results did not provide the types of knowledge DOD had expected. 
Finally, DOD has acknowledged that it faces significant workforce challenges that, if not effectively addressed, could impair the responsiveness and quality of acquisition outcomes. In response, DOD is in the process of identifying the current skill sets and gaps of the acquisition personnel who are routinely engaged in acquiring services. For example, DOD’s 2006 Human Capital Strategic Plan noted that efforts are under way to develop a comprehensive competency model for each functional career field, including the technical tasks, knowledge, skills, abilities, and personal characteristics required of the acquisition workforce. Similarly, DAU officials noted that they are revising the training curriculum for acquisition personnel, in part to provide an increased emphasis on service acquisition. For example, DAU has an ongoing effort to identify the critical competencies for service acquisition, determine which of these competencies require further workforce training, and develop the appropriate training. DAU officials stated the new courses will initially be targeted at contracting personnel. In addition, DOD officials stated that these courses will be made available to noncontracting acquisition personnel only as time and resources permit. Our work found that at the transactional level, buying commands and activities have not significantly adjusted their acquisition practices since DOD implemented its new review structure. The current transactional-level approach does not always take the necessary steps to ensure customer needs are translated into well-defined contract requirements or that post-contract award activities result in expected outcomes. Instead, DOD service acquisition management activities focus primarily on awarding the contract. Without clearly defined requirements and attention after the contract is awarded, DOD cannot be sure it is buying the right service or using an appropriate means to assess contractor performance.
As a result, DOD is potentially exposed to a variety of risks, including buying things that do not fully meet customer needs or that should be provided in a different manner or with better results. DOD and military department officials consistently identified poor communication and the lack of timely interaction between acquisition and contracting personnel as key challenges to developing good requirements. These officials noted that developing well-defined requirements that are clearly articulated in outcome-based performance measures is difficult in and of itself, but the challenges can be reduced if both communities work together early in the process. Several officials identified actions they have initiated to improve working relationships, but acknowledged that results have not been uniformly achieved. In part, these issues arise from cultural differences between the contracting and acquisition communities concerning their roles in managing various service acquisition elements. Generally, the intended customer of a service, such as a program office, has the responsibility to identify what type of service it requires, the level of performance or quality needed, the period of performance, and the available budget. To avoid problems in the later stages of the acquisition process, and depending on the complexity of the services needed, early involvement of the contracting and other functional communities is important. However, contracting officers we spoke with frequently commented that the initial statements of work prepared by the customer were often insufficient, unclear, or not expressed in performance-based terms, requiring considerable rework. In addition, officials told us it is important that contracting officers fully understand exactly what the customer needs in order to get the best business arrangement for the government.
However, contracting officers did not always have the necessary knowledge or expertise to understand the requirement or to translate it into the statement of work. According to contracting officials, the resulting frustrations are heightened when customers identify the need to award a contract for the services in a short time period. Similarly, because services are generally funded on an annual basis, contracting officers are often faced with many pressures at the end of each fiscal year. For example, officials at the Navy Fleet and Industrial Supply Center in Philadelphia noted the impact of DOD’s recent policy requiring contracting officers to approve task orders with a value of $100,000 or more if they are issued against a non-DOD contract. While this does provide greater visibility in an area of previous concern, the officials we spoke to indicated that the policy was issued without first assessing the impact on the contracting workforce or whether the customer was fully aware of the new process requirements. Consequently, contracting personnel were faced with a significant increase in workload, much of it at the end of the fiscal year. As many of these contracting officials had not been involved with the original negotiation or award of the contract, and because of the short time frames needed to issue the orders, they felt pressured to review and approve task orders without being able to fully assess whether the overall approach was the most effective or efficient. The lack of technical knowledge and training was raised as an issue at several commands we visited. For example, at many locations, officials commented on the lack of contracting knowledge on the part of the customer.
One contracting manager told us he would be willing to pay for contracting-related training for customers, so that they could better understand how to prepare various contract documents, such as a performance-based statement of work or an award fee evaluation plan. Contracting officials told us that such documents can be difficult to prepare without sufficient planning and input from customers who are familiar with what needs to be accomplished. Similarly, contracting officers at one location told us that they sometimes have to alter acquisition approaches because it is too difficult to develop evaluation plans that can be used by customers to effectively evaluate contractor performance. Program officials also commented on the lack of technical knowledge on the part of the contracting community. One Air Force program official told us that he is required to use the general base contracting office to procure advanced medical services, even though the contracting officers usually do not have related technical knowledge and sometimes have difficulty understanding the requirement. Shared knowledge and communication are therefore important for ensuring that customers and contracting personnel are placed in the best position to achieve expected outcomes. Officials at some locations reported that better acquisition outcomes can result from establishing effective working partnerships. For example, Air Force Space Command officials noted that one of their major service acquisitions involves support for base operations in Thule, Greenland. The Air Force has relied on contractors to provide these services for more than four decades, and officials believe their experience on the program illustrates key aspects needed to promote a successful acquisition. These officials noted that the service is a high priority and receives considerable attention from senior management.
Additionally, the command employs a team-based approach to the acquisition, which means that personnel from both the customer and contracting communities are assigned and remain on the acquisition team throughout the development of the requirements, associated acquisition strategy, and contracting approach. Further, to help develop the requirements, the team receives considerable input from program personnel and the contracting officer’s technical representative as to the contractor’s performance, and makes use of monthly reports that measure key performance parameters. By including personnel from all stages of the acquisition process—program managers, contracting officers, and quality assurance personnel—Space Command officials believed they were able to make adjustments to their requirements that allowed the contract to be priced in a manner that reduced cost risk to the government. Command officials and those involved in the Thule service acquisition acknowledge, however, that they have not been able to consistently replicate this success on all other acquisitions. Because of the recognized need to improve communication and share knowledge between customers and the acquisition workforce, some of the buying commands we visited have taken actions to promote communication and timely interaction. For example, the Army’s Communications and Electronics Command colocated senior contracting officers with customers to promote better communication and more cooperation between the two communities. These staff members, referred to as customer service representatives, establish early lines of communication by participating in management meetings with the customers to identify future acquisition needs. The representatives use this knowledge to help senior management in the contracting organization identify resources and approaches to meet customer needs. 
Buying commands and activities we visited focused the majority of their attention on structuring business arrangements to ensure compliance with applicable laws and regulations. As a result, command officials generally indicated that their previous practices were already in line with requirements established under DOD’s review process and therefore remain largely unchanged. In some cases, the commands have established additional procedures to review specific areas of interest, such as the proposed use of an interagency contract, or to ensure that task orders under multiple award contracts comply with competition requirements. Despite these reviews, however, we identified examples of potentially poor contracting practices and of pressure to meet customer demands. For example:

- On one acquisition, an Army contracting officer issued a task order for a product that the contracting officer knew was outside the scope of the service contract. The contracting officer noted in an e-mail to the requestor that this deviation was allowed only because the customer needed the product quickly and cautioned that no such allowances would be granted in the future.
- The Navy has established Seaport-enhanced, a centralized electronic ordering system that competes and issues task orders for multiple customers for program management support contracts. The Navy instructed buying activities within its virtual system command structure to use Seaport-enhanced as the “mandatory method of choice” for these services. However, officials at one Navy buying command told us they plan to submit waivers to avoid using Seaport-enhanced to meet their customer’s preference for using particular contractors and to use time-and-materials contracts, neither of which would be possible using Seaport-enhanced.
- An Air Force contracting official noted that his office intended to award a number of contracts to local firms so that the firms, in turn, could have increased opportunities to provide services to other civilian and military organizations in the region, through marketing themselves as having been awarded a federal contract.

While these cases are anecdotal, they indicate that some contracting officers feel pressured to meet their customers’ needs and had pursued, or were considering, options that may not be in the best interests of the government. Commands and buying activities we reviewed generally had limited capabilities to assess the degree to which their service acquisitions were successful. For example, few of the commands or activities could provide us reliable or current information on the number of service acquisitions they managed, and others had not developed a means to consistently monitor or assess, at a command level, whether such acquisitions were meeting the performance objectives established in the contracts. Many command officials noted the difficulties in doing so, since service acquisitions involve a wide range of activities that necessitate different measures of quality or performance from one another or from acquisitions involving products or major weapon systems. These officials often noted that their measure of success is reflected in terms of customer satisfaction or the number of complaints received from customers. Command officials noted that cost, delivery, or schedule performance measures may not be as effective for service contracts as for products or weapon systems. In this regard, the officials noted that services are often 1-year efforts in which schedule performance provides limited insights. Similarly, these officials noted that many of their service contracts, which are often cost reimbursable or time-and-materials in nature, are funded on a quarterly basis and are limited by the amount of funds made available.
In these cases, program officials noted that if more funds are made available than expected, the customer may increase the number of staff or labor hours to be provided; conversely, funding reductions will be reflected in commensurate reductions in the number of staff or labor hours. In either case, measuring changes in cost or hours reflects the availability of funding rather than contractor performance. Command officials noted that their information systems generally do not provide a capability to assess service acquisition outcomes. Additionally, the ability to conduct oversight within DOD’s management structure is often constrained by resources and workforce availability. For example, Air Force documentation suggests that 90 percent of planning activity is focused on getting the contract awarded, leaving very little for contract administration and oversight. Further, DOD officials noted that organizations like the Defense Contract Management Agency do not perform the same level of surveillance functions for services as they do for products. The Defense Contract Management Agency generally assembles integrated program support teams to deliver support at prime contractor facilities, which in turn supply business and technical support and furnish program managers with insight into program execution at the prime contract level, as well as the major and critical subcontract tiers. Senior DOD officials told us that because services tend not to rise to the level of a program, the Defense Contract Management Agency does not always provide the resources to support those acquisitions. Rather, the contract administration task is often assigned to a local contracting officer technical representative. Just as commercial firms have reported positive changes after implementing management approaches that include strategic and transactional elements, there are also examples where DOD has had similar success.
DOD officials who report success stated that this can be achieved by paying attention to and addressing the risks inherent in each of the key elements in the service acquisition process—at both the strategic and the transactional levels. For example, the Air Force, in conjunction with the Army, developed a strategic approach to acquiring wireless services and generated projected savings of 30 percent annually for just one of its service providers. According to Air Force officials, the combination of sustained leadership support, good data, a supporting structure, and communication with and among customers mitigated risks and set the context for the transactional level to achieve good acquisition outcomes. The Air Force’s Chief Information Officer and the Deputy Assistant Secretary for Contracting actively participated in establishing and supporting the development of the Information Technology Commodity Council, which is responsible for managing information technology-related strategic sourcing initiatives. Clear roles and responsibilities were established within the supporting structure of the council, including appointing a single individual to lead the initiative; identifying stakeholders who were responsible for developing the requirements; and establishing a team to perform market research and obtain other necessary data. The team responsible for obtaining the data upon which to base its acquisition decisions consisted of Army and Navy officials. These officials worked together to perform a market analysis to understand the marketplace, develop a spend analysis to understand current expenditures, and forecast future demand to understand the needs of the military departments individually, and of DOD as a whole. As a result of the team’s efforts, data are now available to help any agency within DOD save money when it acquires wireless services. 
Over the past 10 years, DOD has seen large growth in the acquisition of services, to the point where the value of these acquisitions exceeds the value of major weapon systems. To a large extent, this growth has not been a managed outcome. Congress, concerned over these rapid increases, has directed DOD to take several actions to promote more oversight and discipline in service acquisition. DOD has taken action, but action has not necessarily equated to progress. At this point, DOD is not in a good position to say where service acquisition is today in terms of outcomes, where it wants service acquisition to be in the next few years, or how to get there. This makes it difficult to set the context within which individual organizations can make informed judgments on service acquisition transactions. Without this context, DOD will not be in a position to determine:

- the current volume, type, location, and trends of service acquisition;
- the results DOD wants to achieve in the next 3 to 5 years in each of these areas;
- the definition of a good service acquisition outcome;
- the characteristics of a service acquisition that make it desirable, undesirable, or sensitive;
- the risks that need to be managed at each stage of a transaction; and
- the conditions under which a transaction should be referred for review.

Given the diverse nature of services that DOD acquires, the multiple sources of risk, and the wide variety of organizations that are involved in individual acquisitions, until this basic information is available and understood, it may not be possible to develop an ideal departmentwide review process or organizational structure. For example, while setting dollar thresholds as a basis for reviewing an individual service acquisition is an improvement over no review, dollars are not always a good proxy for risk.
Moreover, when a service acquisition reaches the review stage, the requirement and proposed business arrangement are set and expediency becomes an issue as the contract is ready to be awarded. Ultimately, the majority of individual service acquisition decisions will be made by organizations at the local level. The people making these decisions will have to make judgments regarding the risks and soundness of requirements, sources, competition, contract types, and execution follow-up. Their decisions will also trigger which acquisitions receive higher level review. Of primary importance now is to provide a context for these organizations in which they can make tailored decisions and recommendations that remain consistent with DOD’s overarching views of risk, desirable outcomes, and direction for service acquisition. That context does not yet exist. The strategic and transactional elements presented in this report can ultimately provide such a context. While not new, these elements, when considered together, offer the prospect of a cohesive approach—a necessary precursor to developing specific solutions to improve service acquisition outcomes. DOD will then be able to evaluate outcomes against expected results, provide the basis for making course corrections, and ultimately make service acquisitions a managed outcome. We recommend the Secretary of Defense adopt a proactive approach to managing service acquisition that leverages strategic and transactional elements. 
Specifically, we recommend that the Secretary of Defense take the following six actions:

- establish a normative position of how and where service acquisition dollars are currently and will be spent (including volume, type, and trends);
- determine areas of specific risk that are inherent in acquiring services and that should be managed with greater attention (including those areas considered sensitive or undesirable in terms of quantity or performance);
- on the basis of the above, clearly identify and communicate what service acquisition management improvements are necessary and the goals and timelines for completion;
- ensure that decisions on individual transactions are consistent with DOD’s strategic goals and objectives;
- ensure that requirements for individual service transactions are based on input from key stakeholders; and
- provide a capability to determine whether service acquisitions are meeting their cost, schedule, and performance objectives.

DOD provided written comments on a draft of this report. DOD concurred with each of our recommendations and identified actions it has taken or plans to take to address them. These comments are reprinted in appendix II. As part of its comments, DOD provided its October 2006 policy memorandum that implements Section 812 of the National Defense Authorization Act for Fiscal Year 2006. We did not reprint the policy as it is publicly available through DOD’s acquisition website (www.acq.osd.mil/dpap/). DOD also provided technical comments, which we have incorporated as appropriate. DOD agreed that a more coordinated, integrated, and strategic approach for acquiring services is needed. In particular, DOD noted that it is reassessing its strategic approach to acquiring services, including examining the types and kinds of services it acquires and developing an integrated assessment of how best to acquire such services.
DOD expects this assessment will result in a comprehensive, departmentwide architecture for acquiring services that will, among other improvements, help refine the process to develop requirements, ensure that individual transactions are consistent with DOD’s strategic goals and initiatives, and provide a capability to assess whether service acquisitions are meeting their cost, schedule, and performance objectives. DOD expects its assessment will be completed in early 2007. DOD also noted that it has undertaken a number of initiatives to address specific issues associated with acquiring services. For example, DOD noted that its October 2006 policy will modify certain aspects of the current management structure, including providing lower dollar thresholds for reviewing proposed service acquisitions and requiring senior DOD officials to annually review whether service contracts are meeting established cost, schedule, and performance objectives. Further, DOD noted that it had made organizational changes to improve its strategic sourcing efforts; it is assessing the skills and competencies needed by its workforce to acquire services; and the military departments and defense agencies are currently conducting self-assessments intended to address contract management issues we identified in our January 2005 high-risk report. While these efforts are steps in the right direction, they appear to be primarily incremental improvements to DOD’s current approach to acquiring services. Our discussions with DOD officials indicate that the architecture being developed may hold the potential for making the more fundamental changes at the strategic and transactional levels that we have recommended.
We have identified a number of elements that are needed at each of these levels, such as clearly articulating where DOD wants service acquisition to be in the next few years, setting the context for making informed and tailored decisions at the transactional level, and ensuring that requirements are well-defined and consistent with DOD’s strategic objectives. The extent to which DOD successfully integrates these elements as it develops and implements its new architecture will be the key to fostering the appropriate attention and action needed to make service acquisitions a managed outcome. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or francisp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are listed in appendix III. To identify the key factors needed to improve service acquisition, we drew heavily on our prior work in this and other areas. We primarily used our January 2002 report that identified how leading commercial companies took a strategic approach to acquiring services. In addition, we reviewed previous GAO reports related to overall contract management and those related to individual service contract transactions, specifically in areas such as business transformation, interagency contracting, strategic sourcing, and contract surveillance.
To confirm and validate key factors, we held detailed discussions with relevant defense contracting experts in the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Office of the Assistant Secretary of Defense (Networks and Information Integration); the Office of the Deputy Undersecretary of Defense (Business Transformation); the Office of the Assistant Secretary of Defense (Health Affairs); and the Defense Acquisition University. In addition, we spoke with various offices within each of the military departments, including the Air Force Program Executive Office (Combat and Mission Support); the Deputy Assistant Secretary of the Army (Policy and Procurement); the Assistant Secretary of the Navy (Research, Development, and Acquisition); and the Air Force Information Technology Commodity Council. To assess the extent to which the Department of Defense (DOD) approach, including its current management structure, exhibited these characteristics, we reviewed relevant DOD guidance and policy memoranda, including those that established its current management structure and review processes. We interviewed officials responsible for implementing the management structure within the Office of the Secretary of Defense, as well as each of the offices mentioned above. Discussions with these officials focused on DOD and military department service acquisition approaches, including those put in place in response to sections 801 and 802 of the National Defense Authorization Act for Fiscal Year 2002, and on other service contracting issues and initiatives. Data collected included the number of service contract management reviews that have occurred under the newly implemented structure as well as the composition and purpose of those review boards. We obtained information on the types of activities and reviews conducted by these offices, including the number and value of service acquisitions they reviewed.
To document and analyze the processes by which individual military commands and buying activities acquired services, we visited 20 locations. We selected these locations based both on recommendations from DOD officials and on our own internal knowledge. While our selection of locations cannot be generalized to the population of all DOD contracting locations, those selected represented each of the military services and a range of DOD service types. Locations visited include:

- Air Force Space Command, Colorado Springs, Colorado;
- Air Force 21st Space Wing, Peterson Air Force Base, Colorado Springs, Colorado;
- Air Force 50th Space Wing, Schriever Air Force Base, Colorado;
- Air Force 460th Space Wing, Buckley Air Force Base, Aurora, Colorado;
- United States Air Force Academy, Colorado Springs, Colorado;
- Army Contracting Center of Excellence, Arlington, Virginia;
- Army Contracting Agency, Fort Carson, Colorado Springs, Colorado;
- Army Communications and Electronics Command, Fort Monmouth, New Jersey;
- Army Information Technology, E-Commerce and Commercial Contracting Center, Alexandria, Virginia;
- Surface Deployment and Distribution Command, Alexandria, Virginia;
- Naval Sea Systems Command, Washington Navy Yard, District of Columbia;
- Naval Supply Systems Command, Mechanicsburg, Pennsylvania;
- Navy Fleet and Industrial Supply Center, Philadelphia, Pennsylvania;
- Navy Fleet and Industrial Supply Center, Norfolk, Virginia;
- Navy Fleet and Industrial Supply Center, San Diego, California;
- Naval Personnel Development Command, Norfolk, Virginia;
- Navy Space and Warfare Command, San Diego, California;
- Naval Education and Training Command (NETC), Pensacola Naval Air Station, Florida;
- NETC Professional Development and Technology Center, Pensacola, Florida; and
- TRICARE Management Activity, Acquisition Management and Support, Aurora, Colorado.

We conducted field observations at these locations and distributed a structured set of questions to solicit information from contracting officials designated to respond to our inquiry.
Discussions with these officials focused on the level and type of contracting activity and on service acquisition management approaches. These discussions centered on topics such as service acquisition culture and relationships between contracting personnel and customers (including program office and user personnel), changes in policy and practice under the new review structure, challenges faced in acquiring services, efforts to improve individual service acquisitions, databases for capturing service contract data, performance-based contracting, strategic sourcing, and other service acquisition initiatives. We discussed the review processes for selected service acquisitions (as determined by buying activity officials), including those that met criteria for review by either the Office of the Secretary of Defense or the military departments and, for comparison, those that were not subject to review at these levels. We reviewed contract file documentation to examine standard processes, review authority, requirements determination, and risk management activity. Information collected included contract solicitations, acquisition strategies, status reports, performance certifications, review and approval checklists, and other contract-specific documents. In addition to the contact named above, Tim DiNapoli, Assistant Director; Brian Mullins; Christina Cromley Bruner; Whitney Havens; Moshe Schwartz; Andrew Redd; Julia Kennon; and John Krump made key contributions to this report.
Department of Defense (DOD) obligations for service contracts rose from $82.3 billion in fiscal year 1996 to $141.2 billion in fiscal year 2005. DOD is becoming increasingly reliant on the private sector to provide a wide range of services, including those for critical information technology and mission support. DOD must maximize its return on investment and provide the warfighter with needed capabilities and support at the best value for the taxpayer. GAO examined DOD's approach to managing services in order to (1) identify the key factors DOD should emphasize to improve its management of services and (2) assess the extent to which DOD's current approach exhibited these factors. Several key factors are necessary to improve DOD's service acquisition outcomes--that is, obtaining the right service, at the right price, in the right manner. These factors can be found at both the strategic and the transactional levels and should be used together as a comprehensive but tailored approach to managing service acquisition outcomes. At the strategic level, key success factors include (1) strong leadership that defines a corporate vision and normative goals; (2) sustained, results-oriented communication and metrics; (3) defined responsibilities and associated support structures; and (4) increased knowledge of and focus on spending and data trends. The strategic level also sets the context for the transactional level, where the focus is on making sound decisions on individual transactions. Success factors at this level include having (1) valid and well-defined requirements; (2) properly structured business arrangements; and (3) proactively managed outcomes. DOD's current approach to managing service acquisition has tended to be reactive and has not fully addressed the key factors for success at either the strategic or transactional level.
At the strategic level, DOD has yet to set the direction or vision for what it needs, determine how to go about meeting those needs, capture the knowledge to enable more informed decisions, or assess the resources it has to ensure departmentwide goals and objectives are achieved. For example, despite implementing a review structure aimed at increasing insight into service transactions, DOD is not able to determine which or how many transactions have actually been reviewed. The military departments, while having some increased visibility, have only reviewed proposed acquisitions accounting for less than 3 percent of dollars obligated for services in fiscal year 2005 and are in a poor position to regularly identify opportunities to leverage buying power or otherwise change existing practices. Actions at the transactional level continue to focus primarily on awarding contracts and do not always ensure that user needs are translated into well-defined requirements or that post-contract award activities result in expected performance.
The results of our investigation serve to emphasize the overall lesson that a complete fraud prevention framework is necessary to minimize fraud, waste, and abuse within the SDVOSB program. The most effective and efficient part of the framework involves instituting rigorous controls at the beginning of the process for becoming eligible to bid on SDVOSB contracts. Specifically, controls that validate firms’ eligibility, including ownership and control by one or more service-disabled veterans, are the first and most important control. Next, active and continual monitoring of contractors performing SDVOSB contracts is also essential. Given the numerous examples we identified of firms owned by a service-disabled veteran who subcontracted 100 percent of contract work to non-SDVOSB firms, it is essential that program officials monitor compliance with program rules after contract performance has begun. Finally, as shown in our investigation, preventive and monitoring controls are not effective unless identified abusers are aggressively prosecuted or face other consequences such as suspension, debarment, or termination of contracts and future contract options. The examples we identified of cases where SBA found that a firm had misrepresented its eligibility for the SDVOSB program but failed to penalize the firm undermine the positive effects of the few controls currently in place. Figure 1 provides an overview of how preventive controls serve as the first and most important part of the framework because they are designed to screen out ineligible firms before they receive service-disabled sole-source or set-aside contracts. Monitoring controls and prosecution or other consequences also help minimize the extent to which a program is vulnerable to fraud. Preventive controls are a key element of an effective fraud prevention framework and are also described in the Standards for Internal Control in the Federal Government.
Preventive controls are especially important because they limit access to program resources through front-end controls. Our experience shows that once contracts are awarded and money is disbursed to ineligible SDVOSB contractors, it is unlikely that any money will be recovered or even that the contract will be terminated. Preventive controls for the SDVOSB program should, at a minimum, be designed to verify that a firm seeking SDVOSB status is eligible for the program. However, during our investigation, we found that there are no governmentwide controls that verify whether firms that self-certify as SDVOSBs meet program requirements. VA performs some level of validation of contractors claiming to be SDVOSBs that bid on VA contracts, but even that process was based primarily on a review of self-reported data. The key to the validation process within the SDVOSB program must be verifying self-reported contractor data against independent third-party sources. Key data to validate with preventive controls include whether the owner or owners are service-disabled veterans, whether the service-disabled veteran owner(s) manage and control daily operations, and whether the business qualifies as a small business under the primary NAICS industry-size standard for the SDVOSB contract awarded. Validation of whether a business owner is a service-disabled veteran must be the first step in the SDVOSB prevention framework. Coordination between VA, SBA, and potentially DOD will be necessary to ensure an accurate determination is made. VA already maintains a database of service-disabled veterans, and therefore it appears that the data necessary for this validation are already available. However, during our investigation, we found that 1 of the 10 firms we investigated was owned by an individual who was not a service-disabled veteran but who received more than $7.5 million in Federal Emergency Management Agency (FEMA) contracts.
This firm is a prime example of why the relatively simple process of validating an individual’s status as a service-disabled veteran can prevent fraud within the SDVOSB program. In addition to validating firm owners’ status as service-disabled veterans, preventive controls should also validate whether firm owners actually manage and control daily operations. This is necessary to prevent “rent-a-vet” situations, in which a firm finds a willing service-disabled veteran to pose as the “owner” while, in reality, other ineligible firm members manage and control the daily operations of the business. In one case uncovered during our investigation, the service-disabled veteran owner played no part in business operations related to the primary government contracts won by the firm and worked from home on non-government-related contracts. The alleged owner also did not receive any salary from the firm, and tax returns showed that he received less in Internal Revenue Service (IRS) 1099 distributions than the 10 percent minority owner. To identify these types of situations, controls must utilize a variety of tools, including a review of independent third-party information such as individual and company tax returns obtained directly from the IRS. Other processes, such as unannounced site visits to an applicant’s place of business, can provide evidence indicating whether the service-disabled veteran manages and controls daily operations, whether the firm is a shell company operating from a mailbox address or a legitimate firm with employees and assets, and whether the firm is co-located with a non-SDVOSB firm that will likely perform all contract work. In our previous work, we used unannounced site visits when conducting our investigations of the 10 firms that, through various fraudulent schemes, obtained $100 million in service-disabled sole-source and set-aside contracts.
Verification of whether a firm meets NAICS industry-size standards is another preventive control that can help minimize fraud and abuse within the program. During our investigation, we found that one company had violated small business size standards and received more than $171 million in federal contracts between fiscal years 2003 and 2009. We were able to identify the company’s information through a review of contract obligation information within the Federal Procurement Data System-Next Generation (FPDS-NG). FPDS-NG is a publicly available database that allows a user to search for federal contracts awarded to specific firms. As part of comprehensive preventive controls, a review of these types of databases, as well as company IRS tax returns, will provide information to ensure a prospective SDVOSB firm is not already a large business. Beyond validation of data and checks with independent third parties, it is also important that personnel performing the validation of a firm’s SDVOSB status are well trained and aware of the potential for fraud. Fraud awareness training for frontline personnel is crucial to stopping fraud before it gains access to the program. Additionally, when implementing any new set of controls, it is important that agencies field test the controls and provide a safety net for firms that believe they were inappropriately rejected from the SDVOSB program. Finally, a properly managed and staffed prevention program should not create a large backlog of legitimate firms awaiting certification. Unfortunately, as GAO testified at the end of April, VA’s certification program has a large backlog of businesses awaiting site visits, and some higher-risk businesses have been verified months before their site visits occurred or were scheduled to occur. Verifying businesses prior to site visits may allow ineligible firms to appear eligible and to receive SDVOSB set-aside and sole-source contracts.
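The front-end eligibility checks described above can be illustrated with a minimal sketch. The veteran registry, the NAICS receipts cap, and the function shown here are hypothetical stand-ins, not actual VA, IRS, or SBA data sources; real validation would query those systems directly.

```python
# Hedged sketch of the preventive eligibility checks described above.
# VETERAN_REGISTRY and SIZE_STANDARDS are illustrative stand-ins for
# VA's service-disabled veteran database and SBA's NAICS size standards.
VETERAN_REGISTRY = {"J. Smith"}
SIZE_STANDARDS = {"236220": 36_500_000}  # NAICS code -> annual receipts cap (assumed)

def validate_sdvosb(owner, owner_controls_daily_ops, naics, annual_receipts):
    """Apply the three preventive checks: veteran status, management and
    control of daily operations, and small-business size."""
    checks = {
        "owner_is_service_disabled_veteran": owner in VETERAN_REGISTRY,
        "owner_controls_daily_operations": owner_controls_daily_ops,
        "meets_size_standard": annual_receipts <= SIZE_STANDARDS.get(naics, 0),
    }
    return all(checks.values()), checks

eligible, detail = validate_sdvosb("J. Smith", True, "236220", 12_000_000)
# A "rent-a-vet" firm would fail the daily-operations check even if its
# nominal owner appears in the registry.
```

The point of returning the per-check detail, rather than a single pass/fail flag, is that a firm failing only the daily-operations check is the "rent-a-vet" pattern described above and warrants a site visit rather than a simple rejection.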
Even with effective preventive controls, there is substantial residual risk that firms that initially appeared to meet SDVOSB program requirements will violate program rules after being awarded SDVOSB contracts. Monitoring and detection are not as efficient or effective as prevention because once contractors are in the program and fraudulently receive an SDVOSB sole-source or set-aside contract, there are few if any consequences if they are caught. Detection and monitoring efforts, which are addressed in the Standards for Internal Control in the Federal Government, include data-mining of transactions and other reviews. Our investigation found cases where firms may have initially been able to meet the program’s eligibility criteria but subsequently violated the program’s subcontracting rules by subcontracting 100 percent of the SDVOSB contract work to a non-SDVOSB firm. Our findings therefore emphasize why it is important for a comprehensive fraud prevention framework to have detection and monitoring controls in place to identify violations. For the SDVOSB program, several areas require periodic review, including monitoring of a firm’s compliance with industry-size standards and monitoring of the performance of the required percentage of work on SDVOSB contracts. To confirm that an SDVOSB firm continues to comply with NAICS size standards, agencies should periodically data-mine FPDS-NG and other relevant federal procurement data to determine the number and size of contracts awarded and funds obligated to SDVOSB firms. A thorough review of these data is important so that all contracts awarded to a firm or its joint ventures are identified. During our investigation, we found one firm that received more than $171 million in federal funds through more than five different joint ventures. This example shows why data-mining efforts must be creative and thorough in order to effectively prevent fraud.
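The rollup described above, aggregating obligations across a firm and its joint ventures, can be sketched as follows. The award records, field names, and size threshold are illustrative assumptions for this sketch, not actual FPDS-NG data or a real NAICS size standard.

```python
from collections import defaultdict

# Illustrative award records in the rough shape of FPDS-NG data; the
# field names and the size threshold below are assumptions for this sketch.
AWARDS = [
    {"vendor": "Alpha LLC",      "parent": "Alpha LLC",      "obligated": 40_000_000},
    {"vendor": "Alpha-Beta JV",  "parent": "Alpha LLC",      "obligated": 90_000_000},
    {"vendor": "Alpha-Gamma JV", "parent": "Alpha LLC",      "obligated": 41_000_000},
    {"vendor": "Delta Services", "parent": "Delta Services", "obligated": 5_000_000},
]

SIZE_THRESHOLD = 23_000_000  # hypothetical receipts-based size standard

def flag_oversized(awards, threshold):
    """Roll obligations up to each parent firm (capturing joint ventures)
    and flag firms whose totals exceed the threshold for further review."""
    totals = defaultdict(int)
    for award in awards:
        totals[award["parent"]] += award["obligated"]
    return {firm: total for firm, total in totals.items() if total > threshold}

flagged = flag_oversized(AWARDS, SIZE_THRESHOLD)
# Alpha LLC's $171 million across its joint ventures would be flagged;
# Delta Services would not.
```

Grouping on a parent-firm identifier, rather than the vendor name on each award, is what catches the joint-venture pattern described above, where no single award record exceeds the size standard but the firm's combined obligations do.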
In addition, data mining can be used to compare existing contracts with company information to determine whether a company could reasonably perform those contracts given its area of expertise. For example, through data mining we found one firm during our investigation that initially listed its area of expertise as construction. However, the firm had recently been performing multiple janitorial service contracts across the country. While this was not a definite indicator of fraud, subsequent unannounced site visits found that the firm was subcontracting 100 percent of the contract work to an international firm with more than $12 billion in annual revenues. Monitoring a firm’s active participation in contracts is another way to ensure SDVOSB program requirements are being met. During our work, we identified cases where firms that may have initially appeared legitimate on paper were actually functioning as pass-throughs, subcontracting 100 percent of the work to non-SDVOSB firms. Controls to help identify these situations would include conducting unannounced site visits to contract performance locations and contacting local contracting officers to determine with whom they interact during the contract performance period. In addition, a periodic review of the types of contracts awarded to a firm, compared with company information, can help identify firms requiring further review. Finally, when fraudulent activity is identified through data mining and monitoring controls, agencies should also use that information to help improve future preventive controls when appropriate. The final element of a comprehensive fraud prevention framework is the aggressive investigation and prosecution of firms that abuse the SDVOSB program, or other consequences such as suspension, debarment, termination of contracts, and cancellation of contract options. These back-end controls are often the most costly and least effective means of reducing fraud in a program.
However, the deterrent value of prosecuting those who commit fraud sends the message that the government will not tolerate firms that falsely represent themselves as SDVOSB firms. Our investigation found that while SBA has successfully identified multiple firms that falsely certified themselves as SDVOSB firms, as of October 2009, when we issued our report, SBA had not attempted to suspend or debar the problem firms. In addition, during our investigation, we could not find any examples of the VA or SBA Inspectors General referring these firms to the Department of Justice for prosecution for fraud within the SDVOSB program. For SBA and VA to ensure the highest level of compliance with SDVOSB program requirements, there must be consequences for firms that choose to fraudulently misrepresent themselves as SDVOSB firms. Agencies have tools available such as suspension, debarment, removal from the program, termination of contracts, and cancellation of future contract options. Finally, as with fraud found through monitoring controls, lessons learned from investigations and prosecutions should be used to strengthen controls earlier in the process and improve the overall fraud prevention framework. Our prior investigation into allegations of fraud and abuse within SDVOSB contracts found 10 firms that were ineligible for the program but received approximately $100 million in SDVOSB contracts. Upon completion of our investigation, we referred all 10 cases to the agency officials who had contracts with the firms and to each agency’s IG. Based on our referrals, agencies have taken a variety of actions, including terminating existing contracts, declining to extend contract performance by exercising future contract options, and opening civil and criminal investigations.
IG officials have stated that most of their investigations are ongoing and that, therefore, details cannot be provided because of the risk of jeopardizing the investigations. However, in at least one case, the future contract options under a janitorial services contract were not exercised, and the firm was not allowed to perform work beyond the initial contract performance period. In addition, this firm’s subcontractor, which performed 100 percent of the contract work, initiated its own investigation. The subcontractor’s investigation determined that one of its employees had helped to perpetrate the fraud by creating fictitious documents at the request of the SDVOSB firm’s owner. In another case, the SDVOSB firm was found to be intentionally overcharging a federal agency by inflating the hourly labor rate of unapproved subcontracted employees from a temporary employment agency. Finally, in one case, multiple federal investigative agencies have an ongoing criminal investigation and are working together on a grand jury indictment. Additionally, these 10 case-study firms have received more than $5 million in new contract obligations on SDVOSB sole-source and set-aside contracts and more than $10 million in other new contract obligations since November 2009. Our investigation of the SDVOSB program shows that existing controls are ineffective at minimizing the risk of fraud and abuse. Our 10 cases alone show that approximately $100 million in SDVOSB contracts have gone to ineligible firms. With billions of dollars being spent annually on SDVOSB contracts, agency officials should use lessons learned to implement a comprehensive fraud prevention framework. Controls at each point in the process are the key to minimizing the government’s risk. With a comprehensive framework in place, the government can be more confident that the billions of dollars meant to help provide opportunities to our nation’s service-disabled veterans actually reach the intended beneficiaries. Mr.
Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or Members of the Subcommittee have at this time. For additional information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Service-Disabled Veteran-Owned Small Business (SDVOSB) program is intended to provide federal contracting opportunities to qualified firms. In fiscal year 2008, the Small Business Administration (SBA) reported $6.5 billion in sole-source, set-aside, and other SDVOSB contract awards. Testimonies GAO delivered on November 19 and December 16, 2009, identified millions of dollars in SDVOSB contracts that were awarded to ineligible firms, as well as weaknesses in fraud prevention controls at SBA and VA that allowed ineligible firms to receive contracts. GAO was asked to testify about the key elements of a fraud prevention framework within the SDVOSB program and to provide an update on the status of fraud referrals made based on the prior investigation of selected SDVOSB firms. To address these objectives, GAO reviewed prior findings from audits and investigations of the SDVOSB program and contacted investigative agency officials concerning the referrals GAO made based on prior work. GAO also reviewed applicable guidance on internal control standards from the Comptroller General's Standards for Internal Control in the Federal Government. GAO found a lack of government-wide prevention controls, a lack of validation of the information SDVOSB firms provide to substantiate their eligibility for the program, nonexistent monitoring of continued compliance with program requirements, and an ineffective process for investigating and prosecuting firms found to be abusing the program. The results of GAO's investigation serve to emphasize the overall lesson that a complete fraud prevention framework is necessary to minimize fraud, waste, and abuse within the SDVOSB program. The most effective and efficient part of the framework involves instituting rigorous controls at the beginning of the process for becoming eligible to bid on SDVOSB contracts. Next, active and continual monitoring of contractors performing SDVOSB contracts is also essential.
Given the examples GAO identified of firms owned by a service-disabled veteran who subcontracted 100 percent of contract work to non-SDVOSB firms, it is essential that federal agencies monitor compliance with program rules after contract performance has begun. Finally, as shown in GAO's investigation, prevention and monitoring controls are not effective unless identified fraud is aggressively prosecuted or companies are suspended, debarred, or otherwise held accountable. GAO's prior investigation into allegations of fraud and abuse within SDVOSB contracts found 10 firms that were ineligible for the program but received approximately $100 million in SDVOSB contracts. Upon completion of its investigation, GAO referred all 10 cases to the agency officials who had contracts with the firms and to each agency's Inspector General (IG). Based on the referrals, agencies have taken a variety of actions, including the cancellation of existing contracts, termination of future contract options, and opening of civil and criminal investigations. IG officials have stated that many of their investigations are ongoing and that, therefore, details cannot be provided due to the risk of jeopardizing the investigations. These 10 companies have obtained over $5 million in new SDVOSB sole-source and set-aside contract obligations since November 2009.
The federal government spends more than $3.5 trillion annually, but data on this spending lack transparency. Moreover, the data are often incomplete or have quality limitations. To address these data issues, several statutes were enacted over the last decade. The first, the Federal Funding Accountability and Transparency Act of 2006 (FFATA), required OMB to establish a website to provide information on grant and contract awards and subawards. That information is available at www.USAspending.gov. The second, the American Recovery and Reinvestment Act of 2009 (Recovery Act), which provided approximately $840 billion in funding, required that funding recipients’ reports on award and spending data be made available on a website. Today, data related to Recovery Act funding are available at www.Recovery.gov. Information on the spending and distribution of Hurricane Sandy funds is available on that site as well. The third, the Digital Accountability and Transparency Act of 2014 (DATA Act), expands FFATA so that taxpayers and policy makers can track federal spending more effectively. When fully implemented in 2018, the DATA Act will require federal agencies to disclose their direct expenditures and link federal contract, loan, and grant spending information to agency programs. Those data are to be available on the web in machine-readable and open formats. The act also requires the establishment of government-wide financial data standards and simplified reporting requirements for entities receiving federal funds. Lastly, to improve the quality of data submitted to USAspending.gov, the act requires Inspectors General (IG) to assess the completeness, timeliness, quality, and accuracy of the spending data submitted by their respective agencies and the use of the data standards. To assist with that effort, the DATA Act also calls for the establishment of a pilot program, with participants to include, among others, a diverse group of recipients of federal awards.
The purpose of the pilot program is to develop recommendations for (1) the standardization of reporting elements across the federal government, (2) the elimination of unnecessary duplication in financial reporting, and (3) the reduction of compliance costs for recipients of federal funds. Strong and consistent leadership will be needed to ensure the DATA Act is fully implemented. Our work underscores this point, as we have found that unclear guidance and weaknesses in oversight have contributed to persistent challenges with data on USAspending.gov. These challenges relate to the quality and completeness of data submitted by federal agencies. In 2010, we reported that USAspending.gov did not include information on awards from 15 programs at nine agencies for fiscal year 2008. Also in that report, we looked at a sample of 100 awards on the website and found that each award had at least one data error. To address this problem, we recommended that OMB include all required data on the site, ensure complete reporting, and clarify guidance for verifying agency-reported data. OMB generally agreed with our findings and recommendations and subsequently issued additional guidance on agency responsibilities. Our most recent report on this subject reinforces these earlier findings. In June 2014, we reported that while agencies generally reported contract information as required, many assistance programs (e.g., grants or loans) were not reported. Specifically, we found agencies did not appropriately submit the required information on 342 assistance award programs totaling approximately $619 billion in fiscal year 2012, although many reported the information after we informed them of the omission. In addition, we found few awards on the website contained information that was fully consistent with agency records.
We found that only between 2 percent and 7 percent of the awards contained information that was fully consistent with agencies’ records for all 21 data elements we examined. The element that identifies the name of the award recipient was the most consistent, while the elements that describe the award’s place of performance were generally the most inconsistent. To address these problems, we recommended the Director of OMB (1) clarify guidance on reporting award information and maintaining supporting records and (2) develop and implement oversight processes to ensure that award data are consistent with agency records. OMB generally agreed with our recommendations and we will continue to monitor OMB’s implementation. Across the federal government, initiatives are under way to implement key provisions of the DATA Act. Among these provisions is a requirement for OMB and Treasury to consult with public and private stakeholders in establishing data standards. In response, Treasury and OMB convened a data transparency town hall meeting in late September 2014 so the public could provide input to Treasury officials responsible for developing data standards. The event drew more than 200 participants from the public and private sector, including congressional staff and representatives from federal agencies, state and local governments, private industry, and transparency advocacy organizations. Agency officials provided information on efforts to standardize federal financial management data and members of the public shared their views on the importance of data standards and recommendations for successful implementation. In addition, on September 26, 2014, Treasury published notice in the Federal Register seeking public comment on the establishment of financial data standards by November 25, 2014. These actions are consistent with our recommendations based on lessons learned from the implementation of both USAspending.gov and Recovery.gov. 
These lessons stressed the importance of obtaining input from federal agencies, recipients, and subrecipients early in the development of new transparency systems to minimize reporting burden. The DATA Act also calls on Treasury to establish a data analysis center, or to expand an existing service, to provide data, analytic tools, and data management techniques for preventing or reducing improper payments and improving efficiency and transparency in federal spending. The act also directs Treasury to work with federal agencies, including IGs and federal law enforcement agencies, to provide data from the data analysis center to identify and reduce fraud, waste, and abuse and for use in the conduct of criminal investigations, among other purposes. In response to this requirement, Treasury established the Data Transparency Office, which is working with the Recovery Board to transfer assets from the board’s Recovery Operations Center to Treasury. Treasury has also assumed program responsibility for USAspending.gov to display accurate government-wide spending data to the public, as called for in the act. Building on lessons learned from the implementation of the Recovery Act, the DATA Act’s provisions also ensure that implementation will be closely monitored. These provisions require IGs and us to assess the implementation of the act throughout the next 7 years (see figure 1 for a timeline of key DATA Act provisions). The DATA Act requires the Inspectors General to assess the completeness, timeliness, quality, and accuracy of spending data submitted by their respective agencies and the use of the data standards. These reports are due 18 months after OMB and Treasury issue data standards guidance and then within 2 and 4 years after that. The Treasury IG is leading the IG community’s efforts to develop a comprehensive framework of audit procedures, in consultation with us, to ensure IGs meet their auditing and reporting responsibilities under the act.
The Treasury IG is also reviewing Treasury’s standup of the Data Transparency Office and Treasury’s efforts to improve USAspending.gov, as well as Treasury’s plans to implement its responsibilities under the DATA Act. We are fully prepared to meet the oversight and consultative roles the DATA Act establishes for us as well. The act requires us to review IG reports on agency spending data quality and use of data standards in compliance with the act, and IGs are to consult with us as they assess the completeness and accuracy of agency data. We are working with the Treasury IG and through the Council of the Inspectors General on Integrity and Efficiency to develop common audit procedures and practices across the federal accountability community to avoid duplication. We are also working to ensure that Treasury’s implementation efforts follow good consultative practices and that views from both federal and nonfederal stakeholders are appropriately considered as data standards are developed. We will also evaluate the data standards to ensure that they are complete, clear, and at the right level of specificity. Toward that end, we plan to provide an interim report to the Congress in 2015 on the establishment of the standards. To effectively implement the DATA Act, the federal government will need to address multiple technical issues. The first of these issues involves developing and defining common data elements across multiple reporting areas. Among the lessons learned from the implementation of the Recovery Act’s transparency provisions was the value of standardized data for improving data quality and transparency, including uniform information for contracts and financial assistance awards.
To address this issue for DATA Act implementation, DOD and the Department of Health and Human Services (HHS) are examining data elements used by the procurement and grants communities to identify financial data elements common to both communities that can be standardized. Their assessment focuses on 72 data elements that are linked to five data areas: (1) identification of award; (2) awardee/recipient information; (3) place of performance; (4) period of performance; and (5) identification of agencies. HHS and DOD were able to reach agreement on a basic set of data elements that could be standardized across the procurement and award communities. Some of the elements will require changes in policy, while in other cases agencies will have to change how they collect and report data. Plans to identify and coordinate recommended policy changes with OMB are under way. Another related issue is how to enhance data transparency while protecting individual privacy and national security. The DATA Act does not require the disclosure of any information that is exempt from disclosure under the Freedom of Information Act, including information that is specifically authorized to be kept secret in the interest of national defense or foreign policy. Additionally, the DATA Act does not require federal agencies to report direct payments to individuals. However, some federal agencies have raised concerns about how privacy and national security can be maintained if more data are made available. In January 2013, we co-hosted a forum on data analytics with the Recovery Board and the Council of the Inspectors General on Integrity and Efficiency. The forum brought together representatives from federal, state, and local agencies and the private sector to explore the use of data analytics—which involve a variety of techniques to analyze and interpret data—to help identify fraud, waste, and abuse in government.
Forum participants identified opportunities to enhance data-analytics efforts, such as consolidating data and analytics operations in one location to increase efficiencies by enabling the pooling of resources as well as accessing and sharing of the data to enhance oversight. The forum participants also identified a variety of challenges that hinder their ability to share and use data. For example, forum participants cited statutory requirements that place procedural hurdles on agencies wishing to perform data matching to detect fraud, waste, and abuse, and technical obstacles—such as the lack of uniform data standards across agencies—which make it more difficult for oversight and law enforcement entities to share available data. To improve coordination and data sharing, we formed the Government Data Sharing Community of Practice (CoP). In 2013 and 2014, the CoP partnered with a variety of organizations, including MITRE and the National Intergovernmental Audit Forum, to host a series of events for the audit community to discuss legal issues and technological challenges to data sharing. When fully and effectively implemented, the DATA Act holds great promise for improving the efficiency and effectiveness of the federal government, and for addressing persistent government management challenges. Expanding the quality and availability of federal spending data will better enable federal program managers to make data-driven decisions about how they use government resources to meet agency goals. Providing open and consumable federal data will enable innovation and help new and existing businesses use data to inform their activities. By expanding the quality and availability of federal spending data, the DATA Act also holds great promise for enhancing government oversight and preventing and detecting fraud, waste, and abuse. 
Our work on examining fragmentation, overlap, and duplication in federal government programs has demonstrated the need for more reliable and consistent federal data, which implementation of the DATA Act should produce. As we have reported and I have testified before this Committee, better data and a greater focus on expenditures and outcomes are essential to improving the efficiency and effectiveness of federal efforts. Currently, there is no comprehensive list of all federal programs, and agencies often lack reliable budgetary and performance information about their programs. Without knowing the scope, cost, or performance of programs, it is difficult for executive branch agencies or Congress to gauge the magnitude of the federal commitment to a particular area of activity, or the extent to which associated federal programs are effectively and efficiently achieving shared goals. Moreover, the lack of reliable, detailed budget information makes it difficult to estimate the cost savings that could be achieved should Congress or agencies take certain actions to address identified fragmentation, overlap, and duplication. Absent this information, Congress and agencies cannot make fully informed decisions on how federal resources should be allocated or weigh the potential budget trade-offs. Implementing data standards across the federal government, as required under the DATA Act, could help address another ongoing challenge: the need for reliable and consistent agency program information. We recently examined the implementation of the agency program inventory requirements under the GPRA Modernization Act of 2010 (GPRAMA) and found that inconsistent program definitions and program-level budget information limit comparability among like programs. In developing the inventory, OMB allowed for significant discretion in several areas, leading to a variety of approaches for defining programs and inconsistencies in the type of information reported. 
The inconsistent definitions, along with agencies not following an expected consultation process, led to challenges in identifying similar programs in different agencies. The lack of program comparability hampers decision makers’ ability to identify duplicative programs and accurately measure the cost and magnitude of federal investments. In addition, we found that although GPRAMA requires agencies to identify program-level funding, OMB did not direct agencies to include this information in their 2013 inventories, and it was not included in the May 2014 update. OMB officials told us that they put the 2014 update on hold to determine how to merge these requirements with DATA Act transparency requirements, since both laws require web-based reporting. Implementing data standards across the federal government, as required under the DATA Act, could help address this ongoing challenge. Effective implementation of the DATA Act could also provide additional data analytic tools for agencies to detect, reduce, and prevent improper payments. Throughout the past decade, we have reported and testified on improper payment issues across the federal government, as well as at specific agencies. In July, we testified that federal agencies reported an estimated $105.8 billion in improper payments in fiscal year 2013 that were attributable to 84 programs spread among 18 agencies. The Improper Payments Elimination and Recovery Improvement Act of 2012 is the latest in a series of laws aimed at addressing this issue. The act requires that agencies verify benefit eligibility by checking multiple existing databases before making a payment to a person or entity. The act also modified requirements to promote computer matching activities that assist in the detection and prevention of improper payments. 
As we have previously found, a number of strategies across government, some of which are under way, could help to reduce improper payments, including (1) designing and implementing strong preventive controls activities such as up-front validation of eligibility through data sharing and predictive analytic tests and (2) implementing effective detection techniques to quickly identify and recover improper payments after they have been made. By establishing a data analysis center to provide data, analytical tools, and data management techniques, the DATA Act could also help address this problem. The open data provisions of the DATA Act will also enhance the federal government’s emerging use of data analytics capabilities to conduct incisive analysis to support oversight, improve decision-making by federal program managers, and foster innovation by making more federal data available to the public. This oversight will include, but not be limited to, the detection and prevention of fraud, waste and abuse as well as analysis of improper payments and overlap, duplication, and fragmentation across federal programs. For example, we plan to leverage open data as part of our piloting of data analytic technologies, which include (1) data mining for improper payments analysis; (2) link analysis for fraud identification and mitigation; (3) document clustering and text mining for overlap and duplication analysis; and (4) network analysis for program coordination assessment, among other potential endeavors. As in prior years, the federal government was unable to demonstrate the reliability of significant portions of its accrual-based consolidated financial statements for fiscal years 2013 and 2012, principally resulting from limitations related to certain material weaknesses in internal control over financial reporting. 
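To illustrate the second of these strategies in miniature (the payment records below are hypothetical, and this is only a sketch of one detection technique, not an actual agency tool), a simple data-mining pass can flag potential duplicate payments by grouping on payee, amount, and date:

```python
# Hedged sketch of one improper-payment detection technique: flag payments
# that share payee, amount, and date as candidates for review. Records
# below are hypothetical.
from collections import defaultdict

payments = [
    {"payee": "P-100", "amount": 5000.00, "date": "2014-03-01", "id": "A1"},
    {"payee": "P-100", "amount": 5000.00, "date": "2014-03-01", "id": "A2"},  # potential duplicate
    {"payee": "P-200", "amount": 1250.50, "date": "2014-03-02", "id": "B1"},
]

def flag_potential_duplicates(records):
    """Group payments by (payee, amount, date); return groups with more than one ID."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["payee"], r["amount"], r["date"])].append(r["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

print(flag_potential_duplicates(payments))  # [['A1', 'A2']]
```

A match here does not prove a payment is improper; it identifies candidates that reviewers (or further automated tests) would then examine.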
For example, about 33 percent of the federal government’s reported total assets as of September 30, 2013, and approximately 16 percent of the federal government’s reported net cost for fiscal year 2013 relate to DOD, which received a disclaimer of opinion on its consolidated financial statements. As a result, we were unable to provide an opinion on the accrual-based consolidated financial statements of the U.S. government. Further, significant uncertainties, primarily related to the achievement of projected reductions in Medicare cost growth reflected in the 2013, 2012, 2011, and 2010 Statements of Social Insurance, prevented us from expressing opinions on those statements, as well as on the 2013 and 2012 Statements of Changes in Social Insurance Amounts. It is important to note, however, that since the enactment of key financial management reforms in the 1990s, significant progress has been made in improving financial management activities and practices. For fiscal year 2013, almost all of the 24 Chief Financial Officers (CFO) Act agencies received unmodified (“clean”) audit opinions on their respective entities’ financial statements, up from 6 CFO Act agencies for fiscal year 1996. Also, for the first time, the Department of Homeland Security was able to obtain an unmodified audit opinion on all of its financial statements—a significant achievement. Three major impediments continued to prevent us from expressing an opinion on the U.S. government’s accrual-based consolidated financial statements: (1) serious financial management problems at DOD that have prevented its financial statements from being auditable, (2) the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances between federal entities, and (3) the federal government’s ineffective process for preparing the consolidated financial statements. 
Having sound financial management practices and reliable, timely financial information is important to ensure accountability over DOD’s extensive resources to efficiently and economically manage the department’s assets, budgets, mission, and operations. Accomplishing this goal is a significant challenge given the worldwide scope of DOD’s mission and operations; the diversity, size, and culture of the organization; and its reported trillions of dollars of assets and liabilities and its hundreds of billions of dollars in annual appropriations. Given the federal government’s continuing fiscal challenges, reliable and timely financial and performance information is important to help federal managers ensure fiscal responsibility and demonstrate accountability; this is particularly true for DOD, the federal government’s largest department. DOD continues to work toward the long-term goal of improving financial management and full financial statement auditability. The National Defense Authorization Act (NDAA) for Fiscal Year 2010 requires that DOD’s Financial Improvement and Audit Readiness (FIAR) Plan set as its goal that the department’s financial statements be validated as ready for audit by September 30, 2017. In addition, the NDAA for Fiscal Year 2013 required that the FIAR Plan also describe specific actions to be taken, and their associated costs, to ensure that DOD’s Statement of Budgetary Resources (SBR) would be validated as ready for audit by September 30, 2014. DOD’s current FIAR strategy and methodology focus on two priorities— budgetary information and asset accountability—with an overall goal of preparing auditable department-wide financial statements by September 30, 2017. Based on difficulties encountered in auditing the SBR of the U.S. Marine Corps, DOD made a significant change to its FIAR Guidance that will limit the scope of the first-year SBR audits for all DOD components. 
As outlined in the November 2014 FIAR Plan Status Report and the November 2013 revised FIAR Guidance, the scope of the SBR audits, beginning in fiscal year 2015, will be on budget activity related only to current year appropriations as reflected in a Schedule of Budgetary Activity (SBA), an interim step toward achieving the audit of multiple-year budgetary activity and expenditures required for a full audit of the SBR. The most current FIAR Plan acknowledges that DOD did not achieve the above-noted requirement for the SBR to be validated as ready for audit by September 30, 2014. The military departments and other defense agencies asserted audit readiness for their SBAs on September 30, 2014, and plan to start their first-year SBA audits during fiscal year 2015. Even though DOD components are moving forward with SBA audits, our work has shown that DOD components are asserting audit readiness without fully implementing the FIAR Guidance. For example, prior to asserting audit readiness, the Defense Finance and Accounting Service did not fully implement the FIAR Guidance in the areas of planning, testing, and corrective actions for processing payments to contractors. Also, the Army did not ensure that all budgetary processes, systems, and risks were adequately considered and identified as required by the FIAR Guidance for audit readiness. For example, the Army did not adequately identify significant activity attributable to its service provider business processes and systems. Also, the Army’s documentation and assessment of controls were not always complete or accurate. To meet its audit readiness goal of June 30, 2016, for asset accountability, DOD is also continuing to implement plans that focus on the existence and completeness of mission-critical assets to (1) ensure accurate quantity and location information, and (2) support valuation activities. 
However, with regard to meeting its goal of full auditability by September 30, 2017, the department has not fully developed a strategy for consolidating individual component financial statements into department-wide financial statements. The effects of DOD’s financial management problems extend beyond financial reporting. Long-standing control deficiencies adversely affect the economy, efficiency, and effectiveness of its operations. As we have previously reported, DOD’s financial management problems have contributed to (1) inconsistent and sometimes unreliable reports to Congress on estimated weapon system operating and support costs, limiting the visibility needed for effective oversight of the weapon system programs; and (2) continuing reports of Antideficiency Act violations—75 such violations reported from fiscal year 2007 through fiscal year 2012, totaling nearly $1.1 billion—which emphasize DOD’s inability to ensure that obligations and expenditures are properly recorded and do not exceed statutory levels of control. With improvements to its financial management processes, DOD would be better able to provide its management and Congress with reliable, useful, and timely information on the results of its business operations. Effectively implementing needed improvements, however, continues to be a difficult task. While DOD has made efforts to improve its financial management, we have reported over the past few years significant internal control, financial management, and systems deficiencies, including the following: Fundamental deficiencies in DOD funds control significantly impair its ability to properly use resources, produce reliable financial reports on the results of operations, and meet its audit readiness goals. Risk management policies and procedures associated with preparing auditable financial statements through the FIAR Plan were not in accordance with widely recognized guiding principles for effective risk management. 
The effective implementation of DOD’s planned Enterprise Resource Planning (ERP) systems is considered by DOD to be critical to the success of all of its planned long-term financial improvement efforts; however, as we have previously reported, DOD continues to encounter difficulties in implementing its planned ERP systems on schedule and within budget, and experiences significant operational problems such as deficiencies in data accuracy, inability to generate auditable financial reports, and the need for manual workarounds. We have made numerous recommendations to DOD to address these financial management issues. We are encouraged by DOD’s sustained commitment to improving financial management and achieving audit readiness, but several DOD business operations, including financial management, remain on our list of high-risk programs. DOD has financial management improvement efforts under way and is monitoring progress against milestones; however, we have found that DOD and its components have emphasized the assertion of audit readiness by milestone dates over the implementation of effective underlying processes, systems, and controls. While establishing milestones is important for measuring progress, DOD should not lose sight of the ultimate goal—implementing lasting financial management reform to help ensure that it has the systems, processes, and personnel to routinely generate reliable financial management and other information critical to decision-making and effective operations for achieving its missions. Continued congressional oversight of DOD’s financial management improvement efforts will be critical to helping ensure DOD achieves its financial management improvement and audit readiness goals. To assist Congress in its oversight efforts, we will continue to monitor DOD’s progress and provide feedback on the status of its improvement efforts. 
In fiscal year 2013, despite significant progress, the federal government continued to be unable to adequately account for and reconcile intragovernmental activity and balances between federal entities. When preparing the consolidated financial statements, intragovernmental activity and balances between federal entities should be in agreement and must be subtracted out, or eliminated, from the financial statements. If the two federal entities engaged in an intragovernmental transaction do not both record the same intragovernmental transaction in the same year and for the same amount, the intragovernmental transactions will not be in agreement, resulting in errors in the consolidated financial statements. In fiscal year 2013, Treasury continued to actively work with federal entities to resolve intragovernmental differences. For example, Treasury expanded its quarterly scorecard process to include all 35 significant component entities, highlighting differences requiring the entities’ attention and encouraging the use of the dispute resolution process. As a result of these and other actions, a significant number of intragovernmental differences were identified and resolved. While such progress was made, we continued to note that amounts reported by federal entity trading partners were not in agreement by significant amounts. Reasons for the differences cited by several CFOs included differing accounting methodologies, accounting errors, and timing differences. In addition, the auditor for DOD reported that DOD, which contributes significantly to the unreconciled amounts, could not accurately identify most of its intragovernmental transactions by customer and was unable to reconcile most intragovernmental transactions with trading partners, which resulted in adjustments that cannot be fully supported. Additionally, for fiscal year 2013, there continued to be unreconciled transactions between the General Fund of the U.S. 
Government (General Fund) and federal entity trading partners related to appropriations and other intragovernmental transactions, which amounted to hundreds of billions of dollars. These differences arose in part because only some of the General Fund is reported in Treasury’s department-level financial statements. For example, these financial statements include various General Fund-related assets and liabilities that Treasury manages on behalf of the federal government (e.g., federal debt and cash held by Treasury), but do not include certain other activities such as receipts and disbursements related to other federal agencies. As a result of these circumstances, the federal government’s ability to determine the impact of these differences on the amounts reported in the accrual-based consolidated financial statements is significantly impaired. The General Fund is a central reporting entity that tracks core activities fundamental to funding the federal government (e.g., issued budget authority, operating cash, and debt financing activities). In fiscal year 2013, Treasury continued to establish processes to account for and report General Fund activity and balances, such as providing entities information to assist them in complying with the proper use of the General Fund as a trading partner. Over the years, we have made several recommendations to Treasury to address these issues. Treasury has taken or plans to take actions to address these recommendations. Treasury, in coordination with OMB, implemented corrective actions during fiscal year 2013 to address certain internal control deficiencies detailed in our previously issued reports regarding the process for preparing the consolidated financial statements. These include further developing and beginning to implement a methodology to reconcile certain outlays and receipts between Treasury’s records and underlying federal entity financial information and records. 
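The trading-partner elimination problem described above can be sketched in a few lines. This is an illustration of the reconciliation concept only, not Treasury's actual process; entity names and amounts are hypothetical:

```python
# Hedged illustration: if two trading partners record the same
# intragovernmental transaction at different amounts, the difference cannot
# be cleanly eliminated and surfaces as an error in consolidation.
# Each side's records are keyed by the (seller, buyer) trading-partner pair.
seller_records = {("Treasury", "DOD"): 120.0, ("GSA", "HHS"): 45.0}
buyer_records = {("Treasury", "DOD"): 120.0, ("GSA", "HHS"): 40.0}

def reconcile(seller, buyer, tolerance=0.0):
    """Return trading-partner pairs whose recorded amounts disagree."""
    differences = {}
    for pair in set(seller) | set(buyer):
        diff = seller.get(pair, 0.0) - buyer.get(pair, 0.0)
        if abs(diff) > tolerance:
            differences[pair] = diff
    return differences

print(reconcile(seller_records, buyer_records))  # {('GSA', 'HHS'): 5.0}
```

In practice each unmatched pair would be routed back to the two entities for research, much as Treasury's scorecard and dispute resolution process does; only agreed amounts can be eliminated cleanly in consolidation.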
Nevertheless, the federal government continued to have inadequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited entity financial statements, properly balanced, and in accordance with U.S. generally accepted accounting principles (U.S. GAAP). For example, Treasury was unable to properly balance the accrual-based consolidated financial statements. To make the fiscal years 2013 and 2012 consolidated financial statements balance, Treasury recorded a net decrease of $9.0 billion and a net increase of $20.2 billion, respectively, to net operating cost on the Statements of Operations and Changes in Net Position, which were identified as “Unmatched transactions and balances.” Treasury recorded an additional net $5.9 billion and $1.8 billion of unmatched transactions in the Statement of Net Cost for fiscal years 2013 and 2012, respectively. Over the years, we have made numerous recommendations to Treasury to address these issues. Most recently, in June 2014, we recommended that Treasury, working in coordination with OMB, include all key elements for preparing well-defined corrective action plans from the Chief Financial Officers Council’s Implementation Guide for OMB Circular A-123, Management’s Responsibility for Internal Control – Appendix A, Internal Control over Financial Reporting, in Treasury’s and OMB’s corrective action plans. Treasury has taken or plans to take actions to address these recommendations. The 2013 Financial Report includes comprehensive long-term fiscal projections for the U.S. government that, consistent with our recent simulations, show that while the near-term outlook has improved—absent policy changes—the federal government continues to face an unsustainable long-term fiscal path. Such reporting provides a much needed perspective on the federal government’s long-term fiscal position and outlook. 
The projections included in the Financial Report and our simulations both underscore the need to take action soon to address the long-term path to avoid larger policy changes in the future that could be disruptive to individuals and the economy, while also taking into account concerns about near-term economic growth. In the near term, deficits are expected to continue to decline from the recent historic highs as the economy further recovers and actions taken by Congress and the President continue to take effect. Treasury recently reported that the deficit for fiscal year 2014 was the lowest as a share of the economy since 2007. Both the projections in the Financial Report and our long-term simulations reflect enactment of the Budget Control Act of 2011 (BCA), which established discretionary spending limits through fiscal year 2021. Under these limits, discretionary spending will continue to decline as a share of the economy and in fiscal year 2021 will be lower than any level seen in the past 50 years. At the same time, revenues are projected to rise in the near term as the economy continues to recover. The Budget Control Act of 2011, Pub. L. No. 112-25, § 302, 125 Stat. 240, 256-59 (Aug. 2, 2011), amended the Balanced Budget and Emergency Deficit Control Act (BBEDCA), classified, as amended, at 2 U.S.C. § 901a. Our Spring 2014 simulations also incorporate the effects of the Bipartisan Budget Act of 2013, which further amended BBEDCA to establish higher limits on discretionary appropriations for fiscal years 2014 and 2015 and to extend sequestration for direct spending programs, as well as making other changes to direct spending and revenue. In all, the BBEDCA, as amended through December 2013, reduced deficits over the next 10 years in our Baseline Extended simulation but did not significantly change the long-term federal budget outlook. Our updated simulations for 2015 will incorporate the effects of more recently enacted amendments to the BBEDCA. 
Debt held by the public as a share of gross domestic product (GDP), however, remains well above historical averages. Debt held by the public at these high levels could limit the federal government’s flexibility to address emerging issues and unforeseen challenges such as another economic downturn or large-scale natural disaster. Further, even with the BCA and other actions taken, the U.S. government continues to face a significant long-term structural imbalance between revenues and spending. This imbalance, which is driven on the spending side largely by the aging of the population and rising health care costs, will cause debt held by the public to rise continuously in coming decades. Changing this path will not be easy, and it will likely require difficult decisions affecting both federal spending and revenue. However, as both the projections in the Financial Report and our long-term simulations show, delaying action only increases the size of the actions eventually needed. Our past work has also identified a variety of fiscal exposures—responsibilities, programs, and activities that explicitly or implicitly expose the federal government to future spending. Fiscal exposures vary widely as to source, extent of the U.S. government’s legal commitment, and magnitude. Over the past decade, some fiscal exposures have grown due to events and trends and the U.S. government’s response to them. Increased attention to these fiscal exposures will be important for understanding risks to the federal fiscal outlook and enhancing oversight over federal resources. In conclusion, to operate as effectively and efficiently as possible, and to address persistent government-wide challenges that exacerbate the federal government’s fiscal challenges, Congress, the administration, federal managers, the public, and the accountability community must have ready access to consistent, reliable, and complete financial data. 
When fully and effectively implemented, the DATA Act will improve the accountability and transparency of federal spending data (1) by establishing government-wide financial data standards so that data are comparable across agencies and (2) by holding federal agencies more accountable for the quality of the information disclosed. Such increased transparency provides opportunities for improving the efficiency and effectiveness of federal spending; increasing the accessibility of data to benefit the public and the business community; and improving oversight to prevent and detect fraud, waste, and abuse of federal funds. While the process to implement the DATA Act has begun, more work remains. We are committed to being a continuing presence to monitor Treasury’s, OMB’s, and agencies’ progress as data standards are developed and implemented, and to work with Inspectors General to put in place an effective audit process that helps ensure data quality. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer questions. For further information regarding this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues at (202) 512-6806 or Gary Engel, Director, Financial Management and Assurance at (202) 512-3406. In addition to the contact names above, key contributions to this testimony were made by Nabajyoti Barkakati, Kathleen M. Drennan, Joah Iannotta, Thomas J. McCabe, Timothy Persons, James Sweetman, Jr., and staff on our Consolidated Financial Statement audit team. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government spends $3.5 trillion annually, but data on this spending are often incomplete or have quality limitations. Effective implementation of the DATA Act would help address the federal government's persistent management and oversight challenges by providing for standardized, high-quality data. The DATA Act also will increase the accessibility of data to benefit the public and the business community by requiring, among other things, that data be made available in machine-readable and open formats. This statement focuses on (1) the condition of information detailing federal spending as reported in our June 2014 report; (2) efforts to date to implement and plan for meeting key provisions of the DATA Act, including potential implementation challenges as well as GAO's plan; (3) the importance of the DATA Act for addressing government management and oversight challenges; and (4) results of GAO's audit of the fiscal year 2013 U.S. government's financial statements, including efforts to improve financial management at DOD. This statement is primarily based on GAO's published and ongoing work on federal data transparency, fragmentation, overlap and duplication, improper payments, and government efficiency, effectiveness, and financial reporting. GAO has made numerous recommendations to OMB, Treasury, and other executive branch agencies in these areas, and this statement reports on the status of selected recommendations. GAO's prior work on federal data transparency has found persistent challenges related to the quality and completeness of the spending data agencies report to USAspending.gov. For example, GAO reported in June 2014 that roughly $619 billion in assistance awards were not properly reported. In addition, few reported awards—between 2 and 7 percent—contained information that was fully consistent with agency records for all 21 data elements GAO examined. 
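A minimal sketch of such a field-by-field consistency check might look like the following. The data elements and records here are hypothetical, and the 21 actual elements GAO examined are not reproduced:

```python
# Illustrative only: compare an award as publicly reported against the
# agency's own record, element by element, and list the disagreements.
ELEMENTS = ["award_id", "recipient_name", "amount", "place_of_performance"]

reported = {"award_id": "X-1", "recipient_name": "Acme", "amount": 100000,
            "place_of_performance": "TX"}
agency = {"award_id": "X-1", "recipient_name": "Acme Corp", "amount": 100000,
          "place_of_performance": "TX"}

def inconsistent_elements(reported, agency, elements):
    """Return the data elements on which the two records disagree."""
    return [e for e in elements if reported.get(e) != agency.get(e)]

print(inconsistent_elements(reported, agency, ELEMENTS))  # ['recipient_name']
```

An award is "fully consistent" only when this list is empty for every examined element, which is the standard behind the 2 to 7 percent figure cited above.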
GAO's work also found that a lack of government-wide data standards limits the ability to measure the cost and magnitude of federal investments and hampers efforts to share data across agencies to improve decision-making and oversight. The Digital Accountability and Transparency Act of 2014 (DATA Act) was enacted to help address these challenges. Among other things, the DATA Act requires (1) the establishment of government-wide data standards by May 2015, (2) disclosure of direct federal spending with certain exceptions, (3) agencies to comply with the new data standards, and (4) Inspectors General audits of the quality of the data made available to the public. Initial implementation efforts are focused on obtaining public input, developing data standards, and establishing plans to monitor agency compliance with DATA Act provisions. These efforts include, for example, a data transparency town hall meeting co-hosted by the U.S. Department of the Treasury (Treasury) and the Office of Management and Budget (OMB) to obtain public stakeholder input on the development of data standards, and the Treasury Inspector General's efforts, in consultation with GAO, to develop a comprehensive audit framework to assess agency compliance and ensure new standardized data elements are effective once implemented. Effective implementation will need to address key technical issues, including developing and defining common data elements across multiple reporting areas and enhancing data transparency while protecting individual privacy and national security. Effective implementation would help promote transparency to the public and address ongoing government management challenges by expanding the quality and availability of federal spending data. 
Better data will also make it possible to gauge the magnitude of the federal investment, help agencies make fully informed decisions about how federal resources should be allocated, and provide agencies and the audit community with additional data analytic tools to detect and prevent improper payments and fraudulent spending. GAO also reports on its annual audit of the consolidated financial statements of the U.S. government. Almost all of the 24 Chief Financial Officers Act agencies received unmodified (“clean”) opinions on their respective entities' fiscal year 2013 financial statements. However, three long-standing major impediments, including serious financial management problems at the U.S. Department of Defense (DOD), prevented GAO from expressing an opinion on the U.S. government's 2013 accrual-based consolidated financial statements. In addition, while progress has been made to reduce the deficit in the near term, comprehensive long-term fiscal projections, consistent with GAO's recent simulations, show that absent policy changes, the federal government continues to face an unsustainable long-term fiscal path.
There are about 1.2 million school-age children of military-connected families, and the majority of these students attend public and private schools located off military bases. In addition to these off-base options, several school options are available on military bases for military-connected children, including traditional public schools, DOD-operated schools, and, more recently, public charter schools. These options are described below.

- Traditional public schools: Approximately 160 traditional public schools operated by local school districts are located on military bases in the United States. According to DOD, 94 percent of students attending public schools on military bases are military-connected children. Traditional public schools are generally open to all students in the geographic area they serve.

- DOD-operated schools: Although the majority of DOD schools are located overseas, 64 DOD schools currently operate on military bases in the United States, and these domestic DOD schools enroll about 28,000 students. DOD schools—open only to eligible dependents of active duty military and DOD civilians who reside on military installations—constitute a separate school system administered by the Department of Defense Education Activity (DoDEA). Domestic DOD schools were established to educate military children in communities where the local schools were deemed unable to provide a suitable education, among other reasons. DOD school systems depend almost entirely on federal funds, unlike public schools, which are funded primarily with local and state taxes and for which federal funding constitutes a small portion of total resources. As we noted in an earlier study, questions have been raised periodically concerning the continuing need for DOD schools. DOD has commissioned various studies since the 1980s exploring the possibility of transferring DOD schools to local school districts, and over the years, DOD has transferred some DOD schools to local public school districts. 
- Charter schools: Charter schools are a relatively new option for students. These schools are public schools created to achieve a number of goals, including encouraging innovation in public education, providing an alternative to poor-performing schools, and giving families an additional educational option to traditional public schools. Charter schools operate with more autonomy than traditional public schools in exchange for agreeing to improve student achievement, an agreement that is formalized in a contract or charter with the school's authorizing body. A school's charter defines the specific academic goals and outlines school finances and other aspects of operation.

Charter schools provide students and parents with increased educational options. However, research has found considerable variability in charter school performance on student achievement. Enrollment and interest in charter schools have grown rapidly in the past few years. According to the National Center for Education Statistics, the number of students enrolled in public charter schools more than quadrupled, from 0.3 million to 1.6 million students between school years 1999-2000 and 2009-2010, while the percentage of all public schools that were public charter schools increased from 2 to 5 percent. In the 2009-2010 school year, about 5,000 charter schools operated in 40 states and the District of Columbia. Meanwhile, parental interest in this public school option has also grown. According to a survey conducted by one national charter school organization, nearly two-thirds of charter schools across the nation reported having children on their waiting list, with an average waiting list totaling 228 students. The 2008 DOD report on military compensation recommended that military-connected parents be allowed to form charter schools on military bases. 
The 2008 DOD report indicated that offering a charter school option in areas with underperforming local public schools would give parents stationed in those locations another choice in addition to the private school or home schooling options that may currently exist. This recommendation was part of the report's broader emphasis on the need to increase service members' choices in order to enhance recruiting and retention efforts in the uniformed services and, ultimately, support military readiness. Charter schools are established according to individual state charter school laws, and these state laws determine how schools operate and are funded. Depending on the state, a range of groups and organizations can establish a charter school, including parents, educators, private nonprofit organizations, and universities. A significant portion of charter schools nationally are established or operated by private management organizations, such as charter management organizations (CMO). According to one research institute, in the 2010-2011 school year, 35 percent of all public charter schools were operated by such private management organizations, and these schools accounted for almost 42 percent of all students enrolled in charter schools. States also set requirements for how charter schools operate. For example, most state charter school laws generally require that charter schools be open to all students within a specified boundary (commonly referred to as “open enrollment” requirements). In addition, most state charter school laws generally require that charter schools receiving more student applications than they have available classroom spaces enroll students based upon a lottery or some other random selection process to ensure that enrollment to the school is fair and does not favor particular groups of students. 
States also specify which entities can authorize the establishment of a charter school, including state departments of education, state boards of education, school districts or local educational agencies (LEA), institutions of higher education, and municipal governments. Authorizers are responsible for monitoring school performance and have the authority to close schools or take other actions if academic goals and state financial requirements are not met. States also define how charter schools are structured. For example, unlike traditional public schools that are generally part of a larger LEA, some states establish charter schools as their own LEA while others allow schools to choose between being a distinct LEA and being part of a larger LEA for certain purposes, such as special education. In general, schools that operate as separate LEAs may be able to directly obtain federal funds or apply for federal grants that would otherwise be distributed among schools in a larger LEA. It may, therefore, be financially advantageous for schools to be separate LEAs, although this advantage also comes with the added responsibilities of operating as an LEA. Finally, states determine how charter schools will be publicly funded. In most states, charter schools are largely funded according to the formula states use for traditional public schools, usually a per-pupil allocation based on student attendance. As public schools, charter schools are also eligible to receive formula funding from some federal programs, such as those authorized by the Individuals with Disabilities Education Act (IDEA) and Title I of the Elementary and Secondary Education Act. DOD and Education have had a formal memorandum of understanding (MOU) since 2008 to collaborate on addressing the educational needs and unique challenges faced by children of military families, including serving as a resource for communities exploring alternative school options, such as charter schools. 
In addition, a number of federal resources are available that may be used to assist in starting and operating charter schools, some of which focus on schools serving military-connected students, among others:

- Impact Aid: Education and DOD administer Impact Aid programs that provide qualifying LEAs—encompassing both traditional public schools and charter schools—with funds to compensate LEAs for revenue losses resulting from federal activities and to help students connected with these federal activities—which may include military-connected students—meet state academic standards. Appropriations for Education's Impact Aid program were almost $1.3 billion in fiscal year 2010, and Congress appropriated $40 million in additional funding for DOD Impact Aid in 2012. One type of Impact Aid grant is designed to support the construction and repair of school buildings and is awarded to LEAs on a competitive basis.

- DOD provides discretionary grants—about $50 million in grants to 38 military-connected school districts for fiscal year 2012—for enhancing student learning opportunities, student achievement, and educator professional development at military-connected schools, as well as about $9 million for math, science, English, and foreign language programs affecting military-connected students.

- DOD is authorized to provide up to $250 million to make grants, conclude cooperative agreements, or supplement other federal funds to construct, renovate, repair, or expand elementary and secondary public schools on military installations in order to address capacity or facility condition deficiencies at such schools. The Consolidated Appropriations Act, 2012, provided an additional $250 million for DOD to continue addressing capacity and condition issues of public schools on military installations.

- Charter Schools Program resources: Education also provides supports and resources through its Charter Schools Program (CSP). 
- CSP provides funds—about $255 million in fiscal year 2012—to create high-quality charter schools, disseminate information about effective schools, and support the replication and expansion of successful schools, among other purposes. In applying for CSP grants, state educational agencies must describe, among other things, how they will disseminate information about effective schools and how students in the community will be given an equal opportunity to attend the charter school. A 2011 White House report details an agency-wide effort to develop a coordinated approach to supporting military families. One specific administration commitment is for Education to make supporting military families one of its supplemental priorities for its discretionary grant programs. This priority, which has been implemented, favors grant applications designed to meet the needs of military-connected students.

- CSP funds a number of organizations, including the National Charter School Resource Center and the National Resource Center on Charter School Finance and Governance, which provide a diverse range of information on charter schools.

While most schools located on military bases were traditional public schools or DOD schools, eight were charter schools at the time of our review. The military base charter schools differed among themselves in their academic focuses and in the number of military-connected children they served. In addition to the eight schools, most of which were located on Air Force bases or on joint Air Force/Navy bases, another charter school was being developed on the Fort Bragg Army base at the time of our review (see fig. 1). 
Most military base charter schools opened after 2008, following DOD's Quadrennial Review, which recommended that parents be allowed to form charter schools on bases to provide another educational option for military children in geographic areas with underperforming public schools, in addition to private schools or home schooling options (see fig. 2). Like many charter schools located in public school districts across the country, many of the eight schools on military bases offered a program with a particular academic focus (see table 1). For example, Sonoran Science Academy, the only charter school currently on a military base that serves children from grade 6 through grade 10, offers a college preparatory program with a focus on science, technology, engineering, and mathematics (STEM) subjects. In school year 2011-12, Sonoran Science Academy served 185 children in grades 6 through 10 and planned to expand through the 12th grade by adding a grade each year. Manzanita Public Charter School offered a program for children learning English in which classes are taught in both English and the children's home language, known as a dual immersion language program. The program's goal was to support bilingualism and bi-literacy. Located on Vandenberg Air Force Base, but outside the base's security gate, Manzanita Public Charter School served 438 students in school year 2011-12. Sigsbee Charter School in Key West draws on the school's location to offer an environmental education program with a focus on marine studies. The only one of the eight schools on a military base to offer a pre-kindergarten program, Sigsbee served 410 children through grade 7 in school year 2011-12. In Arkansas on the Little Rock Air Force Base, Flightline Upper Academy chose a curriculum in which the arts are used to teach all subjects. The school served 164 students in grades 5 through 8 in school year 2011-12. 
Both of these schools—Sigsbee and Flightline—are located behind the security gate on their respective military bases. Table 1 provides information on the characteristics of the eight charter schools. While the charter schools' academic focuses differed considerably, most military base charter schools served predominantly children of military-connected families, and some of these schools took various steps to attract these students. For example, the largest charter school operating on a base, the Belle Chasse Academy, serves more than 900 elementary and middle school children. To address the needs associated with high mobility and parental deployments that military-connected students experience, Belle Chasse Academy offers psychological and other counseling services, welcome clubs, and a buddy program to ease an incoming student's transition to the school. With 90 percent of its students coming from military families, Belle Chasse Academy has the largest percentage of military-connected children of the schools currently in operation. While the children of civilians can attend the school, which is located inside the security gate on the Naval Air Station Joint Reserve Base New Orleans, Belle Chasse Academy officials told us they initially took several steps to attract children of military personnel. For example, they held multiple town meetings for military families, distributed flyers on the base, and posted notices on the school's website encouraging military-connected families to enroll their children as soon as the service member receives his or her orders (see text box). Belle Chasse Academy officials said the Academy is well-known now and they no longer have to conduct as much outreach. 
Belle Chasse Academy: Promoting Early Enrollment of Military-Connected Students The Belle Chasse Academy includes the following announcement on its website: … Active-duty personnel are enjoined to register their child(ren) to attend Belle Chasse Academy as soon as they are in receipt of orders. This enables the school to plan effectively and ensures that your student has a space in the appropriate grade and setting. BCA is space-limited, and we cannot ensure that every dependent of active-duty personnel has space unless you assist us in planning. We are an open-enrollment school, so unless we have reserved a spot for your student, we must admit students who apply if there is a vacancy. Thanks for your cooperation. Imagine Andrews and Sigsbee charter schools also considered educating the children of military-connected families an integral component of their mission. Sigsbee Charter School officials described the children of military-connected families as “central to the school’s mission” and said they offered services geared to the needs of this transient population. For example, the school has a military life counselor available to children with an active-duty parent, who holds small group sessions that address family stress, deployment, and issues related to moving. Sigsbee officials also told us they work closely with a professional organization that provides services to educators working with military-connected children and send the school’s staff to professional development sponsored by this organization. Imagine Andrews, where two-thirds of its students come from military-connected families, offers a range of services for these children: for example, according to the school’s website, Imagine Andrews staff receives in-depth professional development on how to recognize the warning signs of the stressors faced by students in military-connected families and how to help those students deal effectively with the challenges they encounter. 
While Sonoran Science Academy does not specifically state that serving the children of military-connected families is part of its mission, it enrolled a high percentage of these children and provided services geared to their needs. For example, the school provides a full-time counselor and offers a buddy program for military-connected students transferring into the school, a self-esteem program, and a support group for students with a recently deployed parent. Located inside the gate on the Davis-Monthan Air Force Base in Arizona, Sonoran Science Academy has a student body that includes 76 percent military-connected students (see fig. 3). In addition, three schools used enrollment preferences to ensure the children of military-connected families had a greater chance of securing a place in the school: Belle Chasse Academy, Imagine Andrews, and Sigsbee. These schools were among the five schools with the highest enrollment of students from military-connected families. While all of the charter schools on military bases serve a large percentage of military-connected students, they were started for various reasons, including family perceptions about the quality of education available for their children in local school districts and military officials' need to attract and retain military families to bases. Moreover, in some instances the impetus for establishing a charter school on a military base originated with private housing developers on military bases and charter management organizations. At Imagine Andrews and other schools, school officials told us some parents expressed reservations about enrolling their child in local public schools due to the perception that those schools were of poor quality. As a result, many military families chose to live off-base, which allowed their children access to other districts they believed had higher quality schools. 
For example, one Belle Chasse Academy parent we interviewed said his concerns about the quality of schooling available in New Orleans led him to consider refusing assignment to the base. Belle Chasse Academy officials also said that some personnel accept assignment to the base, but leave their families behind in communities they believe provide better educational opportunities for their children. In these cases, leaving family behind often negatively impacted service members’ job readiness and happiness, Belle Chasse Academy officials and others noted. Navy officials who helped develop LEARN 6 in North Chicago said quality is important because parents want assurance that their children will “keep up” in the school in which they are currently enrolled and will at least be on grade level when they have to transfer to a new school. Additional parental concerns included children’s safety and the schools’ convenience or proximity to home, according to school officials and others. Military interests were also significant in the creation of some of the charter schools we reviewed. Officials at several schools said that base commanders recognized the important role of quality schools in attracting and retaining service members on base and that commanders’ support was critical to charter school development on base. For example, at Sonoran Science Academy and Imagine Andrews, school officials credited the base commander with being a driving force behind the vision of creating a charter school. Furthermore, the military Base Realignment and Closure (BRAC) process may contribute to growth in the number of military-connected families living on certain bases and heighten demand for more schooling options on bases, including charter schools. At LEARN 6 in North Chicago, military and state interests both contributed to the creation of the charter school. 
Partly in response to declines in the military population at Naval Station Great Lakes, the state board of education, which approved the charter school application, noted that the district stood to lose millions of dollars in federal Impact Aid if it did not take immediate action to attract and retain military families. The state board further noted that a charter school could help ensure the district's continued eligibility for federal Impact Aid funds while offering another public school option for the district's students. Housing developers and charter management organizations also led moves to establish some charter schools on military bases, according to some school officials we interviewed. For example, housing developers at Joint Base Andrews who were hired by the military to privatize on-base housing believed that having a charter school on base would attract more families to on-base living. The housing developers worked with Imagine Schools, the CMO, to develop a charter school that would make living on base more attractive. In another case, the CMO Lighthouse Academies decided to open a new charter school campus—Flightline Upper Academy—when demand for spaces in its existing charter school exceeded capacity. Although the new charter school was ultimately located on a nearby military base, the CMO's original plan was to provide more options for children in the community, not to target the children of military-connected families for enrollment in the school. At Manzanita Public Charter School, public school educators were the planners because they perceived a need for better educational options for the children of economically disadvantaged families and English language learners. The school's planners told us they had not considered the children of military-connected families as a target population for enrollment. 
They said the decision to locate the school on a military base was one of necessity—it was the only facility the local authorizer offered the charter school organizers. While charter schools on military bases encountered some of the same challenges as other charter schools around the nation—such as acquiring facilities and startup funding—they also experienced challenges unique to starting up and operating a charter school on a military base. One of these challenges is maintaining slots for military students whose parents may move more frequently. Specifically, because the high turnover rate among military-connected students at military base charter schools could limit enrollment access of these students, three charter schools provided military-connected students with an enrollment preference (see table 2). Planners for Imagine Andrews wanted all of the slots at the school to be reserved for military families, according to a CMO representative. However, Maryland law requires charter schools to be open to all students. In 2010, the Maryland legislature revised the law to provide an exemption to the open enrollment requirement for military base charter schools, as long as students with parents who are not assigned to the base constitute at least 35 percent of enrollment. However, children of military parents who are assigned to the base, but live off the base, are grouped with civilians because Imagine Andrews also requires residency on the base for enrollment preference. Despite the school's military student enrollment preference, an Imagine Andrews official said that the school would likely encounter concerns about enrollment from military-connected parents whose children are not able to enroll in the school due to limited slots. Similarly, a representative of Belle Chasse Academy said the school also explored how it could maintain slots for military-connected children. 
In addition to a stated mission to educate military-connected children and its efforts to encourage military-connected parents to register their children as soon as they are assigned to the base, the school uses a hierarchy of admission preferences, with the top six tiers for military-connected students, the seventh tier for the children of staff, and the eighth and final tier for civilian students (see table 3). According to the official, these preferences were allowed as an admission standard under an interpretation of the law by the state Attorney General's office, which determined that they were acceptable as long as the mission, academics, and programs of the school were targeted to military-connected students. The preferences have enabled the school to maintain high military student enrollment—approximately 90 percent in school year 2011-12. The official said the school does not generally conduct a lottery because it could lead to higher levels of civilian enrollment and undermine the school's mission to educate military-connected children. Sigsbee Charter School also encountered a challenge to ensuring enrollment slots for military-connected students. Created prior to recent changes in Florida state law that now permit charter schools to give enrollment preference to children of an active duty member of any branch of the United States Armed Forces, the school utilized a provision in state law that allows a charter school to give enrollment preference to children of workplace employees. A school official explained that the school considered the base a workplace but could not establish a formal business partnership with the base because the base does not provide the school with funds. As a result, school officials established a formal partnership with the base through a memorandum of understanding, which the school considered a business partnership for the purposes of satisfying the state requirement for using its enrollment preference. 
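The tiered admission hierarchy described above amounts to filling seats tier by tier in preference order, with a random draw only in the first tier that has more applicants than seats remaining. The sketch below illustrates that general mechanism; the tier labels, seat count, and applicant names are invented, and this is not any school's actual admission procedure.

```python
import random

# Illustrative sketch (hypothetical data) of a tiered admission preference:
# seats are filled tier by tier, and a random draw is used only within the
# first tier that cannot be accommodated in full.

def admit(applicants_by_tier, seats, rng=None):
    """applicants_by_tier: lists of applicants, highest-preference tier first."""
    rng = rng or random.Random(0)  # seeded only to keep the sketch deterministic
    admitted = []
    for tier in applicants_by_tier:
        remaining = seats - len(admitted)
        if remaining <= 0:
            break
        if len(tier) <= remaining:
            admitted.extend(tier)                         # whole tier fits
        else:
            admitted.extend(rng.sample(tier, remaining))  # lottery within the tier
    return admitted

# Hypothetical pool competing for 7 seats: 6 military-connected applicants,
# then 2 children of staff, then 4 civilians.
tiers = [["m1", "m2", "m3", "m4", "m5", "m6"], ["s1", "s2"], ["c1", "c2", "c3", "c4"]]
chosen = admit(tiers, 7)
print(len(chosen))  # 7
```

With these numbers, every military-connected applicant is admitted, one staff child wins the within-tier draw, and no civilian applicant is seated, which mirrors how such a hierarchy keeps civilian enrollment low. Whether preferences like this are permissible at all, and who falls into which tier, depends on state law and the school's charter.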
Two other schools that wanted to focus on enrolling military-connected students, including one school currently in development, did not plan to use an enrollment preference because officials said they either believed or were told by state education officials it was not allowable under state charter school law. For example, a Navy official involved with establishing LEARN 6 in North Chicago at Naval Station Great Lakes explained that, while planners for the school wanted to focus on enrolling military-connected students, Illinois state law required the school to be open to all students. According to the official, this requirement could pose a long-term challenge to maintaining enrollment access for military-connected families. He indicated that school stakeholders are currently working to propose changes to Illinois state law that would enable the school to use a preference for military-connected students at a minimum of one-third of the school's enrollment. Similarly, Fort Bragg military officials involved with establishing a charter high school on the base said that school planners wanted an enrollment preference for military-connected students but were told by state education officials that North Carolina's charter law does not allow for such a preference. One official indicated the CMO planned to challenge the state's interpretation of the law. However, even without such a preference, base officials noted that the school's prospective location on the base would ensure high military-connected student enrollment. Officials said that the school did not plan to include transportation in its budget. However, they said the school may consider offering fee-based busing to students living on base—but not to students living off base. Doing so could also result in higher military-connected student enrollment. 
Of the three schools currently using an enrollment preference for military-connected students, Sigsbee Charter School and Imagine Andrews received CSP subgrants from their state departments of education. As previously noted, both schools used lottery-based preferences to enroll military-connected students at higher rates than civilian students. The statute authorizing CSP grants requires charter schools, as a condition of receiving funding, to admit students on the basis of a lottery if more students apply than can be accommodated, and to provide a description of how students in the community will be given an equal opportunity to attend the charter school. Education, in its non-regulatory guidance, states that a charter school receiving CSP funds must hold one lottery that provides qualified students with an equal opportunity to attend the school, but also provides for certain exemptions to the lottery requirement. For example, the guidance allows certain categories of applicants to be exempted, such as the siblings of students, children of a charter school's founders, and children of employees in a work-site charter school. However, the guidance does not specifically address whether schools may exempt military-connected students from a lottery. Further, the guidance also states that schools may use weighted lotteries, which are lotteries that give preference to one set of students over another, but only when they are necessary to comply with certain federal laws, such as Title VI of the Civil Rights Act of 1964, or applicable state laws. CSP officials told us that there were limits to how lottery preferences can be used. For example, a CSP official said that the practice of holding separate lotteries for enrolling civilian and military-connected students is not consistent with CSP requirements. The official also expressed concern about enrollment preferences that would significantly limit civilian enrollment access to a school. 
However, another CSP official stated that Education would not necessarily be aware of the specific enrollment practices of Sigsbee Charter School and Imagine Andrews at the time the awards were made, in part because both were subgrantees of state educational agency (SEA) grants and, under federal regulations, the SEA, not Education, is primarily responsible for monitoring subgrant activities and ensuring that subgrantees comply with applicable federal program requirements. The official added that CSP does not require SEAs to describe the enrollment preferences of school subgrantees in their grant applications. When charter schools are located on military bases, base security requirements can limit access for civilians. Of the eight charter schools, six are currently located inside a protected security perimeter, which generally requires that civilians pass a background check and carry a base pass to access the school. The background check for one base school, Imagine Andrews, consists mainly of checking the validity of the applicant's driver's license and reviewing any recorded criminal history. For this base, passes take about two days to process, are valid for one year, and applicants who are denied access can appeal the decision. At another charter school, Flightline Upper Academy, some civilian parents did not pass the background checks required for base access, according to a school official. When this happens, however, the official said that school staff can escort children to the school. Base security requirements can also limit community participation in school events and activities. For example, an Imagine Andrews official stated that the base restricts each civilian family to three passes, which can create a challenge during school events, such as an honors breakfast or awards ceremony. He noted that military-connected families on the base have no such restriction. 
Further, the official noted that the base does not permit civilian access to the school on weekends, which would prevent the school from holding extracurricular activities, such as morning tutorials or enrichment programs, during this time. The official also explained that the school conducted certain community outreach events, such as open houses, off-base to give off-base civilian families an opportunity to learn about the school without requiring access to the base. Similarly, an official with Sonoran Science Academy Davis-Monthan said that base restrictions on civilian access on weekends prevented the school from holding community events on the school grounds. As a result, the school rented off-base facilities, such as a YMCA. According to the official, the base's limitations on public access were the school's most significant challenge because they limited the school's ability to establish relationships with the community and inform the public about the school. Two base schools—Manzanita Public Charter School and LEARN 6 in North Chicago—were located outside the base security gate and therefore did not require base access for civilians. According to a military official who assisted with establishing LEARN 6 in North Chicago at Naval Station Great Lakes, school organizers and stakeholders considered the issue of civilian access to the school prior to its opening and were concerned about the possibility that parents of some civilian students given slots at the school through the lottery would not pass the background check and would not be allowed access to the school. As a result, the base command, the school's charter management company, and the Illinois State Board of Education jointly agreed that the school should be located outside the base security perimeter in order for the school to operate on the base.
Because the school was slated to occupy a former military hospital training facility inside the perimeter, the base command arranged to move a section of the perimeter so that the school would be located outside it and fully accessible to the public. Some schools, including Wheatland Charter Academy, Manzanita Public Charter School, and LEARN 6 in North Chicago, were located on bases that also hosted a traditional public school. While we did not examine these or other district-run public schools on bases, we believe civilian access to them may similarly be limited as a result of military base security requirements. Like charter schools generally, military base charter schools encountered difficulties obtaining facilities for school use, and they may face additional challenges because of their location on military bases (see table 4). As we previously found, securing adequate school facilities is one of the greatest challenges for new charter schools because they typically are not able to rely on the same resources for facility financing—such as local taxes and tax-exempt municipal bonds—as public schools that are operated by school districts. We also previously reported that charter schools’ access to other facility financing options, such as private lending, can also be limited. Charter schools are often considered credit risks because they may have limited credit histories, lack significant cash flows, and have short-term charters that can be revoked. As a result, private loans are not easily accessible to charter schools for facility financing, so they often rely on state or district per-pupil allocations to finance their facilities. Two schools we examined encountered challenges initially securing financing for the construction of new facilities. According to a Belle Chasse Academy official, the school struggled to find a bank that could underwrite a long-term loan to build a facility on Naval Air Station Joint Reserve Base New Orleans. 
School planners were eventually able to secure a loan after receiving a loan guarantee through the U.S. Department of Agriculture Rural Development Community Facilities Guaranteed Loan Program. Similarly, construction of a school facility for Imagine Andrews was able to start on Joint Base Andrews after a loan for this work was secured by the Charter School Development Corporation (CSDC)—a non-profit group that helps finance charter schools. A representative for the CMO, Imagine Schools, indicated that CSDC secured the loan because the CMO had limited capacity to finance the construction. CSDC cosigned the loan with the CMO, and a private real estate developer guaranteed the loan. Securing financing to renovate facilities for charter schools on bases was another obstacle for some schools. For example, Sigsbee Charter School moved into a former public school facility on Naval Air Station Key West that required significant renovation. The local district provided funds to defray the renovation cost, but these did not fully cover the needed repairs. Because none of the grant funds the school received could be used to renovate its facility, the school relied extensively on local volunteers, including military personnel and parents of students who would attend the school, to make many of the essential repairs to the facility. According to a school official, there are no funds to complete the remaining renovation work. For Flightline Upper Academy on Little Rock Air Force Base, school planners converted a base facility previously used as a conference center and that was slated for demolition. Renovations included removing asbestos, replacing old pipes, and repairing the roof. The cost of the renovations was paid for primarily through donations from a private housing developer and foundations—with no financial contributions from the Air Force. A school official noted that the school was able to address its main financing needs for facility repair prior to opening. 
However, he stated that the school’s significant investment in renovating a building it leases from the base comes with risk because the Air Force could decide not to renew the 5-year lease and take back the building. Some school representatives said they also had to navigate complex facility and land lease arrangements. With Imagine Andrews, the non-profit CSDC will own the completed facility and lease it to Imagine Schools. The Air Force leased the property to a nonprofit joint venture between the Air Force and a private housing developer, which in turn leased it to CSDC. Imagine Andrews stakeholders also received assistance in structuring the facility financing and land-lease agreements from an agency that oversees the housing privatization program for the Air Force. A Navy official involved with establishing LEARN 6 in North Chicago stated that the real estate arrangements, such as the lease for the school site and facility, were complex and required the involvement of multiple stakeholders, including the CMO, the Navy, and the local municipality. In particular, the official said understanding the appropriate support role for the Navy was a significant challenge to acquiring the facility for the school. For example, the official said it was unclear whether the Navy could provide funds for the school site, such as paying for its utilities. He noted that guidance on developing a lease agreement for charter schools on military bases would be beneficial and could potentially have saved school planners significant resources during startup. Similarly, other military and school officials cited the need for federal guidance and information sharing on starting and operating a charter school on a military base. 
For example, base officials at Fort Bragg said that they unsuccessfully sought information from the Army that would have helped guide their efforts to establish a charter high school on the base, such as liability issues related to operating a charter school on federal property. North Carolina state education officials denied the school's application because school planners revised it after the deadline in order to replace most of the founding members on the school's governing board with new members. According to state education officials, school planners told them they removed these members because a military regulation made them ineligible to serve on the school's board. As a result of the application denial, the school will not open in 2013 as planned. We also found that little guidance and information sharing currently exist to guide the development of military base charter schools and address their startup and operational challenges, and the guidance that does exist is not DOD-wide. While the Air Force produced guidance to support community efforts to develop charter schools on its bases, it does not apply to other military service bases. Army officials said they are currently developing charter school guidance for bases, which they plan to distribute in January 2013. Further, while Education and DOD have taken initial steps to support information sharing on developing charter schools on military bases—such as Education conducting outreach efforts with school planners and stakeholders at Naval Station Great Lakes and DOD and Education providing online information—some school officials suggested more information sharing could be helpful. For example, according to an official at Sigsbee Charter School, which opened 2 years ago, school planners found the information Belle Chasse Academy representatives provided on establishing a school on a base to be valuable for their own efforts.
The official noted that more information sharing in this area would be useful. Against the backdrop of a growing and diverse charter school landscape, charter schools on military bases have emerged as one additional option for military-connected parents. How rapidly charter schools will spread to other military bases is difficult to predict, but demand among military families and base communities for more military base charter schools will likely increase, especially in light of residential growth on bases affected by military Base Realignment and Closure. While the number of charter schools operating on military bases is currently small, they present a novel set of challenges for charter school founders and operators as well as an opportunity for Education and DOD to be in the forefront of emergent issues for these charter schools. One issue that the various stakeholders may well confront is the tension between preserving the public mission of charter schools, which is to be open to all students, and the desire in military base communities to ensure enrollment of military-connected students. This tension is already emerging as some charter schools use enrollment preferences to ensure continued enrollment access for highly mobile military-connected students. Using such enrollment preferences could have implications for whether a charter school is eligible for federal CSP grant funding. Although Education officials have expressed concern to us over some charter schools' enrollment preferences and practices, CSP guidance does not specifically address the issue of enrollment preference for military-connected students. Moreover, in two cases, CSP subgrants were awarded by states to charter schools about which Education expressed concerns because of the nature of their enrollment preferences. However, Education does not require SEA applicants for CSP grants to indicate whether schools use enrollment preferences and, if so, to describe those preferences.
Such a requirement would allow Education and the states to better determine whether an applicant is eligible to receive CSP funds. Finally, as charter school planners, authorizers, and military base commands consider adding schools on bases, they could benefit from having information to help them better weigh the tradeoffs of locating charter schools on bases, including the need for community outreach and civilian access to schools. The 2008 MOU between DOD and Education was intended, in part, to facilitate this kind of information development. Such information would also help them address the types of challenges current schools have encountered. As existing charter schools have discovered, determining DOD requirements and allowable practices—such as the role of base command—was often difficult. Having guidance from DOD on appropriate ways to establish and operate charter schools on military bases may help mitigate these common challenges and smooth school startups and operations. Military base charter schools that have experienced some of these stumbling blocks could share useful information with planners of charter schools in other military base communities—such as information about facility and lease arrangements and about building effective working relationships between school administrators and base command.
To ensure that Charter Schools Program grants are provided only to schools that meet eligibility criteria, we recommend that the Secretary of Education direct the Charter Schools Program office to revise the Charter Schools Program guidance to

- Clarify CSP grant requirements regarding charter school enrollment preferences, including preferences for military-connected students, such as whether schools can hold separate lotteries for military-connected and civilian students and the extent to which schools can enroll military-connected students under work-site exemptions, and
- Require applicants for CSP grants and subgrants to describe any enrollment preferences in their applications.

To address the specific needs of military communities that charter schools on bases serve while preserving the public mission of charter schools, we recommend that the Secretary of Defense develop and set standards for operating charter schools on military bases and require the appropriate military services to create guidance based on those standards. The guidance should describe the requirements and allowable practices for establishing and operating charter schools on military bases. At a minimum, this guidance should address the following areas:

- The appropriate role of military base command and other DOD offices and agencies in supporting the creation and operation of charter schools;
- Reasonable base access and security arrangements for civilian children, parents, and others involved in a military base charter school; and
- Military lease arrangements and other property-related issues for a charter school on a base.
To serve as a resource for military base communities exploring educational options, as stated in their 2008 Memorandum of Understanding, we also recommend that the Secretaries of DOD and Education facilitate the sharing of information among interested parties— such as base commanders and school planners and officials—on how military base charter schools have addressed startup and operational challenges. We provided a draft copy of this report to DOD and Education for review and comment. DOD’s comments are reproduced in appendix III and Education’s comments are in appendix IV. The agencies generally agreed with our recommendations, and Education described its plans for implementing them. Specifically, in response to our recommendation to clarify CSP grant guidance, Education stated that it will review its current non-regulatory guidance to determine how it can clarify admissions and lottery requirements for military-base charter schools that receive CSP funds. With respect to the two charter schools noted in our report that had enrollment preferences but received CSP grants from their states, Education said that the schools’ receipt of these grants raises compliance questions. Education has asked the states to conduct reviews of these instances and report back to the Department. Education also agreed with our recommendation to require CSP applicants to describe enrollment preferences in their applications. Education said it intends to revise its CSP grant application notices to require descriptions of any enrollment preferences. Furthermore, Education plans to request that SEA grantees require CSP subgrant applicants to describe recruitment and admissions policies and practices, including any enrollment preferences they plan to employ. Education acknowledged the importance of working together with DOD to enhance awareness of the unique challenges involved in locating charter schools on bases and indicated steps they would take to continue this work. 
For example, Education stated that the Working Group established under the DOD and Education Memorandum of Understanding will continue to facilitate the sharing of information on challenges through shared newsletters, outreach, conference participation, panel discussions, and websites. The agencies also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the Secretaries of DOD and Education, relevant congressional committees, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Mission statement: The mission of Belle Chasse Academy is to educate our military-dependent children, no matter what their background or previous school experience, to fully achieve their personal and academic potential through the acquisition of core knowledge and the skills of analysis, problem-solving, communication, and global responsibility.
Military-connected student enrollment SY 11-12: 845 (90%)

Mission statement: Imagine Andrews Public Charter School (PCS) was established … to provide outstanding educational opportunities for military and community students. Our mission is to serve our nation by providing the students of the Andrews Community with a "world class" education, while meeting the needs of military families. Our vision is to create a school environment that prepares students for high school and beyond, develops their strong moral character, and provides them with the skills necessary to lead and advance our nation.

Mission statement: We prepare our students for college through a rigorous arts-infused program.
Military-connected student enrollment SY 11-12: 82 (50%)

Mission statement: To provide children with the academic foundation and ambition to earn a college degree.
Free-reduced lunch eligible: 66%
School Year 2011-12 student demographics: School was not open in SY 2011-12.

Mission statement: We are dedicated to advancing academic excellence in Lompoc by providing students in kindergarten through sixth grade with the intellectual capacity to participate and work productively in a multi-cultural society.
Military-connected student enrollment SY 11-12: 184 (42%)

Mission statement: The mission of Sonoran Schools is to provide a rigorous college prep, STEM-focused education through a challenging and comprehensive curriculum, continuous assessment, and dedicated teachers who inspire their students to become the leaders of tomorrow.
Military-connected student enrollment SY 11-12: 140 (76%)
Preference for military-connected: No
Adequate Yearly Progress for SY 10-11: Met
Free-reduced lunch eligible: 21%
School Year 2011-12 student demographics: White: 55%; Asian/Pacific Islander: 1%; American Indian/Alaskan: 1%; Two or more races: 1%

Military-connected student enrollment SY 11-12: 74 (71%)
Preference for military-connected: No
Adequate Yearly Progress for SY 10-11: Did not meet
Free-reduced lunch eligible: 38%
School Year 2011-12 student demographics: Did not receive complete demographic data, but school is predominantly White.

In addition to the contact named above, Sherri K. Doughty, Assistant Director; Sandra L. Baxter; Edward F. Bodine; and Deborah A. Signer made significant contributions to this report. Also contributing to this report were James Bennett, Deborah Bland, Jessica A. Botsford, Ying Long, James M. Rebbe, Terry L. Richardson, Laura L. Talbott, and Kathleen L. van Gelder.
Many families struggle to balance their job demands with ensuring that their children have access to a high-quality education, and for military families this struggle can be exacerbated by the highly mobile nature of their service. Family concerns about education affect readiness and retention of military personnel, according to the Department of Defense (DOD). The majority of children of military families in the United States attend public schools. A 2008 DOD study recommended offering military families a public charter school option in areas with poorly performing local schools. In response to a directive in a House Appropriations Committee report, GAO examined: (1) the characteristics and origins of charter schools on military installations, and (2) the challenges charter schools on military installations have faced in starting up and continuing their operations. To conduct this review, GAO interviewed officials in the eight charter schools on domestic military bases and one school being planned; visited two schools; interviewed Education and DOD officials; and reviewed relevant federal and state laws, federal regulations and guidance, and school, federal agency, and other documents. Eight charter schools were located on domestic military bases and one charter school was being developed on a base at the time of GAO's review. The military base charter schools differed in their academic focuses and served military-connected students to different degrees. For example, one school focused on science, technology, engineering, and mathematics while another used the arts to teach all subjects. Enrollment of military-connected students at these base charter schools ranged from 42 percent to 90 percent, and three schools used preferences to ensure a higher proportion of these students.
For example, one charter school with a stated mission of educating military-connected children gave first preference to children of active-duty personnel, who represented the preponderance of enrolled students. The schools were established to address different interests, including family perceptions about the quality of education in local school districts and military officials' need to attract and retain military families to bases. In some instances the impetus for establishing a charter school on a military base originated with private entities. For example, a private developer hired to build housing on the base worked with a charter management organization to develop a charter school they thought would make living on the base more attractive to military families. Charter school officials cited several challenges to starting up and operating on military bases, such as using enrollment preferences for military-connected students, providing civilian access to schools, and obtaining facilities. Most states require schools to be open to all students, and when organizers of one school sought to enroll solely military-connected students, state law prohibited this because of the state's open enrollment requirements. Some states have changed or interpreted their charter school laws to enable schools to give enrollment preference to military-connected students. Furthermore, two charter schools that have enrollment preferences for military-connected students have received Department of Education (Education) Charter Schools Program (CSP) grants, which require charter schools to provide all students an equal opportunity to attend the school and admit students by lottery if there are more applicants than spaces available. Although these military base charter schools have received these grants, Education has expressed concern that the use of such enrollment preferences would violate CSP program requirements. Charter schools have also encountered operational challenges. 
For example, access for civilians can be difficult. Nearly all the military base charter schools were located behind the base's security gate, requiring civilians to complete a background check and show a pass. Several school officials reported difficulties conducting school activities such as open houses and sporting events because each base had a limit on the number of security passes for civilians. Like other charter schools, military base charter school officials also reported obstacles to obtaining facilities, such as financing. However, they also encountered unique challenges, such as complex military facility and land leases. Several school and military base officials said that having guidance and more information sharing could help with startup and operational challenges charter schools on military bases face. However, there is currently little guidance or information sharing about military base charter schools. GAO recommends that Education clarify whether military base charter schools that use enrollment preferences are eligible for charter school grants and that DOD and Education take actions to help address startup and operational challenges for these schools. In their responses, DOD and Education agreed with GAO’s recommendations.
Although about 60 percent of the U.S. population drinks alcoholic beverages without serious consequences, the misuse of alcohol by another 10 percent of the population has significant negative effects on the social, economic, and health status of both those who abuse it and society at large. According to the National Institute on Alcohol Abuse and Alcoholism (NIAAA), about 14 million Americans meet the medical diagnostic criteria for alcohol abuse or alcoholism, and an estimated 100,000 alcohol-related deaths occur each year. About half of the nation's high school students report current alcohol use, a significant minority drink heavily, and few have difficulty obtaining alcohol. The younger the age of drinking onset, the greater the chance that an individual will develop a clinically defined alcohol disorder at some point in life. All states heavily regulate the sale of alcoholic beverages. Some states take a more direct approach to controlling sales than others. Eighteen states, including Virginia, are generally referred to as "control" states, because the final sale to consumers of, typically, liquor and in some cases wine and beer as well can occur only in state-operated stores at prices established by state beverage control boards. State-operated stores are the exclusive retailers of all legal liquor sold for off-premises consumption throughout Virginia. Final prices are set by Virginia's Department of Alcoholic Beverage Control (ABC). The District of Columbia and the remaining 32 states, including Maryland, are referred to as "license" states, because the distribution and sale of alcoholic beverages is carried out by private license holders. Maryland, however, is not completely a license state, because 1 of its 22 counties—Montgomery County, which borders the District—is a control jurisdiction. The county Department of Liquor Control is the exclusive wholesaler of all alcoholic beverages sold within its boundaries.
The Department is also the exclusive retailer of liquor sold for off-premises consumption. To raise revenue and to help prevent alcohol misuse, most states and the District of Columbia levy both excise taxes and retail sales taxes on alcoholic beverages. Some county and city governments also impose their own alcohol taxes. In order to determine whether the District’s taxes on alcohol are greater or less than those in surrounding jurisdictions and other states, one needs to compare the District’s combined—sales and excise—tax structure with the tax structures in the other jurisdictions. Such a comparison is complicated by the fact that alcohol excise tax rates are generally stated as fixed amounts per unit of volume, but sales tax rates are almost always ad valorem (stated as percentages of product prices). The combined tax burden within a given jurisdiction will vary by type, size, and value of beverage and, in some cases, by type of retail establishment. It is also difficult to compare the alcohol tax systems in license jurisdictions with those in control jurisdictions. In a license state, private sector wholesalers determine wholesale prices, taking into account their costs, including the excise taxes that they pay to the state. Private sector retailers charge their own mark-ups on top of those wholesale prices. For these states, one can readily determine what share of the final price of the alcoholic beverage is attributable to the excise tax. In a control state such as Virginia, where the government acts as a monopoly wholesaler and retailer of liquor, the government collects revenue from the sale of alcohol in two ways: (1) by earning the profits that private sector wholesalers and retailers would otherwise have earned and (2) by imposing an excise tax on the alcohol. 
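The combination described above can be sketched in a few lines of code. All figures below are hypothetical, not actual District or state rates; the point is only that the per-volume excise and the ad valorem sales tax must be combined for a specific beverage before jurisdictions can be compared.

```python
def combined_tax(retail_price, volume_gallons, excise_per_gallon, sales_tax_rate):
    """Combined excise and sales tax on a single retail item.

    Excise taxes are fixed amounts per unit of volume; sales taxes are
    ad valorem (a percentage of the retail price), so the combined burden
    varies with both the size and the value of the beverage.
    """
    excise = excise_per_gallon * volume_gallons
    sales = retail_price * sales_tax_rate
    return excise + sales

# A 750 ml (about 0.198-gallon) bottle priced at $15.00, under a
# hypothetical $1.50-per-gallon excise and an 8 percent sales tax:
tax = combined_tax(15.00, 0.198, 1.50, 0.08)  # roughly $1.50 of total tax
```

Because the excise component depends on volume while the sales component depends on price, the same statutory rates produce different combined burdens on a cheap large bottle and an expensive small one, which is why whole tax structures rather than single rates must be compared.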
One cannot easily determine how much of the combined revenue that the state collects should be considered the profits of the state’s alcohol sales operations and how much should be considered an excise tax. The state may make a distinction between the price mark-up that it charges and the excise tax that it levies, but this distinction has little meaning to consumers or taxpayers. There is a wide range of mark-up/excise tax rate combinations that the state could have set to yield the same amount of revenue and impose the same costs on consumers. One way to define an “effective” excise tax rate on alcohol in a control state is to say that it equals the statutory excise tax rate, plus any cost that the state control system imposes on consumers above what those consumers would have borne if that system were not in place. For example, if a state-run store imposes a higher mark-up than a private dealer would have, then this additional cost to consumers can be considered part of the effective tax. The effective tax rate defined in this manner provides a better basis for comparison with the excise tax rates imposed in license states. To meet our first two objectives of (1) comparing the District’s taxes on alcoholic beverages with those of surrounding jurisdictions and other states and (2) determining whether the District’s tax structure can be brought into closer conformity with the tax structures in surrounding jurisdictions, we reviewed the relevant laws and regulations of the various jurisdictions. We also interviewed officials from the District’s Office of Tax and Revenue and from the alcohol beverage control commissions in the District; Montgomery County, Maryland; and Virginia. We also obtained alcohol tax information for the 50 states from the Federation of Tax Administrators and the Distilled Spirits Council of the United States (DISCUS). 
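This definition of an effective rate can be expressed as a short sketch. The numbers are illustrative, and treating a below-market state markup as not offsetting the tax (the `max()` floor) is an interpretive assumption rather than something the report specifies.

```python
def effective_excise_rate(statutory_rate, control_markup, competitive_markup):
    """Effective excise tax per gallon in a control jurisdiction.

    Per the definition in the text: the statutory rate plus any cost the
    state control system imposes on consumers above what they would have
    borne under private wholesaling and retailing. The floor at zero is
    an assumption for this sketch.
    """
    extra_cost = max(control_markup - competitive_markup, 0.0)
    return statutory_rate + extra_cost

# A $3.00-per-gallon statutory tax with a $5.00 state markup, where a
# competitive private market would have charged $3.50, yields a $4.50
# effective rate:
rate = effective_excise_rate(3.00, 5.00, 3.50)  # 4.50
```

An effective rate computed this way is directly comparable to the statutory excise rates of license states, since both measure the full tax-like cost borne by consumers.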
In an effort to determine whether the pricing policies of Montgomery County and Virginia result in effective tax rates that are higher than the statutory tax rates for certain alcoholic beverages, we obtained lists of the wholesale prices that Montgomery County charges on all of the alcoholic beverages that it sells to retailers in the county and lists of the wholesale prices for wine and liquor that private sector wholesalers charge retailers in the remainder of Maryland. The latter prices are published in the Maryland Beverage Journal. We compared Montgomery County’s prices (effective during January 1998) to those of the private wholesalers for (1) 14 liquor items identified by the Virginia ABC Commission or DISCUS as top sellers, (2) a random sample of an additional 28 liquor items from Montgomery County’s price list, and (3) a random sample of 29 wine items from the county’s price list. We could not make a comparison of beer wholesale prices, because the state of Maryland requires publication of only liquor and wine wholesale prices, not beer prices. As one possible way to determine whether Virginia’s controlled prices for liquor are, on average, higher than the competitive market prices in the same region, we considered doing a survey comparing prices in Virginia with those in the District and its Maryland suburbs for the top-selling liquor items. However, representatives of the District’s Alcohol Retailers Association told us that a common retail pricing practice is to sell selected popular items at a large discount, possibly even at a loss, in order to attract customers who will then also buy items that have higher price mark-ups. For this reason, a survey limited to just the best-selling items would probably be misleading. 
The retailers’ representatives also indicated that it would be very difficult to obtain good estimates of the average prices in each jurisdiction, because there are so many different brands and bottle sizes, and the prices for each item are likely to vary considerably across different types of retail outlets in each jurisdiction. It was beyond the scope of this study to undertake the extensive retail price survey and analysis needed to make such estimates. To determine how much higher the District’s alcohol excise tax rates would be if they had been indexed for inflation, we raised these rates by the same percentage as the average increase in alcoholic beverage prices since the last time each rate was changed. To compute the average price increases, we used the U.S. Department of Labor’s quarterly Consumer Price Index alcoholic beverage component for the Washington metropolitan area. We also compared the combined—excise and sales—tax burdens on typical alcoholic beverages over time. To determine whether existing empirical research indicates that raising the District’s alcohol taxes is likely to reduce alcohol abuse, particularly among youths, and related health problems, we reviewed and summarized several surveys of the relevant academic literature that have been published in recent years. We then obtained comments on our summary from five academic and government experts in this field of research and modified our summary to reflect those comments. To identify which states earmarked their alcohol taxes for specific purposes, we used the latest summary of state tax earmarking prepared by the National Conference of State Legislatures. The latest available earmarking information is for fiscal year 1993. We did not independently verify the accuracy of this information.
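The indexing calculation described above scales each excise rate by the ratio of the current alcoholic-beverage price index to the index at the time the rate was last changed. A sketch with hypothetical index values (not the actual Washington-area CPI figures):

```python
def indexed_rate(old_rate, cpi_at_last_change, cpi_now):
    """Raise an excise tax rate by the same percentage as the increase
    in alcoholic beverage prices since the rate was last changed."""
    return old_rate * (cpi_now / cpi_at_last_change)

# Hypothetical: a $1.50-per-gallon rate set when the index stood at 100
# would need to be $2.70 today if the index has since risen to 180.
adjusted = indexed_rate(1.50, 100.0, 180.0)
```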
To describe the characteristics of alcohol prevention programs and legal and regulatory strategies that researchers have deemed effective, we identified and reviewed selected literature on alcohol prevention research and evaluated syntheses of research literature. Given the vast literature on this subject and the time available to us, we relied heavily on information presented in NIAAA’s 1997 publication, Alcohol and Health. We also interviewed key officials responsible for overseeing alcohol prevention research for youth and other populations at NIAAA. To obtain information on the District of Columbia’s alcohol laws and regulations, we interviewed the Executive Director of the D.C. Alcoholic Beverages Control Board, reviewed relevant provisions of the District of Columbia’s codes and municipal regulations, and examined reports provided by the Board. To obtain information on how the District’s alcohol laws are being enforced, we interviewed responsible officials of the D.C. Metropolitan Police Department. We obtained similar information from officials responsible for managing and enforcing alcoholic beverage programs in Montgomery and Prince George’s Counties in Maryland and the City of Alexandria and Arlington County in Virginia. To obtain information on the District’s health and education prevention programs, we interviewed officials at the Department of Health’s Addiction Prevention and Recovery Administration and the District of Columbia Public Schools and reviewed key agency documents. We conducted our review in Washington, D.C.; Virginia; and Maryland between December 1997 and April 1998 in accordance with generally accepted government auditing standards. 
We requested comments on a draft of this report from the District of Columbia’s Office of Tax and Revenue, Department of Consumer and Regulatory Affairs, and Department of Health; the District of Columbia Financial Responsibility and Management Assistance Authority; the Virginia Department of Alcohol Beverage Control; NIAAA; the Substance Abuse and Mental Health Services Administration (SAMHSA); and the Prevention Research Center (PRC). These comments are summarized and discussed near the end of this letter. We also requested comments from the Montgomery County Department of Liquor Control but did not receive any in time to include them in this report. The interaction of the sales taxes and the excise taxes within each jurisdiction results in combined tax burdens that vary by type, size, and value of beverage and, in some cases, by type of retail establishment. We computed the combined statutory taxes paid in each jurisdiction on a range of different beverage items. In comparison to the taxes levied in all but one Maryland county, the District’s combined statutory tax rates are higher for all types of alcoholic beverages. In comparison to the taxes levied in adjacent Virginia jurisdictions, the District’s combined statutory tax rates are higher for almost all beers and for relatively high-priced wines, but the opposite is true for relatively low-priced wines. The effective tax rates on liquor in Virginia and on all alcoholic beverages in Montgomery County may differ from the statutory tax rates as a result of government controls over prices. Consequently, precise comparisons of the taxes on those controlled items cannot be made. The statutory alcohol excise tax rates levied by the District and the state of Maryland are very similar, as shown in table 1. Virginia’s excise taxes on beer and wine are significantly higher than those in the District and Maryland. 
Virginia’s liquor tax is not directly comparable to taxes in the other two jurisdictions—both because it has an ad valorem rate and because the state sets the price upon which the tax is computed. There are no local government alcohol excise taxes in either Maryland or Virginia. The District imposes an 8 percent sales tax on alcoholic beverages sold for off-premises consumption; it levies a 10-percent tax on alcoholic beverages and food sold for on-premises consumption. These rates are higher than the 5.75-percent rate the District imposes on the sales of most goods. They are also significantly higher than Maryland’s and Virginia’s sales taxes. The interaction of the sales taxes and the excise taxes within each jurisdiction results in combined tax burdens that vary by type, size, and value of beverage and, in some cases, by type of retail establishment. In order to compare the combined statutory tax burdens on alcohol across jurisdictions, we have computed the taxes that would be paid on three beverage items sold for off-premises consumption and on three items sold for on-premises consumption. In order to demonstrate the importance of prices in the calculation of combined tax burdens, we used a wide range of prices for each beverage item. (See tables 2 and 3.) The District’s combined taxes on beer are also higher than those in adjacent Virginia jurisdictions. In contrast, Virginia’s combined statutory taxes on liquor are higher than those in the District across the full range of prices we examined. For wine, Virginia’s combined taxes are higher than the District’s, except at the high end of the price range. 
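The combined burdens in tables 2 and 3 come from stacking a per-gallon excise tax and an ad valorem sales tax on each item. The sketch below uses the District’s 8-percent off-premises sales tax from the text; the excise rate, price, and volume are hypothetical placeholders:

```python
def combined_tax(shelf_price, gallons, excise_per_gallon, sales_tax_rate):
    """Combined excise plus sales tax on one beverage item. The excise
    tax is levied per gallon of beverage; the sales tax applies to the
    shelf price, which reflects any excise tax passed through by sellers."""
    excise = gallons * excise_per_gallon
    sales = shelf_price * sales_tax_rate
    return excise + sales

# Hypothetical off-premises purchase: a $6.00 six-pack (0.56 gallon)
# with a $0.09-per-gallon excise tax and the 8 percent sales tax.
burden = combined_tax(6.00, 0.56, 0.09, 0.08)
```

Because the excise component is fixed per gallon while the sales component rises with price, the combined burden as a share of price falls as prices rise, which is why the cross-jurisdiction comparisons depend on the price range examined.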
The monopoly power that the governments of Virginia and Montgomery County have over the sale of some alcoholic beverages within their boundaries provides them with an opportunity to set prices for those beverages to achieve objectives, such as discouraging alcohol consumption or maximizing the government’s monopoly profits, that private sector businesses either would not consider or could not achieve in a competitive market. If the government’s pricing policy results in final prices to consumers that are higher than those that private sector businesses would charge, then the effective tax rates on those beverage items exceed the statutory tax rates. Conversely, if the government’s pricing policy results in lower final prices, then the effective tax rates are lower than the statutory tax rates. The explanations of Virginia’s and Montgomery County’s pricing practices given to us by government officials did not provide a sufficient basis for us to say whether the controlled prices in those jurisdictions are likely to be higher or lower than those that would have existed without government controls. Officials from the Virginia Department of Alcohol Beverage Control told us that the goal of their pricing policy is to generate a reasonable rate of return for the state. The prices for liquor items sold in state stores are standard across the state. The officials told us that these statewide prices are not greatly influenced by price competition from the District. An official from Montgomery County’s Department of Liquor Control told us that it was his understanding that the county’s wholesale price mark-ups for alcoholic beverages are intended to reflect the prevailing mark-up practices in the industry.
Information provided by Virginia and Montgomery County officials and by representatives of the District’s Alcohol Retailers Association, whom we interviewed, suggests that the relationship between controlled and free market prices is likely to vary across beverage items. Our limited comparison of liquor and wine wholesale prices in Montgomery County and the remainder of Maryland supports this idea. We do not have sufficient data to determine whether the controlled prices in Virginia and Montgomery County are, on average, higher or lower than those that would have existed without the controls. Consequently, we are not able to say whether Virginia’s effective tax rates on liquor are higher or lower than the statutory rates, or whether the effective tax rates on all alcoholic beverages in Montgomery County are higher or lower than the statutory rates. Although the District’s excise taxes on alcoholic beverages are lower than those in most of the 50 states, the District’s combined tax rates on beer and wine are higher than those in most states because its sales tax is among the highest. The District’s combined taxes on liquor are higher, for at least some items, than in most of the states for which we could readily make comparisons. Our analysis of data published by the Federation of Tax Administrators indicates that the District’s excise tax rate on beer is lower than the rates in 40 states; its excise tax rate on liquor is tied with that of Maryland as the lowest among the 32 states that do not control the sale of liquor; and its excise tax rate on wine is lower than the rates in 38 out of the 46 states that do not control the sale of wine (see app. II). It was beyond the scope of this study to estimate effective excise tax rates for those states that control the sales of liquor and/or wine. In contrast to the excise taxes, the District’s sales taxes on alcoholic beverages are higher than those in 45 states and equal to those in another state. 
For the remaining four states, the comparison is mixed—depending on whether the sales are for on-premises or off-premises consumption. The tax rates shown in appendix II do not include any taxes that may be levied by governments below the state level. We computed combined alcohol tax burdens for most of the states in a manner similar to our computations for the District, Maryland, and Virginia in tables 2 and 3. We did not have sufficient time to incorporate all of the complexities of the alcohol taxes of some states into our computations, so we left those states out of our comparison. Consequently, although we can say that the District’s combined taxes on beer and wine are higher than those in most states (for the price ranges we examined), we cannot say exactly where the District ranks. We could not make adequate comparisons with enough states to say whether or not the District’s combined taxes on liquor are higher than those in most states. However, for the states with which we could make comparisons, the District’s combined taxes on liquor were higher than most of them, for at least part of the price range we examined. There are significant differences among the alcohol tax structures of the various jurisdictions surrounding the District. The District cannot conform to all of those tax structures at the same time. Moreover, the District would not be able to impose exactly the same effective tax rates that exist in either Virginia or Montgomery County, because those effective rates are difficult to estimate precisely; might change frequently; and are likely to vary by beverage type, brand, and container size. In order to conform its combined statutory tax rates to those in Maryland, the District would have to lower its taxes on all alcoholic beverages. 
The only way for the District to make its statutory taxes similar to those in Virginia, across the entire range of alcoholic beverages, would be to adopt a single sales tax rate for alcoholic beverages and excise tax rates that are all very close to Virginia’s. This change would lower taxes on beer and high-priced wine in the District while raising the taxes on liquor and low-priced wine. The District would face a difficult enforcement task if it adopted the ad valorem excise tax rate on liquor that Virginia currently levies. Virginia’s liquor excise tax is computed as 20 percent of the liquor price after the state has taken its combined wholesale and retail price mark-up. To levy an equivalent tax, the District would have to impose the 20-percent rate on the final prices that retail stores charge their customers before applying the sales tax. This tax would be more difficult to enforce than the District’s current liquor excise tax, because (1) it would have to be collected from a much greater number of taxpayers; (2) taxpayers would have more of an incentive to understate their liquor sales, because the tax rate paid by each taxpayer would be higher than it currently is; and (3) to verify compliance, District auditors would have to examine a taxpayer’s detailed sales receipts. Virginia does not face these enforcement difficulties, because it collects the tax from its own stores. No license state levies an ad valorem alcohol excise tax. It would be difficult for the District to devise an ad valorem liquor tax that closely approximates the tax that Virginia imposes on liquor sold for on-premises consumption. Restaurants and bars in Virginia buy their liquor from state-run stores at retail prices that already reflect the 20 percent excise tax.
Those prices are equivalent neither to the wholesale prices that restaurants and bars in the District pay for their liquor nor to the prices that District restaurants and bars charge their final customers. Consequently, the District could not replicate Virginia’s tax simply by imposing a tax rate on either the wholesale or final prices. The District would have to define a new tax base that approximates the state-determined prices on which Virginia’s excise tax is imposed. We are unable to say whether average alcohol prices and revenues from alcohol taxes would increase or decrease if the District conformed its statutory alcohol tax rates to those in Virginia. In order to estimate the effects on average prices and revenues, we would need to know the distribution of alcohol sales in the District, by beverage type and by price range. However, this information does not exist. The total value of alcohol sales in the District each year is unknown. Without knowing how average prices would be affected, it is not possible to predict how total alcohol consumption in the District would be affected by this specific change in the District’s tax structure. The decrease in the combined tax on beer likely would cause beer consumption to increase, and the increase in the combined tax on liquor likely would cause liquor consumption to decline. In order to determine whether such a trade-off would be desirable, we would have to be able to estimate the relative sizes of the changes in beer and liquor consumption. The data needed to make such estimates do not exist. National surveys indicate that beer is the alcoholic beverage of choice among youths who drink alcoholic beverages. There is also some evidence that beer is disproportionately preferred by those who drink heavily during a typical session and that drinkers who prefer beer are more likely to drive while intoxicated than those who prefer wine or liquor.
All of the District’s per-unit excise tax rates have declined in inflation-adjusted terms since they were last changed. During this time, however, the District increased its ad valorem special sales tax rates on all alcoholic beverages. For most of the beverage items we examined, the increases in the sales tax rates have, to date, more than compensated for the lack of indexation of the excise tax rates. The District’s excise tax rates for beer, wine, and liquor were last changed in 1989, 1990, and 1978, respectively. Each has declined in inflation-adjusted terms since those last changes. Table 4 shows what the rates would have been at the time we did our review if they had been increased to keep pace with inflation. The differences between the current tax rates and the inflation-adjusted rates for beer and wine are relatively small, because the last changes were relatively recent and price inflation has been very moderate. In contrast, the inflation-adjusted excise tax rate on liquor would be more than twice as high as the current rate. Since the time that all of these excise tax rates were last increased, two changes have been made to the special sales tax rates for alcoholic beverages. In 1992 the sales tax rate for alcoholic beverages sold for off-premises consumption was increased from 6 percent to 8 percent. In 1994 the rate for on-premises consumption was increased from 9 to 10 percent. There was one additional change in the on-premises sales tax rate—from 8 to 9 percent in 1989—since 1978, when the liquor excise tax rate was last changed. The increases in the sales tax rates have more than compensated for the lack of indexation of the excise tax rates for most of the beverage items we examined. For each alcoholic beverage item included in tables 2 and 3, we computed the tax burdens at two points in time—now and the date at which the excise tax on that item was last changed. We also computed each tax burden as a percentage of the final price of the item. 
(See table 5.) Only for the lowest-priced liquor item sold for off-premises consumption has the tax burden declined noticeably since the earlier date. Currently, the District’s combined taxes on liquor account for 13.0 percent of the final price that consumers pay for an $8.00, 1-liter bottle of liquor. In 1978, the District’s combined taxes on that same bottle accounted for 15.7 percent of its final price. The District’s alcohol excise taxes will continue to decline gradually in inflation-adjusted terms. In order to keep the real value of its combined tax rates on alcohol close to what they are currently, the District would have to increase its excise tax rates periodically, in step with inflation. Taxes on alcohol are a means of reducing alcohol misuse while at the same time raising revenue. Economic theory and empirical evidence indicate that higher alcohol taxes increase the prices for alcoholic beverages, and higher prices affect alcohol consumption. However, there is some uncertainty regarding the extent to which the taxes are passed through to consumers as higher prices, and empirical research estimates vary on the degree to which changes in prices affect alcohol consumption. Researchers have found that changes in alcohol prices affect most categories of drinkers, but to different degrees. Youths and young adults appear to be more sensitive to price than older adults. Some recent studies have found that higher alcohol taxes and prices are associated with declines in drunken driving, motor vehicle fatalities, rapes, and robberies. The special geographic circumstances of the District—where all of its suburbs are in other jurisdictions—may serve to weaken the effect that an increase in the District’s taxes would have on local alcohol consumption. Economists have suggested that the extent to which any excise tax increase is passed along to consumers varies, depending on the characteristics of the markets where consumers purchase their beverages. 
Such characteristics would include how much competition among sellers exists in the markets. There is a clear presumption in the economic literature that in the long run, under perfectly competitive market conditions, tax increases on consumer goods are completely passed along by producers, wholesalers, and retailers to the final consumers in the form of higher prices for the taxed goods. The alcohol industry, however, does not operate under purely competitive conditions. Researchers have concluded that the alcohol industry is oligopolistic, meaning that it is dominated by a few large suppliers. There is no generally accepted theory of how prices are determined in an oligopolistic industry; therefore, the exact extent to which alcohol excise taxes will affect the prices of alcoholic beverages is uncertain. Nor is there clear empirical evidence to indicate what portion of a tax increase will be passed along to alcoholic beverage consumers in the form of increased prices. Some researchers have estimated that past alcohol excise tax increases have caused prices to increase by more than the full amount of the tax increase. Other economists have suggested that the extent to which excise tax increases are passed along to consumers varies depending on the characteristics of the markets where consumers purchase their beverages. Such characteristics include how responsive market demand is to price changes and how much competition among sellers exists in the markets. In the absence of more conclusive evidence, most researchers trying to model the effects of taxes and prices on alcohol consumption assume that an excise tax increase will cause sellers to raise their prices to consumers by at least the full amount of the tax. Numerous empirical studies confirm the conclusion of economic theory that the higher prices that result from tax increases will have a negative effect on alcohol consumption. 
However, precise estimates of the degree to which consumption is affected are difficult to obtain given the limitations of available data on alcohol prices and consumption. Researchers have found that light, moderate, and heavy drinkers in the general population cut back on consumption when alcohol prices are increased. However, study results vary concerning the relative price sensitivity of light, moderate, and heavy drinkers. The preponderance of the evidence on youth drinking indicates that youths and young adults are more sensitive to price than older adults, particularly those adults who have developed a long-term lifestyle that includes heavy drinking. Researchers also found that beer is the beverage of choice among youth who drink alcoholic beverages and that youth seem to be more responsive to changes in alcohol prices than the population in general. Researchers also have found negative relationships between alcohol prices or tax rates and the adverse consequences associated with alcohol misuse, especially between alcohol use and auto crashes and fatalities. For example, a recent study demonstrated that higher state beer excise tax rates had a significant impact on lowering total driver fatalities, night driver fatalities, and alcohol-related fatalities for drivers of all ages and for drivers 18 to 20 years old. Another study found that alcohol prices had a negative effect on binge drinking—a 9-percent reduction in the number of binge episodes per month resulted from a 10-percent increase in price. Other research indicates that higher alcohol taxes (or prices) have a negative and statistically significant effect on suicide rates; possibly on the liver cirrhosis death rate; on mortality rates from other cancers to which alcohol contributes; and on violent crimes, such as rape and robbery. Some authors observe that the bulk of evidence supports the conclusion that increasing alcohol taxes would extend life expectancy. 
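The binge-drinking finding above (a 9-percent drop in episodes from a 10-percent price increase) corresponds to a price elasticity of demand of about -0.9. The sketch below uses a constant-elasticity approximation, which is a simplification rather than the cited study's own model:

```python
def consumption_change_pct(price_change_pct, elasticity):
    """Approximate percentage change in consumption implied by a price
    change, assuming a constant price elasticity of demand."""
    return elasticity * price_change_pct

# An elasticity of -0.9 translates a 10-percent price increase into
# roughly a 9-percent decline in consumption (here, binge episodes).
change = consumption_change_pct(10.0, -0.9)
```

The elasticity estimates reported in the literature vary by beverage type and drinker group, which is why the effect of any particular tax change on consumption cannot be predicted precisely.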
If the District raises its alcohol taxes while Maryland and Virginia do not, then some consumers who currently purchase their alcohol in the District may shift the location of some of their purchases to neighboring jurisdictions. The actual public health benefits of an increase in alcohol taxes would be reduced to the extent that the tax increase merely shifted the location of purchases rather than reducing consumption. Prior research indicates that the rates of local sales and/or excise taxes across jurisdictions within a region can influence where the consumers of that region shop. The importance of these so-called “border effects” of tax rate differences depends on the specific border situation in question, but several characteristics of the District’s metropolitan area imply that policymakers should not ignore these effects when considering changes in the District’s alcohol taxes. First, most of the District’s residents live within a relatively short distance of alcoholic beverage retail outlets in Maryland or Virginia. Second, every work day the District has a large influx of commuters, who might shop in the District with relative ease if they had a sufficient incentive. Finally, residents in the immediate metropolitan area have a wide range of choices of bars and restaurants in all three jurisdictions. If the District were the only jurisdiction in the region that raised its taxes on alcohol, then the after-tax prices of alcoholic beverages sold in the District would increase relative to prices in the surrounding jurisdictions, and retailers operating in the District could lose some business to competitors in the surrounding jurisdictions. In comparison to, say, a 10-cents-per-gallon increase in the federal beer tax, a 10-cents-per-gallon increase in the District’s beer tax would have less of an effect on alcohol consumption (and associated problems) of individuals who currently buy alcohol in the District. 
Those individuals could avoid a District tax increase that was passed on to consumers by making their purchases in Maryland or Virginia. If the cost of shifting the location of their purchases is less than the cost imposed by the District tax increase, then their cost of consumption would not have risen by the full 10 cents per gallon. In contrast, these individuals could not avoid a full increase in the federal excise tax that was passed on to consumers by shifting their purchases. Because the District tax would increase the cost of consumption for some consumers by less than a similar federal tax increase would, it would produce smaller aggregate behavioral changes. In addition to reducing the beneficial behavioral effects of an alcohol tax increase, the shifting of sales would reduce the potential revenue gains for the District. It would be difficult to accurately estimate the size of the shift in sales that would occur from any given increase in the District’s alcohol taxes because of the many factors involved. As of 1993, 24 states earmarked at least a portion of their alcohol excise tax revenues for specific purposes. The percentage of the alcohol tax revenue that was earmarked in each of these states ranged from 4.5 percent in Colorado to 100 percent in West Virginia. The purposes for which the alcohol revenue was earmarked also varied substantially across these states—from public schools, to local governments, to convention promotion. Appendix IV shows how much revenue was earmarked for each purpose in each state in fiscal year 1993. In 15 of the states some of the revenue was specifically earmarked for alcohol treatment, substance abuse, and/or mental health programs. Since 1994, the District has earmarked 10 percent of its sales tax on alcoholic beverages sold for on-premises consumption to the Washington Convention Center Authority Fund. Communities have opted to use alcohol prevention approaches that fall into three general categories.
The first approach emphasizes education and skill-building programs directed toward individuals in schools, families, colleges, and specific population groups (e.g., women and minorities). The second approach is more population-based, using legal and regulatory strategies to influence the physical and social environments in which drinking occurs or is promoted. For example, state and local governments seek to control the availability of alcohol by regulating the location, hours of operation, and number of establishments that sell alcoholic beverages. A third approach combines these two by creating multiple communitywide strategies, such as using education and skill-building programs to support new laws and regulations. When programs directed toward individuals have shown success, their effects have been small. Many of these programs need to be evaluated more rigorously and over time to determine their effectiveness. Research has shown that several laws and regulations that control the physical availability of alcohol or the social environment in which drinking occurs have resulted in lowered consumption and fewer alcohol-related problems. However, research on other preventive approaches influencing children’s social environment, such as controlling advertising, has been less definitive. Studies have shown that combining the use of individual and population-based legal and regulatory strategies can be successful; additional studies of this approach are still under way. Alcohol prevention approaches that are directed toward the individual generally use education, information, and skill-building activities to change attitudes and beliefs that influence drinking behavior and enhance people’s ability to resist underage and abusive drinking. 
These approaches, although directed toward the individual, are most often presented in group settings, such as schools, families, and colleges and universities, or may focus on specific population groups, such as minorities and women. Table 6 describes different types of education and skill-building programs that are commonly used to combat alcohol abuse and prevent drinking among youth. Since the 1960s, school-based programs have played a key role in the prevention efforts of many states and communities, primarily because they give easy access to a young audience. Research shows that although these programs are one of the most popular alcohol prevention approaches and continue to target thousands of today’s young drinkers and potential drinkers, experts still debate their effectiveness. The main goals of school-based programs are to decrease the overall prevalence and level of drinking among youth; reduce the progression of alcohol consumption to problem levels; and, ideally, prevent young persons from starting to drink. One of their major strategies is to influence knowledge, beliefs, or attitudes about alcohol and its effects. Participants in one life skills program reported lower alcohol use than nonparticipants after 5 years. The prevention literature suggests, however, that the success of most school-based programs in preventing the onset of drinking and reducing the use of alcohol has been small. Communities also use education and skill-building programs directed to individuals in family units, colleges, and specific groups, such as women and minorities. Research results have suggested that parent participation in alcohol prevention programs could be effective in reducing alcohol use, but such programs generally have a difficult time getting large numbers of parents to participate on a regular basis.
Research has also shown that a prevention program for freshman students at one university succeeded in reducing alcohol consumption and problems associated with excessive drinking. Researchers are exploring whether minorities, women, and other special populations could benefit from prevention programs tailored to their needs. The research literature suggests that certain legal and regulatory strategies that influence the physical and social environments in which alcohol is consumed are effective in reducing consumption and alcohol-related problems. Prevention approaches that use these strategies generally fall into two categories: (1) those intended to influence individual drinking practices, such as enforcement of impaired driving laws; and (2) those aimed at regulating the availability of alcoholic beverages, such as restricting the number and location of establishments selling alcohol. Prevention research has produced evidence demonstrating the effectiveness of several legal and regulatory strategies, such as laws prohibiting the sale of alcohol to minors, server training programs, and various measures to deter drinking and driving. Research has not been conclusive, however, regarding the effectiveness of laws and regulations that, for example, control the hours and days of alcohol sales, restrict or ban alcohol advertisements, or require warning labels on alcoholic beverages. Table 7 shows various types of legal and regulatory strategies and their success in reducing alcohol consumption and problems associated with excessive drinking. The literature suggests that visible enforcement programs and education can enhance the beneficial effects of certain legal and regulatory strategies. The deterrent effect of a law depends, at least in part, on the public’s belief that violations are likely to be detected and violators punished. 
The District of Columbia and the states of Maryland and Virginia use a number of enforcement techniques to ensure compliance with state and local laws covering, among other things, who may serve or be served alcohol, the type of alcohol that can be sold, and the time during which alcohol can be sold. Appendix V describes selected provisions of alcohol control laws, along with related penalties, for the District, Maryland, and Virginia. Alcohol beverage control (ABC) boards establish conditions for issuing licenses to sell alcohol and rely on a cadre of enforcement officials to monitor compliance with these regulations and impose penalties for violations. Owners of licensed drinking establishments and alcohol servers can be punished by fines and short jail terms if they violate alcohol-related laws and regulations. The District and neighboring jurisdictions in Maryland and Virginia devote different levels of resources to enforcement. For example, the ratio of enforcement officials to licensees in the District is about 1 to 400; the ratio in the City of Alexandria is about 1 to 150. In Virginia, unlike Maryland and the District, ABC enforcement officials have enhanced authority, which, among other things, allows them not only to fine the establishment that sells alcohol to underage purchasers, but also to fine the employee and the youth attempting to purchase alcohol. ABC officials in the District and Maryland would like similar enforcement authority granted to their staff, because they believe that this authority is a highly effective deterrent to underage drinking. Following is a description of several legal and regulatory strategies whose effectiveness has been demonstrated by research. Studies of server training programs reveal that they can modify servers’ and managers’ knowledge and beliefs about alcohol service and bring about changes in serving practices that help reduce the rate and amount of alcohol consumed by patrons. 
Such training increased staff intervention with intoxicated patrons and increased servers’ willingness to suggest alternative beverages and forms of transportation. Research also shows that server intervention can be greatly enhanced through increased enforcement of alcohol control laws and server liability laws. Several researchers have explored the effectiveness of increasing the visibility of enforcement and the rigorousness of prosecution of alcohol control laws. They found declines both in the number of arrests for driving under the influence among patrons who had obtained alcohol at bars and restaurants and in incidents of alcohol service to researchers posing as intoxicated patrons. Research also shows that liability laws have affected the behavior of persons who serve alcohol, which in turn affects the drinking practices of patrons. Server liability laws place the server at risk of committing a violation for serving alcohol to underage drinkers or highly intoxicated patrons. Further, under civil liability, or dram shop laws, an alcohol server has potential legal responsibility for damage that intoxicated patrons and underage drinkers inflict on themselves or others. The financial loss that bar and restaurant servers and managers may incur is expected to deter serving practices that could increase a patron’s risk of a motor vehicle accident and other liabilities. The minimum legal drinking age (MLDA) policy has been heavily studied, with numerous research findings demonstrating the effectiveness of a higher MLDA in preventing injuries and deaths among youth. For example, the MLDA of 21 is estimated to save more than 1,000 young lives each year. A recent review of 50 studies provided evidence that raising the legal drinking age to 21 reduced youth drinking and related problems, such as traffic crashes. 
In response to many concerns about people under the age of 21 easily obtaining alcohol, research has also suggested that the MLDA could become even more effective with increased enforcement, including deterrents for adults who might sell or provide alcohol to minors. Lower legal blood alcohol concentration (BAC) limits for youth and adults have been found to decrease alcohol-related traffic fatalities. Many states have lowered legally allowable BAC limits for young drivers in an effort to reduce their involvement in alcohol-related crashes; in these states, limits range from .00 to .05 percent. An analysis of the first four states to lower BAC levels found that these states experienced a decline in teenage nighttime fatal crashes 30 percent greater than declines in nearby comparison states that did not lower the BAC limit. A number of states have also lowered the legal BAC for adults from .10 to .08 percent. Studies of a subset of these states project that if all states adopted a .08 percent BAC law for adults, at least 500 to 600 fewer deaths would occur annually. In recent years, communitywide prevention efforts that combine multiple strategies have become a more popular response to problems related to alcohol misuse. These communitywide programs incorporate strategies to both regulate the physical and social conditions in which drinking occurs and educate individuals about alcohol use and enhance their ability to reduce or resist drinking. Early research indicated that using multiple approaches produced only temporary changes in drinking behavior and the prevalence of alcohol-related problems. Programs were redesigned in response to these findings, and preliminary data suggest that these newer multifaceted strategies may be more successful in reducing alcohol consumption than either individual or environmental approaches alone. 
For example, in several states implementation of a strong educational program, along with lowering the legal BAC limit for teen drivers, was reported to significantly reduce nighttime fatal automobile crashes. A 5-year community prevention trial that combined several strategies, including mobilizing the community, increasing enforcement of drinking and driving laws, and enforcing underage sales laws, resulted in a reduction in alcohol-involved traffic crashes of about 10 percent a year and significant reductions in alcohol sales to minors. Evaluations of several major studies of communitywide approaches are still under way, and no final outcome data are available. One of the studies, Project Northland-Phase II, focuses on reducing drinking and alcohol-related problems among 15- to 17-year-olds and includes a combination of school and media curricula; youth social action programs; parent involvement and education; and community task forces for numerous policy and social interventions (e.g., enforcing existing laws prohibiting alcohol sales to minors and restricting alcohol sales at sporting, music, and other public events). Based on the current research literature, only a few alcohol prevention approaches have been adequately evaluated and proven effective. With the exception of several well-designed studies, such as Project Northland and research on the MLDA, most published evaluations of the effectiveness of alcohol prevention programs and strategies were found to be methodologically weak. Detailed reviews of the alcohol prevention literature by NIAAA, the Institute of Medicine, and other experts found limitations in the study designs that affect the evaluation of outcomes and may compromise conclusions. Common problems include questions about the validity of self-reported data, the selection of inappropriate research designs and statistical analyses, lack of comparable experimental and control groups, and the potential impact of high attrition rates. 
Evaluations of early school-based programs, for example, relied heavily on self-reported data to measure alcohol use, which raises concerns about possible underreporting or overreporting by program participants. Although recent studies have attempted to address many methodological challenges that commonly face researchers of prevention programs, concerns continue to surface. For example, although research has shown that some legal and regulatory approaches are effective, the inability to control factors beyond the study interventions makes it difficult to determine the exact nature of the relationship between the prevention strategy and changes in drinking behavior. The District of Columbia, Maryland, and Virginia support a number of strategies to enforce laws intended to prevent underage drinking and the misuse of alcohol by adults. Some of these strategies are directed at sales outlets, while others are aimed at preventing underage access to alcohol and promoting responsible drinking. ABC boards, ABC enforcement officials, and local police departments are responsible for implementing these strategies. Although the following strategies have not been formally evaluated, officials we interviewed cite them as successful. A reverse sting operation, commonly referred to as cops in shops, was an effort in which police officers, posing as store clerks, apprehended minors using false identification to purchase alcoholic beverages while ABC officials and/or police officers waited outside in cars. Some of the officials we interviewed ranked this program as the most successful enforcement program in their jurisdictions, claiming numerous citations issued to youth and subsequent decreases in underage attempts to purchase alcohol. Officials said that establishment owners welcomed this program, because only underage violators were fined, not the establishment. 
Additionally, the establishments did not have to pay the police officer, who worked as a clerk for several hours in the liquor outlet. This program, funded through a federal grant with the assistance of the Washington Regional Alcohol Program of Northern Virginia, was implemented for 1 year in the District and several Maryland and Virginia counties. Maryland requires an alcohol awareness training course for every licensed establishment. Alcohol beverage control officials in the District and Virginia support efforts to offer such training, and the District requires training for establishment owners who have violated liquor laws. Officials we interviewed believe that statewide laws should require that a trained person be on the premises of an establishment at all times; otherwise, the training has little effect on consumption. Although Maryland law does not require this, Montgomery County supplemented Maryland law to require a trained person on the premises at all times. Beer keg registration, a strategy in use in the District, Maryland, and Virginia, requires that every keg sold be registered, with a label recording who bought it, what kind of identification was shown, and where the keg came from. This is intended to discourage adults from purchasing kegs for youth; if the label is removed, legal responsibility rests with the person hosting the party. Officials told us this has resulted in a decline in adult beer keg purchases for underage drinkers and has cut down on the number of youth drinking parties. Montgomery County, Maryland, and Virginia use sting operations involving underage decoys who accompany ABC enforcement officials to establishments to attempt to purchase alcohol. If an establishment sells alcohol to a decoy, it can be penalized. Montgomery County also uses these underage volunteers in a program to monitor hotel and motel room service operations. 
In this sting operation, the ABC official rents a room and the underage volunteer calls for room service, ordering an alcoholic beverage. The official waits unseen to observe whether the youth is illegally allowed to purchase alcohol. According to Montgomery County officials, this program has been highly effective; hotels had 100 percent compliance rates with underage drinking laws following 2 consecutive years in which they had a 66 percent violation rate. Prince George’s County officials disagree with the concept of a sting operation; they believe it inappropriately entraps establishments. In addition to its enforcement efforts, the District funds a variety of health and education programs to prevent and treat alcoholism, most of which are components of overall substance abuse programs. In fiscal year 1997, District agencies spent about $66 million providing substance abuse services to District residents. Most of these dollars were used for treatment services. The District Department of Health’s Addiction Prevention and Recovery Administration (APRA) is one of the major providers of substance abuse treatment and prevention services, with total spending of about $24 million in fiscal year 1997. The District of Columbia Public Schools are a source of funding for prevention activities for school-aged youth. APRA funds alcohol prevention programs and offers counseling and treatment services to residents, either directly or through contractors. Alcohol prevention activities directed to youth who have not begun to use alcohol range from disseminating information and educating targeted populations, such as school-aged and college youth, to helping community groups develop programs. During fiscal year 1997, APRA spent about $1.2 million of its federal block grant on alcohol prevention activities and treatment services, most of it on treatment. Major prevention activities included a telephone hotline and neighborhood outreach centers. 
APRA worked with other government agencies and community groups to provide prevention activities to youth and adults. The District also offers school children a systemwide drug prevention education program, using funding under the Safe and Drug Free Schools and Communities Act of 1994. Administered by the District of Columbia Public Schools, the Substance Abuse Prevention Education (SAPE) Program spent about $1.7 million during fiscal year 1997, providing a variety of prevention activities to public, parochial, and private school students; teachers and other school staff; parents; and community groups. The SAPE Program provides education, training, program development, and information dissemination to teach its participants about the use and abuse of alcohol and other drugs. Although the District’s alcohol tax structure differs from the tax structures of most states, its combined taxes on alcohol are generally higher. The District cannot conform its alcohol tax structure to those in all surrounding jurisdictions at the same time, because the tax structures among those neighboring jurisdictions differ significantly. Moreover, the District would not be able to impose exactly the same effective tax rates as those in either Virginia or Montgomery County, because those effective rates are difficult to estimate precisely. The District’s taxes on beer are currently the highest in the region. Increasing taxes on alcoholic beverages has been associated with reductions in alcohol consumption and related health and social problems. The special geographic circumstances of the District—where all of its suburbs are in other jurisdictions—could weaken the effect that an increase in the District’s taxes would have on local alcohol consumption. 
Strategies to prevent youth from using alcohol and adults from drinking excessively generally either try to educate individuals and build their resistance skills or use legal and regulatory controls to affect the availability and consumption of alcohol. Although communities across the nation have invested significant resources in these efforts, there is mixed evidence about which prevention approaches are most effective. The best current evidence suggests, however, that some legal and regulatory strategies, when enforced, can help reduce illegal drinking and alcohol-related problems. If District officials are interested in investing in new alcohol prevention initiatives, it appears that greater efforts to enforce existing laws and regulations might produce the best short-term results. At the same time, however, more rigorous evaluations of prevention strategies and programs would be needed in order to provide better information about the effectiveness of the full range of prevention approaches. We obtained written comments on a draft of this report from NIAAA, PRC, and SAMHSA; and oral comments from the District of Columbia’s Office of Tax and Revenue, Department of Consumer and Regulatory Affairs, and Department of Health; the District of Columbia Financial Responsibility and Management Assistance Authority; and the Virginia Department of Alcohol Beverage Control. NIAAA commented that in general, the report makes a good start in identifying the complex issues involved in designing programs and policies to reduce alcohol abuse in a specific jurisdiction, particularly one with the unique characteristics of the District. It also provided detailed comments and suggestions for improving our presentation, which we incorporated where appropriate. SAMHSA and the Department of Consumer and Regulatory Affairs said they generally agreed with the findings of the report. The other oral comments involved minor wording clarifications, which we made where appropriate. 
Officials from PRC suggested several technical changes to the report that we incorporated where appropriate. In response to their comment that our review of the literature did not sufficiently acknowledge the success of prevention strategies that combine educational and environmental interventions, we added the results of a major study that combined several strategies. They also said that our review gave inadequate recognition to some limitations of the economic literature on the effects of alcohol taxes. Their conclusion is that because of these limitations, no evidence exists regarding the effects of local alcohol taxes. Our report makes clear that there is much uncertainty regarding the size of the effect that an increase in the District’s alcohol taxes would have on consumption. However, we believe that economic theory and the weight of the available empirical evidence suggest that a District tax increase likely would have some effect on alcohol consumption. We made changes to address PRC’s other concerns about our presentation of the alcohol prevention literature when the concerns were supported by the evidence we reviewed, including additional studies PRC provided. We are sending copies of this report to other appropriate congressional committees and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix VI. If you have any questions, please contact Mr. White at (202) 512-9110 or Ms. Lillie-Blanton at (202) 512-7119. The Virginia Department of Alcoholic Beverage Control operates state stores that have the exclusive authority to sell liquor both to final consumers for their own off-premises use and to resellers, such as hotels, restaurants, and taverns, that serve liquor on-premises. The retail “shelf” prices in the state stores are uniform across the state. Private sector resellers pay the same state shelf prices that final consumers do. 
Final consumers pay the state sales tax on top of the shelf price. Resellers add their own mark-ups on top of the state’s shelf price and then add the sales tax to the prices they charge their customers. In the mid-1980s the department experimented by lowering prices on popular liquor items in selected northern Virginia stores to be more competitive with the District. The revenues for these stores declined, because the lower prices did not attract enough business away from the District to make up for the revenue that the stores lost from customers who had already been patronizing the stores. The last price survey that the department conducted in the early 1990s showed that prices in the District were generally lower than in Virginia for popular liquor brands in 1.75 liter size bottles. The survey also showed that prices on liquor sold in bottle sizes of 750 ml or less were generally lower in Virginia. Representatives from the District’s Alcohol Retailers Association indicated that this varied relationship between liquor prices in the District and Virginia likely still existed. Montgomery County’s Department of Liquor Control operates 22 stores that offer the full array of alcoholic beverage types. These are the only stores in the county permitted to sell liquor for off-premises use. Licensed private sector stores may sell beer and wine. Hotels, restaurants, and clubs can serve all types of alcoholic beverages, but only for on-premises consumption. Taverns are licensed to serve beer and wine but not liquor. All of the licensed private sector sellers of alcohol are required to purchase all of their alcoholic beverages from dispensaries operated by the Department of Liquor Control. For its own stores, the county adds a further mark-up of 18 percent to the wholesale case price to arrive at the retail shelf price. Consumers pay the 5 percent state sales tax on this price. 
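The mark-up chain just described can be illustrated with a short calculation. This is only a sketch; the function name and the $100 starting price are hypothetical, while the 18 percent retail mark-up and 5 percent sales tax are the liquor figures cited in the text.

```python
def shelf_price(wholesale_case_price, retail_markup=0.18, sales_tax=0.05):
    """Illustrative sketch of the retail pricing chain described in the
    text: the county applies a retail mark-up to the wholesale case
    price to arrive at the shelf price, and the consumer then pays the
    state sales tax on that shelf price. Default rates are the liquor
    figures cited in the text; they are parameters here because other
    beverages use different mark-ups."""
    shelf = wholesale_case_price * (1 + retail_markup)
    at_register = shelf * (1 + sales_tax)
    return shelf, at_register

# A hypothetical $100 wholesale case price:
shelf, at_register = shelf_price(100.00)
print(round(shelf, 2), round(at_register, 2))  # prints 118.0 123.9
```

Note that the sales tax applies to the marked-up shelf price, not to the wholesale price, so the consumer effectively pays tax on the mark-up as well.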
The same procedure is used for wine, except that a different state excise tax rate applies, and the county uses different wholesale and retail mark-ups: 35 percent and 28 percent. Beer wholesale and retail prices are not established by formal mark-up rules. The county relies on price surveys of wholesale and retail prices in nearby jurisdictions for guidance in setting beer prices. In the limited time available to us, we could not complete a retail price survey of sufficient quality that would enable us to make a useful comparison of retail liquor prices in Montgomery County with retail prices in the private sector. However, because Maryland requires all wholesalers operating in the state to (1) publish the prices that they charge retailers for liquor and wine and (2) charge the same price to all retailers in the state (excluding Montgomery County), we were able to compare the wholesale prices that the county charges for those beverages with the wholesale prices that would have been charged in the county if the control system did not exist. Our comparison of prices in effect during January 1998 indicates that Montgomery County’s wholesale prices for some alcoholic beverage items are higher than those of private sector wholesalers operating in the rest of Maryland, but for other items the county’s prices are lower. The county’s prices were higher than those of the private sector wholesalers for 9 of the 14 liquor items that either the Virginia ABC Commission or DISCUS identified as top sellers. The county’s wholesale prices also were higher for 15 out of the 28 liquor items and 15 out of the 29 wine items that we randomly selected from the county’s price lists. In the absence of information on sales volumes for each beverage item, we were not able to determine whether Montgomery County’s average prices for liquor and wine are higher or lower than they would be if the county did not control prices. 
Consequently, we were not able to determine whether the average effective tax rate on alcohol in the county is above or below the average statutory tax rate applicable in the county. The fact that the relationship between the county’s prices and the private sector prices varies across beverage items means that the relationship between effective and statutory tax rates on alcohol sold in Montgomery County is likely to vary across beverage items also. 
[Table: state-by-state general sales tax rates and excise tax rates ($ per gallon) on beer, wine, and liquor, together with other alcohol taxes (per-bottle and per-case charges, gross receipts taxes, wholesale taxes, and surtaxes) and notations of beverages sold only through state stores; the table's layout could not be reproduced here.] 
In 18 states, the government directly controls the sales of liquor and, in some cases, beer and wine. Revenue in these states is generated from various taxes, fees, and alcohol beverage receipts. Empirical research conducted since the early 1980s generally concludes that increases in the prices of alcoholic beverages reduce drinking; heavy drinking; and related outcomes, such as motor vehicle and other accidents; liver cirrhosis mortality; “crime”; and adverse effects on education, employment, and labor productivity. According to a review of recent research, price-induced reductions in alcohol consumption “are not limited to infrequent, light, or moderate drinkers, but also occur among frequent and heavy drinkers.” The review also finds “youth and young adults, the age groups where alcohol-related problems are disproportionately high, are generally more responsive to increases in price than are adults.” Higher prices for alcoholic beverages could be achieved by higher taxation. There is a clear presumption that higher taxes on alcoholic beverages are correlated with higher prices for those beverages. In general, however, the link between alcohol taxes and alcohol prices requires further study. 
Economists believe that the extent to which any excise tax increase is passed along to consumers varies depending on the characteristics of the markets in which consumers purchase their beverages. Such characteristics would include how responsive market demand is to price changes and how much competition among sellers exists in the market. Most researchers studying the economics of alcohol consumption assume that the full amount of the excise tax increase is passed along to consumers in the form of higher prices. In the absence of more complete evidence, researchers believe this is the best assumption that can be made. Many researchers have used the variation in state-level excise tax rates as a proxy for the variation in alcohol prices across states. Researchers have estimated a range of values for the degree to which consumption of beer, wine, or liquor responds to changes in the prices of these beverages. A comprehensive survey of empirical research conducted between 1983 and 1992 on the effect of price increases on alcohol consumption found that in response to a 10 percent beer price increase, beer consumption would decline by between 1.2 percent and 10.7 percent, with most studies estimating that the change in consumption would be less than 5 percent. Generally, studies have tended to show that liquor and wine consumption is somewhat more responsive to price changes than is beer consumption. Experts estimated that liquor consumption would decline by between 5 percent and 10 percent in response to a 10 percent liquor price increase, while most of the estimates for wine were in the range of 5 to 20 percent. Other, generally more recent, studies have used data from surveys of individual alcohol consumption. These studies have found higher estimates for the consumption response to an increase in the price of alcohol. NIAAA’s 1997 report to Congress reviewed and summarized the post-1992 studies of the effect of alcohol price increases on consumption. 
According to this report, there continues to be substantial variation in estimates of the responsiveness of alcohol consumption to changes in alcohol prices. One reason why the effects of price increases on alcohol consumption remain uncertain is the quality of the data that researchers have to work with. To make more precise estimates of the effects of price increases on alcohol consumption, one would need accurate measures of the prices that individual consumers pay for various types of alcohol rather than consumption data aggregated to a state or national level. However, collecting price data for a large sample of consumers is difficult and costly. There may also be problems with the self-reported consumption data that have been used in the empirical literature, which tend to understate actual consumption. The use of alternatives to self-reported consumption data, such as expenditures on alcoholic beverages, may introduce a different set of errors and biases.

Light, moderate, and fairly heavy drinkers respond to alcohol price increases by cutting back on consumption. However, among a relatively small number of the very heaviest drinkers, those often considered to be addicted to alcohol, some researchers have found very little, if any, response to changes in price, while others have found some price responsiveness. One study found that consumers in the middle of the distribution of drinkers were the most sensitive to price changes, and very light and very heavy drinkers were less sensitive. This study also found that the higher the price of alcohol, the less likely consumers were to have any days of heavy drinking.
Another study that examined the effects of alcohol prices on the frequency of heavy drinking and drunk driving found that a higher price of alcohol was associated with significant reductions in the frequency of heavy drinking for males of all ages, for females of all ages, and for females aged 21 and younger, but not for males aged 21 and younger. In another study, the same researcher found that reported familiarity with the health consequences of drinking was important in determining the extent to which the heaviest drinkers responded to price changes. The least-informed heavy drinkers did not appear to be sensitive to price changes, but the best-informed heavy drinkers appeared to be very sensitive. The author notes that the heaviest-drinking, least-informed consumers might be alcoholics who are in denial about the adverse consequences of drinking. He and a colleague also note that this finding is consistent with the results of the Manning et al. study, which found that very light and very heavy drinkers were less sensitive to price than others. The least well-informed consumers in his study were, on average, also very heavy drinkers.

Most researchers have found that youth and young adults exhibit more responsiveness to changes in alcohol prices than do older drinkers. One explanation of the greater price sensitivity of younger drinkers is that they may have less income to spend than their older counterparts. Whatever the reason, this greater price sensitivity may have public policy implications. If older drinkers, those with a long-term lifestyle that includes heavy drinking, are less sensitive to price while younger drinkers are more sensitive, higher alcohol taxes may have a twofold effect: they may be effective both in reducing youth alcohol consumption and its related problems and in reducing the likelihood that younger drinkers develop a long-term lifestyle that includes heavy drinking.
Most researchers have found that beer is the beverage of choice among youths who drink alcoholic beverages. Some researchers have concluded that beer is disproportionately preferred by higher-risk groups; for example, it is preferred far more by those who drink heavily during a typical session than by those who drink moderately. It also has been noted that beer drinkers are more likely to drive while intoxicated than drinkers of other alcoholic beverages. Additionally, as noted above, researchers believe that the responsiveness of alcohol consumption to changes in its price is greater for youth than for adults. One study estimated that a 10-percent increase in the price of beer could cause youths' consumption to decline by 23 percent. For those aged 17 to 29, another study found that, on average, a 10-percent increase in the price of beer would lead to about a 7-percent decrease in consumption in the long run. Yet another study found that after the states increased their legal drinking ages to 21 in the late 1980s, the price sensitivity of youth alcohol use fell.

Recent studies show that the drinking behavior of youths who are frequent, heavy, or binge drinkers is especially sensitive to alcohol price changes. One study found that a 10-percent decline in the price of beer would increase the number of youths (aged 16 to 21) who drink beer 4 to 7 times per week by about 10 percent. The same 10-percent decline in price would cause the number of youths (aged 16 to 21) who consumed no beer per week to fall by about 7 percent. Another study by some of the same experts found that the number of youths who drink six or more cans of beer on a typical drinking day would decline by about 31 percent in response to a price increase of 10 percent, while the number of youths who drink only 1 to 2 cans of beer on a typical drinking day would decline by about 12 percent in response to the same price increase.
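The youth estimates above can be restated as implied elasticities on the same definition used earlier (percent change in consumption or participation per percent change in price). The sketch below treats the quoted figures as illustrative inputs; the group labels are our paraphrases of the cited studies, not the studies' own category names.

```python
# Implied price elasticities for the youth drinking responses quoted above.
# Each entry is (percent price change, resulting percent change in the measure).
responses = {
    "drinks beer 4-7 times per week (aged 16-21)": (-10.0, 10.0),
    "consumption, aged 17-29 (long run)": (10.0, -7.0),
    "6+ cans on a typical drinking day": (10.0, -31.0),
    "1-2 cans on a typical drinking day": (10.0, -12.0),
}

elasticities = {group: cons / price for group, (price, cons) in responses.items()}

for group, e in elasticities.items():
    print(f"{group}: implied elasticity {e:.1f}")
```

On these figures, the heaviest-drinking youths show the largest implied responsiveness (about -3.1) and lighter drinkers a smaller one (about -1.2), which is the pattern the text describes.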
In contrast, another recent study suggests that prices would have little impact on drinking and binge drinking among male college students. The effects of alcohol taxation on heavy and binge drinking are of special interest because of the high fatality rates from drunken driving associated with such drinking. Alcohol involvement in motor vehicle accidents is estimated to be three times higher in the 18- to 20-year-old group than it is in the general population. Results from other studies indicate that binge drinking and heavy drinking are inversely related to price among adults as well.

A number of studies have examined the relationship between alcohol prices or tax rates and adverse consequences associated with alcohol misuse. According to a summary of the most recent research, it has been clearly demonstrated that increases in alcohol prices "can significantly reduce many of the problems associated with alcohol abuse, as well as improve educational attainment." Problems associated with alcohol use and abuse include drinking and driving and motor vehicle accidents, liver cirrhosis and other health effects, decreased educational attainment and employment, and violence and other crime.

One of the most studied relationships is that between alcohol use and auto accidents and fatalities. There is a consensus in the empirical literature that an increase in the price of alcoholic beverages would reduce the number of lives lost in vehicle fatalities. According to one study, the occurrence of drunk driving declines as its full price increases. The study also found that the risk of death or injury from an auto accident rises precipitously with the intensity of drinking, i.e., binge drinking.
Another study found that higher state beer excise tax rates were associated with reductions in motor vehicle fatalities for youths aged 15 through 24. Likewise, in another study the state beer excise tax rate exhibited large negative and statistically significant associations with total driver fatalities, night driver fatalities, and alcohol-involved fatalities for both drivers of all ages and drivers 18 to 20 years old. Other researchers found higher state beer tax rates to be weakly associated with a reduced propensity to drive drunk. A recent study of the relation between beer prices and drunken driving included a relatively comprehensive set of explanatory variables and examined a variety of different model specifications. This study found that a 10-percent increase in the price of beer would result in an almost 10-percent decrease in the fatality rate from drunken driving, a 14-percent decrease in the fatality rate from nighttime drunken driving, and a 14-percent decrease in the fatality rate from drunken driving for those aged 18 to 20. Another recent study estimated that alcohol prices have a negative and significant effect on binge drinking, the behavior that leads to drunken driving, with a 10-percent increase in the price of alcohol leading to a 9-percent decrease in the expected number of binge episodes per month.

Two studies have found that the excise tax rate on liquor has a negative and significant effect on the liver cirrhosis death rate. In contrast, other researchers found that higher alcohol prices were not significantly related to lower death rates from liver cirrhosis. These studies did find a significant negative relationship between alcohol prices and suicide rates and mortality rates from other cancers to which alcohol contributes. They also found weak or insignificant effects of alcohol price on death rates from homicide and from falls, fires, and other accidents. Two recent studies investigated the relationship between alcohol use and crime.
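The drunken-driving responses above imply price elasticities of roughly -1.0 for the overall fatality rate and -1.4 for nighttime and 18- to 20-year-old fatalities. The sketch below shows how such figures could be used to project the effect of other price changes; the linear extrapolation to price increases other than 10 percent is an assumption of this sketch, not a finding of the cited study, and the category labels are ours.

```python
# Elasticities restating the cited study's 10-percent-price-increase results.
ELASTICITY = {
    "drunken-driving fatalities, all drivers": -1.0,
    "nighttime drunken-driving fatalities": -1.4,
    "drunken-driving fatalities, aged 18-20": -1.4,
}

def projected_change(category, pct_price_increase):
    """Percent change in the fatality rate, assuming the cited response
    scales linearly with the price increase (a strong assumption)."""
    return ELASTICITY[category] * pct_price_increase

# A hypothetical 5-percent beer price increase, under that assumption:
print(projected_change("drunken-driving fatalities, all drivers", 5.0))  # -5.0
```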
These researchers found significant relationships between the real tax rate on beer and the incidence of rape and robbery. Other recent studies examined the impact of alcohol use and heavy use on the level of education attained. These researchers note that there is evidence that heavy drinking is associated with reductions in the average number of years of schooling completed and reductions in employment, as well as a tendency toward alcohol abuse in later life. They observe that the bulk of evidence supports the conclusion that increasing alcohol taxes would extend life expectancy. Several other experts have suggested that people who misuse alcohol are less likely to be employed and tend to have lower incomes than people who do not.

[A table appeared here pairing the percentage of each state's earmarked alcohol tax revenue with its designated use; the recoverable entries include the University of Arkansas Medical Center; child and adolescent substance abuse programs; an alcoholic beverage and tobacco trust fund; prison construction, enforcement, and administration; the county or city where sold; alcoholism treatment and prevention; the Department of Mental Health; alcohol treatment and rehabilitation; alcohol and drug abuse programs; alcohol education and rehabilitation; and community alcoholism and detoxification services. The table continued onto the following page.]
Pursuant to a legislative requirement, GAO examined issues relating to the taxation and regulation of alcoholic beverages in the District of Columbia, focusing on: (1) a comparison of the District's alcohol taxes with surrounding jurisdictions; (2) whether those taxes could be set closer to surrounding jurisdictions; (3) how much higher the alcohol tax would be if it had been indexed for inflation; (4) which states earmark alcohol taxes for specific purposes; (5) whether raising alcohol taxes will reduce abuse; and (6) the characteristics of effective alcohol prevention programs. GAO noted that: (1) compared to taxes levied in nearby jurisdictions, the District's combined tax rates are higher because its sales tax is among the highest; (2) the District cannot conform its alcohol tax structure to match those in surrounding jurisdictions because tax structures among neighboring jurisdictions differ significantly; (3) the District's per-unit excise tax rates have declined in inflation-adjusted terms since they were last changed; although special sales tax rates on all alcoholic beverages have compensated for the lack of indexation of the excise tax rates, these taxes will continue to decline gradually in inflation-adjusted terms; (4) economic theory and empirical evidence indicate that increases in the District's alcohol taxes are likely to reduce alcohol use, especially among youths; (5) at least 24 states have earmarked at least a portion of their alcohol excise tax revenues for specific purposes; and (6) the best current evidence suggests that several legal and regulatory strategies, along with visible enforcement and education about these laws, can reduce illegal drinking and alcohol-related problems in the District of Columbia.
In 2002, the President reinforced ballistic missile defense as a national priority and directed DOD to proceed with plans to develop and put in place an initial capability beginning in 2004. To expedite the delivery of an operationally capable ballistic missile defense system (BMDS), in 2002 the Secretary of Defense established the Missile Defense Agency (MDA), granted the agency expanded responsibility and authority to develop globally integrated capabilities, directed it to manage all ballistic missile defense systems then under development, and transferred to the agency those systems controlled by the military services. The systems transferred from the services and the new systems whose development MDA initiates are all considered to be ballistic missile defense elements. Since its creation in 2002, MDA has developed, fielded, and declared ready for operations an increasingly complex set of ballistic missile defenses designed to defend the United States, deployed forces, allies, and friends from limited ballistic missile attacks. By leveraging existing service weapon systems and developmental concepts, MDA fielded an initial defensive capability beginning in 2004 to defend the United States from a limited, long-range ballistic missile attack. This initial defensive capability included the Ground-based Midcourse Defense system of interceptors and fire control systems, Upgraded Early Warning Radars, sea-based radars installed aboard Aegis cruisers and destroyers, and an early version of the Command, Control, Battle Management, and Communications (C2BMC) element. MDA first made these elements available for operations in April 2005 by establishing the initial BMDS operational baseline. DOD first put these elements to operational use by activating them in 2006 in response to North Korean ballistic missile activity. Since that time, DOD has added some elements to the operational baseline while declaring others ready for contingencies.
Table 1 identifies the fielding locations and the dates at which MDA first delivered operational elements to the combatant commands as of July 2009. As table 1 indicates, DOD has designated lead services for seven of the eight elements that have been delivered to the combatant commands for operational use; MDA currently plans to retain control of the eighth element delivered to date, C2BMC, and not transition it to a single lead service. Lead military services are expected to provide the rest of the military force structure (the organizations, personnel, and training) required for operations as the elements become more technically mature. Lead military services are also expected to begin funding operational and support costs as elements transition from MDA to the services. To develop ballistic missile defense capabilities, MDA both modified existing service weapon systems to perform ballistic missile defense missions and developed new elements, many based on previously existing concepts, expressly for ballistic missile defense purposes. For example, MDA developed the Upgraded Early Warning Radar and Aegis BMD elements as modifications to existing service weapon systems, whereas MDA developed the Ground-based Midcourse Defense and THAAD elements based on developmental programs transferred to MDA in 2002. MDA has spent about $56 billion since 2002 to develop these assets. Additionally, MDA's fiscal year 2010 budget request proposes to develop more advanced Aegis BMD interceptors capable of addressing intermediate-range ballistic missile threats, enhance the C2BMC element's capabilities, and undertake other developmental initiatives, including research into ascent phase technologies. These developments are likely to affect both element quantities and service force structure requirements as MDA begins to field these capabilities.
MDA, under the direction and oversight of the Under Secretary of Defense for Acquisition, Technology and Logistics, is responsible for evaluating ballistic missile defense capabilities to determine which elements are ready to perform military operations, giving the Secretary of Defense the option of activating elements for operational use. Under MDA's approach, an element first becomes available for crisis and contingency operations when it has achieved Early Capability Delivery, based upon MDA's assessment of element-level tests and its determination that the element's employment will not degrade other operational ballistic missile defenses. According to MDA's current approach, an Early Capability Delivery declaration is the first point at which an element is made available for operational employment in defense of the United States and U.S. allies. Subsequently, MDA adds an element to the operational baseline by declaring that it has achieved either Partial Capability Delivery, meaning that the element is capable of day-to-day operations, or Full Capability Delivery, meaning that the element is able to sustain operations over longer periods. In May 2009, MDA updated its approach to making capability declarations so that it considers not only the agency's own developmental assessments, but also a U.S. Strategic Command-led assessment of the element's capabilities and limitations under operational conditions. MDA's first capability review under this new approach is expected to occur later in 2009. Oversight of MDA is executed by the Under Secretary of Defense for Acquisition, Technology and Logistics. Because MDA is not subject to DOD's traditional joint requirements determination processes and because it utilizes flexible acquisition practices, DOD developed alternative oversight mechanisms.
For example, in 2007 the Deputy Secretary of Defense established the Missile Defense Executive Board, which is to provide the Under Secretary of Defense for Acquisition, Technology and Logistics or Deputy Secretary of Defense, as necessary, with a recommended ballistic missile defense strategic program plan and feasible funding strategy for approval. In September 2008, the Deputy Secretary of Defense also established the BMDS Life Cycle Management Process, and directed the board to use the process to oversee the annual preparation of a required capabilities portfolio and develop a program plan to meet the requirements with Research, Development, Test, and Evaluation; procurement; operations and maintenance; and military construction in defensewide accounts. DOD is currently undertaking a review of its approach and requirements for ballistic missile defenses. In the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, Congress required DOD to prepare a review of the ballistic missile defense policy and strategy of the United States. Among other matters, the congressionally mandated review is to address the full range of ballistic missile threats to the United States, deployed forces, friends, and allies; the organization, discharge, and oversight of acquisition for ballistic missile defense programs; roles and responsibilities of the Office of the Secretary of Defense, defense agencies, combatant commands, the Joint Chiefs of Staff, and military departments in such programs; DOD’s process for determining the force structure and inventory objectives for ballistic missile defense programs; the near-term and long-term affordability and cost-effectiveness of such programs; and the role of international cooperation on missile defense in the ballistic missile defense policy and strategy of the United States. Congress required DOD to provide a report on its review by January 31, 2010. 
This report is one in a series of reports we have issued on ballistic missile defense that have identified key acquisition, management, and operational challenges associated with the development of the BMDS. In August 2009 we published a report identifying actions that DOD needs to take to improve planning and to increase the transparency of total costs for the proposed European Interceptor Site and European Midcourse Radar elements. In March 2009, we issued our sixth annual assessment of DOD's progress in developing the BMDS; this report concluded that although MDA had shown the benefits of its flexible acquisition practices by fielding and improving upon an initial ballistic missile defense capability since 2005, this approach had also limited the ability of DOD and congressional decision makers to measure MDA's progress on cost, schedule, testing, and performance. In September 2008, we found that although DOD had begun preparing for BMDS operations and support, difficulties in transitioning these responsibilities from MDA to lead services had complicated long-term planning to operate and support the elements over their life cycle. In July 2008, we reported that DOD had taken some steps to address the combatant commands' ballistic missile defense needs, but had yet to establish an effective process for identifying and addressing the overall priorities of the combatant commands when developing ballistic missile defense capabilities. We reported in May 2006 that DOD had begun preparations to operate ballistic missile defenses, such as identifying lead services, but had not established the criteria that must be met before the BMDS can be declared operational. DOD has identified its needs for establishing an initial and evolving ballistic missile defense capability, but lacks the comprehensive analytic basis needed to make fully informed decisions about the overall mix of elements and interceptors that it requires.
A knowledge-based decision-making approach can help to provide the comprehensive analytic basis needed to establish missile defense policies and strategies and determine funding priorities. For ballistic missile defense, such an approach would require full examination of the optimal type and quantity of various ballistic missile defense elements and interceptors needed to meet all of DOD's requirements, a complex task due to the many factors that should be considered, including the evolving nature of the threat and emerging technologies. For example, the same mix of Aegis BMD ships and THAAD batteries provides different defensive coverage depending on whether the elements are acting autonomously or are integrated with another X-band radar. However, DOD's assessments of missile defense requirements prepared to date were limited in scope primarily because they were prepared for specific purposes. The Joint Staff, for example, conducted studies to identify the minimum interceptor quantities needed for certain ballistic missile defense elements designed to defend against short-to-intermediate-range threats. Additionally, the combatant commands have analyzed their ballistic missile defense requirements for their specific regions, and the services have studied requirements for specific elements. Without a comprehensive analytic basis that identifies the full range of operational type and quantity requirements for ballistic missile defense elements, DOD may not be acquiring the optimized mix of elements and interceptors that would provide the most effective missile defense. MDA identified how many and what type of ballistic missile defense elements were needed to begin fielding an initial set of capabilities in 2004 and to evolve the BMDS over time.
Directed by the President in 2002 to begin fielding an initial set of missile defense capabilities in 2004, MDA undertook the major early assessments that established DOD’s initial and evolving ballistic missile defensive capability, which formed the foundation of the current BMDS. According to a February 2004 MDA briefing, the initial defensive capability prepared in response to the President’s policy direction included the Cobra Dane Radar Upgrade, the Beale Upgraded Early Warning Radar, up to 20 ground-based interceptors located in Alaska and California, command and control in Colorado, and sea-based radars deployed aboard Aegis ships. Additionally, based on the President’s policy direction and direction from the Secretary of Defense, also issued in 2002, MDA planned to expand the initial capability over time. To do so, MDA conducted internal studies and developed plans in 2002, 2003, and 2004 that identified the quantities of elements and interceptors it needed for research and development purposes and to defeat long-range ballistic missiles from rogue states. As of February 2005, these studies resulted in plans for fielding 48 ground-based interceptors to address the long-range ballistic missile threat, with 36 of the interceptors planned for fielding in Alaska, 2 in California, and 10 in Europe. The studies also resulted in plans to establish a network of sensors—including radars aboard Aegis ships and land-based radars in North America, Asia, and Europe. Additionally, MDA planned to build up to 48 THAAD interceptors and 101 Aegis BMD interceptors by the end of calendar year 2011 as part of its efforts to develop and field capabilities to defeat short-, medium-, and intermediate-range ballistic missiles. However, these initial plans did not define DOD’s overall requirements for ballistic missile defense elements and interceptors. 
In particular, MDA’s analyses were primarily focused on addressing the requirements of an initial and evolving ballistic missile defense capability and were not intended to address all of DOD’s operational requirements for performing ballistic missile defense missions worldwide. Establishing requirements for ballistic missile defense involves balancing several interrelated factors. A comprehensive analytic basis would include determining the optimum types and numbers of ballistic missile defense elements and interceptors for performing missile defense missions worldwide. However, optimizing the quantities of each element and interceptor involves many factors, including the integration of various types of ballistic missile defense elements, various risk assessments, the potential contributions of friends and allies, optimizing elements that can address multiple threats, and the evolving nature of the threat and emerging technologies. Our prior work shows that a knowledge-based decision-making process can help to provide the comprehensive analytic basis needed for establishing funding priorities, including determining the affordability of DOD’s missile defense policies and strategies. A knowledge-based decision-making process includes providing decision makers with evidence that warfighting requirements are valid, that they can be met with the chosen weapon system designs, and that the chosen designs can be developed and produced within existing resources. Optimizing the numbers and types of each element and interceptor needed involves looking across the BMDS to see how the different elements can best work together as an integrated system. According to the Director of MDA, the integration of the many ballistic missile defense elements into a system makes the BMDS more effective than would the individual elements operating independently. 
Integration may include improving systems integration among elements, adding a different type of interceptor, adding a sensor, or a combination of these and other options in order to increase a defended area. For example, figure 1 illustrates how the same mix of Aegis BMD ships and THAAD batteries provides vastly different defensive coverage depending on whether the elements are acting autonomously (smaller coverage) or are integrated with a radar (larger coverage). Increased integration could therefore affect requirements, perhaps lessening the quantity of elements needed to defend an area. However, Air Force officials told us that the cost of integrating elements could be high enough in some circumstances that it may be more efficient to purchase additional elements and interceptors. Assessments of the threat and other risk assessments are also factors affecting overall requirements for the types and quantities of missile defense elements and interceptors. According to the Director of MDA, optimizing the size and type of the ballistic missile defense force requires an operational risk assessment of the adversary’s ballistic missile arsenal that would have to be engaged. It also requires understanding the capabilities and limitations of BMDS elements needed to counter these threats, an understanding that continues to improve with additional testing. For example, the required number of ground-based interceptors needed to defend the United States from long-range threats would be affected if additional testing were to reveal an increase or decrease in the expected capability of that type of interceptor. Office of the Secretary of Defense and U.S. Strategic Command officials told us that risk assessments should also consider the extent to which different kinds of elements and interceptors provide redundant coverage. 
Air Force officials added that redundant capabilities should be considered when optimizing force structure, stating that even if there were a single element that could provide defensive coverage for an entire region, an optimized force structure may include additional elements so that the area would still be defended if the original element were incapacitated. The extent to which the United States can depend upon contributions from friends and allies also can affect the determination of DOD’s optimized ballistic missile defense force structure. For example, U.S. Central Command officials told us that coordination with friends and allies on ballistic missile defenses and their purchase of ballistic missile defense elements and interceptors may allow the command to reorient its forces to fill other gaps. Similarly, U.S. Pacific Command told us that close ballistic missile defense cooperation with Japan has improved overall ballistic missile defense protection in the command’s area of responsibility, allowing the command to expand protection of critical assets. The Director of MDA testified before Congress in June 2009 that if cooperative efforts with Russia were successful in integrating some radar facilities, it could enhance the ability of ground-based interceptors in Alaska and California. Finally, in regard to the proposed ballistic missile defense sites in Europe, DOD and the North Atlantic Treaty Organization (NATO) have been exploring ways to link U.S. missile defense assets with NATO’s missile defense efforts. In April 2008, NATO declared its intention to develop options for a comprehensive missile defense architecture to extend coverage to all allied territory and populations not otherwise covered by the proposed U.S. system. A key factor affecting the requirements for some elements is that they are designed to address multiple types of ballistic missile threats. 
For example, potential choices about whether to use the interceptors based in Europe as a reserve to defend the United States or to use them to intercept all incoming long-range threats regardless of the intended target could significantly affect how many ground-based interceptors would be needed overall. Similarly, the Aegis BMD element was designed both to provide search and track capabilities to help the Ground-based Midcourse Defense element defend the United States and to serve as a stand-alone element capable of defending deployed U.S. forces and population centers abroad from shorter-range threats. In addition, Navy and U.S. Pacific Command officials told us that Aegis ships are also in high demand to perform other maritime missions, such as antisubmarine warfare. As a result, the use of Aegis ships as ballistic missile defense weapon systems may constrain the ability of combatant commanders to use those ships for other purposes without increasing the size of the available force structure. In coming years, as the Aegis BMD element takes on new roles to intercept longer-range missiles that are targeting the United States, regional combatant commanders who rely on the Aegis ships for multiple missions may be further constrained in how they deploy those assets. Consequently, even as the Aegis BMD element becomes more capable, requirements for Aegis force structure may increase in order to satisfy the multiple missions.

The evolving nature of the threat and emerging technologies also have implications for the quantity requirements for ballistic missile defense elements and interceptors. For example, MDA reported to Congress in July 2009 that the requirement for emplaced ground-based interceptors was reduced, in part, because the original intelligence estimate of the number of missiles that the ground-based interceptors were intended to counter was later assessed to be off by 10 to 20 missiles. Similarly, improvements in BMDS capabilities affect requirements.
For example, the Director of MDA testified before Congress in May 2009 that new ascent phase capabilities will eliminate the need for the Multiple Kill Vehicle program and would reduce overall the number of ballistic missile defense interceptors needed to defeat an attack.

Our review of DOD's analyses of its type and quantity requirements for ballistic missile defenses shows that the studies prepared to date have been limited in scope and did not create the comprehensive analytic basis for making programwide decisions about policies, strategies, and investments. MDA's initial analyses were completed for the purpose of establishing an initial and evolving set of ballistic missile defense capabilities, not to determine DOD's overall operational requirements. Similarly, we found that the assessments of ballistic missile defense quantity requirements conducted by other DOD organizations were prepared for specific purposes:

The Joint Staff conducted two analyses beginning in 2006 that identified a minimum baseline need to double the number of THAAD and Aegis BMD interceptors planned in the fiscal year 2008 budget as well as a need for an additional THAAD battery and an upgraded AN/TPY-2 forward-based radar with self-defense capability. The Joint Staff focused on THAAD and Aegis BMD interceptor inventory requirements because production decisions for additional interceptors needed to be made in DOD's fiscal year 2010 future years' funding plan in order to avoid the possibility of closing down production. Combatant commands were also voicing a demand for these capabilities in order to protect deployed U.S. forces and population centers abroad. The Joint Staff characterized the studies as an "initial mark on the wall" because the studies made assumptions that tended to drive down the identified quantities in the baseline.
For example, the studies did not factor in quantities needed for spares, training, testing, or in transit; assumed no enemy countermeasures; and assumed that ballistic missile defense command and control systems would work perfectly under operational conditions. Acknowledging these limitations, Members of Congress and DOD officials nevertheless have cited the Joint Staff studies as identifying the requirement for boosting THAAD and Aegis BMD quantities and affecting DOD's fiscal year 2010 budget request.

The geographic combatant commands regularly assess their individual requirements for ballistic missile defense forces, but these analyses are limited in scope to each command's unique area of responsibility, as assigned by the President. For example, U.S. Central Command officials told us that their requirements for ballistic missile defenses are driven by the need to protect against short- to medium-range threats from within the command's own theater. U.S. Northern Command officials told us that their requirements for ballistic missile defense forces are driven primarily by the command's need to protect against long-range strikes from states outside of their area of responsibility. U.S. Northern Command conducted an independent three-phase study on where to field ground-based interceptors that included looking at the operational benefits of an interceptor site located in the eastern United States in order to augment the planned European Interceptor Site. However, this study did not address whether MDA's budgeted requirement of ground-based interceptors—which at the time of the study included 44 interceptors in the United States and 10 in Europe—was sufficient to meet the command's requirement.
The military services have also started to perform assessments of ballistic missile defense quantity requirements, but these assessments have been limited in scope and do not attempt to optimize the number of ballistic missile defense elements and interceptors worldwide. For example, in 2007, the Navy completed a study assessing its requirement for making Aegis ships capable of performing the ballistic missile defense mission. Based on the study's findings, the Navy concluded that the entire Aegis fleet should have this capability and that ballistic missile defense was a core Navy mission. However, the Navy neither assessed the requirements for the number and type of interceptors to be used aboard these ships nor varied the mix of other elements and interceptors in order to optimize the number of Aegis BMD ships. For example, the Navy did not vary the number of THAAD, Patriot Advanced Capability-3, AN/TPY-2 forward-based radar, or other elements in order to see if that affected the requirement for Aegis BMD ships.

The Army also recently undertook a short-turnaround study to identify whether it is a better option to maintain the THAAD battery procurement plan outlined in the fiscal year 2010 budget or to buy fewer batteries and instead develop and field a more capable THAAD interceptor. The Army study is intended to explore different options for gaining the same capability that a new interceptor could provide, including placing THAAD interceptors forward of the battery and operating them remotely, as well as the use of sea- and land-based Aegis BMD interceptors. However, Army officials told us that while the study is looking at several combat scenarios, it is not intended to establish the global quantity requirements for THAAD or establish a global optimum mix of joint BMDS elements and interceptors.
Having prepared various but limited assessments of ballistic missile defense quantity requirements to support an initial and evolving ballistic missile defense capability, DOD now has the opportunity to build upon these studies to better define its overall requirements for ballistic missile defense elements and interceptors. The newly established BMDS Life Cycle Management Process, which DOD has started using to prepare an annual capabilities portfolio and program plan to meet requirements, has broadened the participation of stakeholders from across DOD in developing the annual budget proposal for ballistic missile defense capabilities development, operations, and support. The Life Cycle Management Process is designed to allow DOD to balance long-term and near-term needs by reviewing ballistic missile defense capability developments as a portfolio. However, to date the Missile Defense Executive Board, which oversees the process, has not commissioned a broad-based analysis of DOD's overall requirements, and instead has depended on more limited analyses of quantity requirements to inform its deliberations over the missile defense budget. For example, in preparing DOD's fiscal year 2010 budget proposal, and again in beginning to prepare for the fiscal year 2011 proposal, the board relied on the Joint Staff's limited analysis of THAAD and Aegis BMD requirements. The Joint Staff is completing additional studies focused on the impact of countermeasures on ballistic missile defenses and plans to study how ballistic missile defense and air defense can be integrated. However, according to Joint Staff officials, these studies do not assess ballistic missile defense requirements in their entirety.
As part of the congressionally mandated review of ballistic missile defense policy and strategy, DOD expects to examine, among other things, the appropriate balance among elements to defend against ballistic missiles of all ranges; the role of allied contributions; and options for defending Europe from Iranian ballistic missile attack. The review is required to be completed by January 2010 and is expected to inform future budget requests. Given its broad charter and short time frame, the review is not expected to include an underpinning, comprehensive analysis of all requirements. However, the policy and strategy review could potentially lead to revised ballistic missile defense requirements.

DOD has faced challenges in fully establishing units to operate five of the eight ballistic missile defense elements that have been put into operational use. DOD typically requires that major weapon systems be fielded with a full complement of organized and trained personnel. To defend against potentially catastrophic threats posed by rogue states armed with ballistic missiles, however, DOD has in some cases put ballistic missile defense elements into operational use before first ensuring that the military services had created units and trained servicemembers to operate them. DOD had in place operational units to operate the three elements that were based on existing service weapon systems, such as Aegis ships and Air Force early warning radars that were upgraded to take on ballistic missile defense capabilities. However, the five remaining elements that have been put into operational use represent new capabilities designed expressly for ballistic missile defense purposes and for which new operational units had to be created.
As a result, early fielding meant that units were not fully in place and required, in some cases, that personnel be temporarily assigned or borrowed from other organizations when the elements were put into operational use to address these potential threats. For example, the Army has faced personnel shortfalls to operate the Ground-based Midcourse Defense element, which necessitated augmentation with personnel from the Army National Guard to overcome operational readiness concerns. These personnel shortages primarily resulted from the need for Army units to participate in MDA research and development activities, which are important to improving the element's capabilities. MDA and the military services are taking steps to establish the forces needed for operations, but this may take years for some elements. DOD recognizes the challenges created by putting elements into early use, but has not set criteria requiring that operational units be in place before new elements are made available for use. In the future, emerging threats or crises could again require DOD to press developmental capabilities into use. However, until DOD reconsiders its approach to making elements available for operational use before the units are fully organized, manned, and trained to perform all of the missions they will be expected to execute, the combatant commanders will lack certainty that the forces can operate the elements as expected.

DOD's approach to ballistic missile defense development differs from its standard weapons development process in order to stress the early fielding of new capabilities. DOD practices for developing military capabilities typically require that major weapon systems complete developmental activities and then be fielded with a full complement of organized and trained personnel so that servicemembers are capable of operating the systems on behalf of the combatant commands.
DOD customarily prepares planning documents that identify organizational, personnel, and training requirements that must be established before a new weapon system can be declared operational for the first time. These requirements typically include an assessment of the military specialties needed; identification of personnel requirements; and the development of individual, unit, and joint training programs. The individual services also typically require the establishment of an operational unit that is manned with trained servicemembers before new weapon systems are used operationally. According to Army officials, the Army declares new weapon systems to be initially operational only after units have been activated and soldiers have completed collective training requirements for operating the systems. Navy and Air Force practices also emphasize establishing the organizations, personnel, and training needed to operate a weapon system before it is declared operational.

DOD adopted a unique acquisition approach for ballistic missile defense capabilities in order to meet the President's direction to begin fielding in 2004 an initial capability to defend against ballistic missiles that may carry weapons of mass destruction. In establishing MDA, the Secretary of Defense directed it to use prototype and test assets to provide early capability, if necessary, and improve the effectiveness of deployed capabilities by continuing research and development activities and inserting new technologies as they become available. Further, the Secretary gave MDA the flexibility to field ballistic missile defense systems in limited numbers when available, and to base production decisions on test performance.
Although the Secretary directed that the services provide forces to support ballistic missile defense operations, he also canceled the services' requirements documentation prepared for then-developmental programs—such as THAAD and Ground-based Midcourse Defense—because the service-generated requirements were not consistent with the BMDS developmental objectives. Additionally, the Secretary directed that BMDS development would not be subject to DOD's traditional joint requirements determination processes and would utilize certain flexible acquisition practices until a mature ballistic missile defense capability had been developed and was ready to be handed over to a military service for production and operation. Consequently, the services initially had little basis on which to determine force structure requirements for some ballistic missile defense elements, even as MDA began to develop elements and add them to the BMDS operational baseline.

Our analysis determined that the units operating the existing service systems that were modified for ballistic missile defense have been organized, manned, and trained to execute their ballistic missile defense capabilities. Such systems make up three of the ballistic missile defense elements that DOD first put into operational use by activating them in 2006 in response to North Korea's ballistic missile threat:

Upgraded Early Warning Radars. Air Force early warning radars, such as those at Beale Air Force Base and Royal Air Force Base Fylingdales, United Kingdom, were first developed and operated in the Cold War. As these radars have been modified for ballistic missile defense missions, the Air Force assigned responsibility to the 21st Space Wing for operating the Beale Upgraded Early Warning Radar, while the United Kingdom has agreed to provide forces to operate and maintain the Fylingdales radar.
The Air Force has provided stand-alone training equipment to train and qualify site personnel at the two Upgraded Early Warning Radars that DOD has already declared operational, and has certified that operational crews are fully trained at these radar sites. The Air Force has made similar preparations to begin operating a third Upgraded Early Warning Radar, located at Thule, Greenland, later in 2009.

Cobra Dane Radar Upgrade. In accepting the transfer of the Cobra Dane Radar Upgrade from MDA, which was approved by the Under Secretary of Defense for Acquisition, Technology and Logistics in February 2009, the Air Force agreed to continue to manage the radar on behalf of its multiple missions and stakeholders, while MDA agreed to fund missile defense mission-specific operations and maintenance training and to assist the Air Force in identifying mission-specific operations costs. MDA also is providing maintenance support through fiscal year 2013, when maintenance support becomes an Air Force-funded responsibility.

Aegis BMD. Aegis BMD-capable ships are operated by the Navy, and the Navy supports those ships through existing service-based infrastructure and processes. Servicemembers have been initially qualified on the ballistic missile defense mission through existing Navy commands and according to Navy practices. The Navy updated its training and personnel requirements and relied on established procedures to certify the performance of Aegis crews to perform the full range of Aegis BMD missions.

Our analysis determined that DOD has not yet put into place operational units that are fully organized, manned, and trained to execute all of their ballistic missile defense responsibilities for the remaining five ballistic missile defense elements, which were designed expressly for ballistic missile defense and thus required DOD to create new units.
In order to address existing and emerging threats, DOD used flexible acquisition practices to make these elements available for operational use before the services were fully ready to operate them. However, without fully established organizations, personnel, and training, these units faced challenges in dealing with the rapid fielding of elements, the ongoing research and development activities involving fielded elements, and the lack of an established force structure for operating the BMDS command and control system.

Operational units have faced challenges resulting from the rapid fielding of elements before the units have had all of the necessary organizations, personnel, and training in place. For example, the Army had only a few months after being named lead service to organize and train a detachment for managing the AN/TPY-2 forward-based radar, which MDA fielded in Japan and added to the BMDS operational baseline in September 2006. In contrast, the Army generally requires years to organize an operational unit, establish personnel requirements, and train servicemembers for operating a new weapon system. The rapid fielding required the Army to deploy soldiers without a complete and approved force structure for sensor management operations when MDA added the radar to the baseline. For example, the Army did not yet have a program to train Army soldiers; to mitigate this shortfall, MDA provided the first group of Army sensor managers with an orientation to the AN/TPY-2 forward-based radar and to the radar management software then in use. A U.S. Army Space and Missile Defense Command official told us that the initial servicemembers' orientation lacked the requirements, curriculum, training devices, standards, and evaluations that are generally expected to be in place as part of an initial qualification training course when the Army fields a new weapon system.
As a result of the Army’s initiative, the initial sensor managers developed their own tactics, techniques, and procedures for managing the radar before the Army had in place a training course to qualify servicemembers in sensor management. Since that time, the Army has established a training course, which has graduated a sufficient number of servicemembers projected to meet combatant command needs. Despite the Army’s successes in training servicemembers, DOD still faces interrelated organizational and personnel challenges for the sensor management of the second AN/TPY-2 forward-based radar, which MDA fielded in Israel and made available for contingency operations in November 2008. At the time DOD fielded the radar, the Europe-based Army unit responsible for sensor management operations lacked both the organizational structure and sufficient personnel to perform these functions on a continual basis. Rather, the unit was organized and manned to perform air and missile defense operations on behalf of U.S. European Command, including command and control operations of Patriot air and missile defense forces, and air and missile defense operational and exercise planning. To minimize the potential risk to the unit’s primary missions as it performed the newly assigned sensor management operations, the Air Force has deployed servicemembers, at U.S. European Command’s request, and will deploy them throughout 2009 to augment the unit. However, these deployments have not fully addressed the stress to the unit. In March 2009, the Commander, U.S. European Command, testified that the unit’s increasing requirements were “a moving target” and would demand considerable flexibility to identify and resource them in the near- to mid-term. U.S. 
Army Space and Missile Defense Command officials told us that the Army has established an operational unit in its force structure planning system to provide sensor management for the second AN/TPY-2 forward-based radar; however, the officials added that the Army has not activated the unit because DOD has not determined whether the radar will be permanently fielded in Israel.

The Sea-based X-Band Radar was first declared available for contingencies in 2008, and has been made operational for brief periods, without the full Navy force structure in place. Unlike Aegis BMD, which is based on existing Navy ships and support systems, the Sea-based X-Band Radar is a new system. In March 2007 the Navy agreed in principle to become the lead service for the Sea-based X-Band Radar, which could transfer to the Navy as early as 2011. However, to transfer to the Navy, the Sea-based X-Band Radar element must pass a Navy inspection; and the combatant commands must determine not only that the element can perform all of its assigned missions, but also that the operator crew understands its current capabilities and limitations. Additionally, the Navy has agreed to the transfer of the element as long as funds for operating it are also transferred to the Navy; however, as we testified in March 2009, the transfer agreement does not specify how these funds will be transferred to the Navy in the long term. Further, the Navy had yet to determine personnel requirements for the radar. To mitigate the potential risk of an incomplete force structure before the radar transfers, MDA has provided contractor personnel to support day-to-day operations, as needed.

MDA also declared the THAAD element to be available for contingencies in September 2008, and the Secretary of Defense activated the element in the Pacific region twice during 2009, before the Army had the opportunity to fully establish the unit that will operate the first THAAD battery.
The Army activated a unit of 99 soldiers in 2008 to operate the first THAAD battery, but does not expect to complete the training and organizational activities needed to fully establish the unit and declare an initial operational capability until late in fiscal year 2010. As a result, U.S. Pacific Command and other combatant commands are operating the element during contingencies with a unit composed of a mix of MDA personnel, contractors, and Army soldiers. According to MDA's August 2008 assessment of the element's capabilities and limitations at the time it was declared available for contingencies, the nonstandard unit lacks experience in tactical operations, has not completed collective training, and requires significant external support. Despite these force structure limitations, a U.S. Pacific Command official told us that the command requires THAAD in the event of a crisis. Further, Army and MDA officials told us that the Army's approach to prepare forces to operate THAAD has been closely coordinated with MDA's schedule to acquire the element. Army officials added that the Army modified its approach from standard Army practices to more rapidly achieve an initial operational capability. However, Army officials told us that until the Army fully establishes the force structure to operate THAAD, the combatant commands may overestimate the Army's preparedness to deploy an operational unit to defend U.S. forces and population centers during a drawn-out contingency. As a result, the benefit of rapidly fielding THAAD could be offset by the risks associated with depending on a unit that does not have the full complement of organized and trained personnel.

Operational units have also faced challenges resulting from ongoing research and development activities for which the units have not been organized, manned, and trained. U.S.
Army Space and Missile Defense Command officials told us that involving operational units in BMDS research and development activities can be beneficial because it allows the lead service and operational personnel to directly affect an element's development. Like other BMDS elements, the Ground-based Midcourse Defense element was put into operational use to address existing threats, but is also simultaneously being tested and refined by MDA. Consequently, the Army units responsible for operating the element are also responsible for sending operational crews to participate in MDA-sponsored tests of new capabilities, such as upgraded versions of the Ground-based Midcourse Defense element's fire control software. However, like most other Army units, the Ground-based Midcourse Defense units are not organized, manned, and trained for tasks such as the testing associated with research and development activities. As a result, the Commanding General, U.S. Army Space and Missile Defense Command, concluded in May 2009 that the mismatch between the units' available crews and their mission responsibilities was adversely affecting their operational readiness and performance of the Ground-based Midcourse Defense mission. Lacking additional crews and funding, the Commanding General determined that the units' operational requirements would preclude them from fully contributing to MDA's developmental efforts, which in turn would have a negative impact on both the operational crews' readiness and the efforts to rapidly develop the Ground-based Midcourse Defense element. To address this mismatch, the Army has agreed to temporarily activate Army National Guard soldiers to augment the units' personnel. However, the Army has not solved the long-term mismatch between operational requirements and available personnel, and has requested that U.S. Army Space and Missile Defense Command evaluate and present alternatives for meeting the long-term requirements that the mission entails.
Ongoing research and development activities, as well as upgrades to elements, also create uncertainty about the preparedness of some operational units to operate elements under realistic conditions. For example, as new versions of the Ground-based Midcourse Defense element's fire control software are installed, Army soldiers operating the software typically complete their initial qualification training, and crews are certified, according to standard Army practices. However, in August 2008, following the Army's participation in an MDA test using high-fidelity modeling and simulation capabilities, U.S. Northern Command determined that the existing training equipment provided by MDA did not adequately simulate how other ballistic missile defense elements interact with the fire control system. As a result, the Deputy Commander, U.S. Northern Command, stated that the Army's operational crews would no longer be certified on the fire control software until the crews had access to training systems that better reflected the operational behavior of BMDS elements. Since that time, MDA has installed an upgraded training system for Army operators to use. U.S. Northern Command officials told us that the upgraded training system is an improvement over the prior capability, and the Army units were using the upgraded system to train servicemembers on the next version of the fire control software. Officials from the 100th Brigade, U.S. Strategic Command, and MDA told us that MDA delayed declaring the upgraded fire control capability to be operational until the units had an opportunity to train on the upgraded operational system. However, as of July 2009 the Commander, U.S. Northern Command, had not determined whether the upgraded training capabilities were sufficient to certify the crews for operations.
MDA retains lead responsibility for the command and control element, or C2BMC, unlike the other ballistic missile defense elements, which are being made part of the military services' force structure. According to MDA, retaining responsibility for C2BMC helps the agency control the configuration of the element as it is upgraded to more capable versions. Therefore, none of the services have been required to create units, train personnel, or provide servicemembers to the combatant commands to operate the C2BMC element. However, unlike the services, MDA lacks the responsibility for providing forces to support military operations. As a result, the combatant commands have had to identify and organize C2BMC operators from within their existing resources by drawing upon servicemembers who are already deployed to the commands for other warfighting responsibilities.

MDA has provided personnel and training to support the combatant commands' C2BMC operational requirements, but additional steps are needed to ensure that the combatant commands' needs are met. The C2BMC element is the integrating element that makes the BMDS a global system by providing combatant commanders with communications links, real-time battle information to make decisions, and a planning capability to optimize the fielding of ballistic missile defense forces on a global scale. It is also used to perform sensor management of the AN/TPY-2 forward-based radar, and future C2BMC versions are expected to have the capability to control additional sensors. To help meet the combatant commands' operational needs, MDA has trained hundreds of servicemembers who were already assigned to the combatant commands; through the end of 2008, MDA trained more than 200 personnel at U.S. Pacific Command and the Navy Pacific Fleet, 250 personnel at U.S. Northern Command, and more than 175 personnel at U.S. Strategic Command.
MDA also deploys its own personnel to 26 locations around the world to help the combatant commands and other users operate the element. However, according to U.S. Army Space and Missile Defense Command officials, the combatant commands' inability to identify and request additional personnel from the services to operate the C2BMC element creates a potential personnel shortfall in combatant commanders' operations centers, which may become acute during a crisis when there are not enough personnel to effectively perform all required activities. Officials from U.S. Army Space and Missile Defense Command and the U.S. Pacific Command-based Army unit using the C2BMC element also told us that the detachment responsible for managing the AN/TPY-2 forward-based radar can become overtaxed by the responsibility to operate the C2BMC element for other functions and purposes. Though none has been designated the lead service for the C2BMC element, the Army, Navy, and Air Force have started preparing to support the organizational, training, and personnel requirements to operate ballistic missile defense command and control and battle management systems. Such requirements could grow as MDA continues to add functions to the C2BMC element. Although the services have not established personnel requirements for operating the C2BMC system, DOD officials told us that future versions of the software may require crews of up to five personnel per shift. Moreover, at present MDA trains only individual servicemembers, not crews, to operate the C2BMC system. Furthermore, as of July 2009, the services' effort to establish requirements for the C2BMC element is in its very early stages. Until the services determine their respective requirements for manning and training for the C2BMC element, operational risks and impacts will persist.

DOD has taken steps to evaluate the operational capabilities and limitations of ballistic missile defenses when they are first made available for operations.
DOD recognized the potential operational risk of using developmental ballistic missile defense elements for military operations following the fielding of the AN/TPY-2 forward-based radar to Japan in 2006. In 2006, we also recommended that DOD develop operational criteria for evaluating ballistic missile defense elements before the Secretary of Defense declares the elements operational. We found that without such criteria, the Secretary of Defense lacked a baseline against which to objectively assess the combatant commands' and services' preparations to conduct ballistic missile defense operations. Moreover, we found that lacking clear criteria, DOD may have difficulty determining whether the return on its significant development investment in the BMDS can be realized. Since our report was issued, U.S. Strategic Command's functional component for integrated missile defense has developed and begun evaluating ballistic missile defense elements against operational criteria to help the combatant commands and element operators understand the capabilities and limitations of ballistic missile defense elements as they are added to the BMDS operational baseline. However, these criteria were not designed to evaluate the extent to which the services had fully established the organizations, training, and personnel needed to operate ballistic missile defense elements.

In May 2009, MDA updated its BMDS Master Plan to more fully consider the extent to which the services are developing the organizations, personnel, and training needed for operations when declaring that an element has achieved Early Capability Delivery, which is the first point at which the element is made available for operational employment in defense of the United States or U.S. allies. MDA's plan incorporates reviews of the elements' performance under the commands' operational criteria before the MDA Director makes capability delivery declarations.
The updated plan also states that MDA will support service and combatant command requirements for new equipment training, unit training, and certification, and that MDA will provide appropriate training facilities and support. These steps could help coordinate the services' force structure development with MDA's capability delivery schedule in the future. However, MDA's updated plan does not require that the organizations, personnel, and training of the operational unit be in place before MDA makes an Early Capability Delivery declaration, or before the Secretary of Defense subsequently activates the element.

The tension between the early fielding of ballistic missile defense capabilities and the desirability of preparing units to operate these capabilities was reflected in the views expressed by officials from across DOD during our review. Officials from the Office of the Secretary of Defense told us that MDA's flexibility to shift resources when developing and fielding ballistic missile defenses has allowed DOD to employ ballistic missile defense capabilities more quickly than if the services had been responsible for their development. Such flexibility continues to reflect the urgency and national priority of the ballistic missile defense mission. However, these officials stated that it was appropriate to consider a ballistic missile defense element to be part of the respective service's force structure once MDA declared that the element had achieved Early Capability Delivery. Office of the Secretary of Defense, U.S. Strategic Command, and Army officials emphasized the need to establish a lead service early in development and to provide adequate lead time to establish an operational force structure before operating elements.
For example, Army officials told us that the Army has established the operational units needed to perform ballistic missile defense missions, but agreed that the previous lack of coordination with MDA on the timing of fielding missile defense elements and declaring them operational has been problematic. Navy officials told us that the Navy does not recognize distinctions among MDA's capability delivery declarations; the Navy does not consider a ballistic missile defense element to be operational until the element has been fully incorporated into the Navy force structure. A U.S. Pacific Command official told us that some crises could require DOD to put developmental capabilities to operational use, adding that shifting emphasis to the establishment of the services' force structure could delay the availability of ballistic missile defense capabilities to the combatant commanders. However, the official agreed that it was reasonable for DOD to ensure that the services had fully established the units' organizations, personnel, and training needed to operate ballistic missile defenses before the elements were declared available for operations, provided that such assurances reflected a broader shift in DOD's policy goals from fielding systems quickly to the more deliberate development of capabilities that can be readily operated over sustained periods.

Better linkage between force structure development and element fielding plans is important because the currently configured BMDS is the starting point for additional capabilities and elements that await future deployment. For example, MDA plans to field and declare operational additional AN/TPY-2 forward-based radars; although the Army now has in place the units to operate these radars in its force structure plans, the Army requires time to activate these units and prepare them for operations.
Similarly, although both the Army and the Air Force have started planning to operate the proposed European Interceptor Site and European Midcourse Radar elements, which would be fielded in Europe to defend against ballistic missiles launched from the Middle East, both services will require time to prepare the operational units in order to be ready when MDA completes the development and fielding of these systems. Additionally, DOD's fiscal year 2010 missile defense budget proposal shifts emphasis toward developing new ascent phase capabilities, which are expected to intercept ballistic missiles before they can release countermeasures to defeat U.S. defenses. As DOD makes this shift, MDA and the services will need to closely coordinate their efforts in order to avoid the challenges that affected the operations of previously fielded elements.

Ballistic missile defense elements and interceptors of various types are in demand from the geographic combatant commands, but DOD faces high long-term costs to develop, acquire, operate, and support ballistic missile defense capabilities. Thus far, decisions regarding the shape and structure of the BMDS have been made based on policy first established in 2002 and on limited analyses of force structure options. DOD's analyses to date have helped the department understand some of its requirements and inform its policies, but these analyses are incomplete and have not covered the full range of ballistic missile defense missions. DOD's ongoing review of its ballistic missile defense policy and strategy provides a good opportunity for DOD to reassess its ballistic missile defense priorities and needs. However, the review is moving forward without the benefits that a comprehensive assessment of DOD's quantity requirements would provide.
Without a knowledge-based, comprehensive analytic foundation for making decisions, one that includes careful assessments of its overall ballistic missile defense quantity requirements, DOD will continue to lack crucial data it needs to make the best possible policy, strategy, and budgetary decisions for ballistic missile defense.

Making BMDS elements available for operational use before units were fully established reflected DOD's sense of urgency to rapidly field defenses against potentially catastrophic threats. However, now that some ballistic missile defenses are in place, the risk of putting additional elements in use before operational units are fully established must be weighed against the marginal benefits, absent an imminent threat. Looking forward, reassessing this approach is important because DOD has several elements in development that may be fielded in coming years, including additional forward-based radars, the interceptors and radars that are planned for fielding in Europe, and new elements associated with ascent phase intercept.

To establish the foundation needed to make effective policy, strategy, budgetary, and acquisition decisions, we recommend that the Secretary of Defense take the following two actions:

Direct the preparation and periodic updating of a comprehensive analysis of the types and quantities of ballistic missile defense elements and interceptors that are required for performing ballistic missile defense missions worldwide. The analysis should consider the integration of elements; risk assessments of the threat, capabilities and limitations of the BMDS, and redundancy requirements; allied contributions; the employment of elements that can perform multiple types of ballistic missile defense missions and other missions; and any other relevant factors identified by the department.
Use this analysis as a foundation for evaluating DOD's ballistic missile defense developmental and acquisition priorities in future budget requests as well as its overall ballistic missile defense policy and strategy direction.

To reduce the potential risks associated with operating ballistic missile defense elements with insufficient force structure, we further recommend that the Secretary of Defense require, in the absence of an immediate threat or crisis, that operational units be established with the organizations, personnel, and training needed to perform all of their ballistic missile defense responsibilities before first making elements available for operational use.

In written comments on a draft of this report, DOD partially concurred with one and concurred with two of our recommendations. DOD's comments are reprinted in appendix II. DOD also provided technical comments that we incorporated as appropriate.

DOD partially concurred with our first recommendation to prepare and periodically update a comprehensive analysis of the types and quantities of ballistic missile defense elements and interceptors that are required for performing ballistic missile defense missions worldwide. In its comments, DOD validated the need for a comprehensive and recurring analysis. DOD indicated that the ongoing ballistic missile defense review will develop the strategic themes and analytic bases to be used in future analyses. DOD also noted the interrelationships between ballistic missile defense and air defense, and that a comprehensive assessment must include these defenses. Moreover, DOD stated that decisions related to ballistic missile defenses must factor in the priorities of other government agencies, such as the State Department. In our recommendation, we stated that DOD should consider any other relevant factors it identifies, and the inclusion of air defense and the priorities of other government agencies can reasonably be seen as such relevant factors.
DOD intends to perform a detailed assessment of ballistic missile defense requirements during each Quadrennial Defense Review cycle and once in the intervening years. Overall, we generally agree with DOD's suggested approach to implementing our first recommendation; such steps, if taken, would meet its intent.

In its response to our second recommendation that DOD use the comprehensive analysis as a foundation for future ballistic missile defense budget requests and for setting policy and strategy direction, DOD concurred and indicated that this analysis would be used to shape ballistic missile defense developmental and acquisition priorities in future budget requests, and to shape overall ballistic missile defense policy, strategy, and future deployment options. However, until DOD conducts this detailed assessment of its overall ballistic missile defense quantity requirements, it will continue to lack crucial data needed to make policy, strategy, and budgetary decisions.

DOD concurred without comment with our third recommendation to require, in the absence of an immediate threat or crisis, that operational units be established with the organizations, personnel, and training needed to perform all of their ballistic missile defense responsibilities before first making elements available for operational use. Our recommendation recognizes that, facing an immediate threat or crisis, DOD may need to field elements without first fully establishing operational units. However, now that some ballistic missile defenses are in place, we continue to believe that DOD must carefully weigh the risk of putting additional elements in use before operational units are fully established against the marginal benefits of rapid fielding.

We are sending copies of this report to the Secretary of Defense; the Director, Missile Defense Agency; the Chairman, Joint Chiefs of Staff; the Commander, U.S. Strategic Command; and the Chiefs of Staff and Secretaries of the Army, Navy, and Air Force.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

During this review, we evaluated the Department of Defense's (DOD) assessments, prepared since 2002, of the types and quantities of ballistic missile defense elements required for ballistic missile defense missions, and DOD's efforts to establish the units to operate elements that have been put into use through July 2009.

To determine the extent to which DOD has identified the types and quantities of ballistic missile defense elements that it requires, we identified, obtained, and reviewed key guidance, studies, and analyses from the Office of the Secretary of Defense, Missile Defense Agency (MDA), the Joint Staff, U.S. Strategic Command, other combatant commands, and the military services. These documents included memorandums from the Office of the Secretary of Defense and DOD Directive 5134.9, Missile Defense Agency (MDA), dated October 9, 2004, which established MDA and directed the development of the Ballistic Missile Defense System (BMDS); Office of the Secretary of Defense budget guidance establishing the goals and objectives of the BMDS; and direction from the Deputy Secretary of Defense establishing the Missile Defense Executive Board and BMDS Life Cycle Management Process. We obtained and reviewed classified briefings summarizing MDA studies, including the September 26, 2002, Missile Defense Agency Response to Defense Planning Guidance Tasking; the October 26, 2004, briefing titled Missile Defense Capability; and the March 23, 2007, European Site Technical Rationale.
We confirmed with MDA officials that these studies constituted the key initial MDA analyses outlining the types and quantities of elements and interceptors constituting the BMDS. We also obtained and reviewed unclassified briefings summarizing MDA’s 2002-2004 plans to establish an initial and evolving defensive capability against ballistic missile threats. To understand the Joint Staff’s roles and contributions to determining DOD’s quantity requirements for ballistic missile defense elements and interceptors, we obtained and reviewed briefings summarizing the Joint Staff’s studies of Aegis Ballistic Missile Defense (Aegis BMD) and Terminal High-Altitude Area Defense (THAAD) quantity requirements, including the 2006 Joint Ballistic Missile Defense Capability Mix Study and the subsequent Ballistic Missile Defense Joint Capability Mix II and Ballistic Missile Defense Joint Capability Mix Sensitivity Analysis studies. To understand how these studies were used to develop MDA’s fiscal year 2010 budget request, we obtained and reviewed key memorandums from the Joint Staff and the Office of the Secretary of Defense. We also obtained and reviewed guidance approved by the Deputy Secretary of Defense establishing the Joint Staff’s and U.S. Strategic Command’s roles to develop analytical studies that are to be used as the basis for developing annual BMDS budget proposals. From U.S. Strategic Command, we obtained and reviewed Strategic Command Instruction 538.3, Warfighter Involvement Process, dated June 2008, and the 2007 Prioritized Capabilities List to help us to understand the command’s role in identifying and advocating for BMDS quantity requirements. We also used the U.S. Strategic Command documentation to identify key geographic combatant commands with ballistic missile defense requirements. These commands are U.S. Central Command, U.S. European Command, U.S. Northern Command, and U.S. Pacific Command. 
We then obtained and reviewed briefings and other documents to understand the extent to which these commands had identified quantity requirements for ballistic missile defense elements and interceptors. We also identified and reviewed Army and Navy analyses to identify the quantities of key elements. We analyzed DOD's various studies by comparing them with criteria for establishing a knowledge-based approach to acquiring major weapon systems, which we established based on our prior work on knowledge-based acquisition and on DOD documentation. We also met with officials from the Office of the Secretary of Defense, Joint Staff, MDA headquarters and element program offices, key geographic combatant commands, U.S. Strategic Command, and each of the military services to discuss DOD's efforts to establish type and quantity requirements for ballistic missile defense force structure, their respective roles and responsibilities in preparing such analyses, and the challenges of doing so.

To determine the extent to which the military services have established the units needed to operate ballistic missile defense elements, we performed our work at each of the military services, MDA, the Office of the Secretary of Defense, and key combatant commands. During our work at each of the services, we adopted an element-by-element approach to review the progress made by each service:

To review the extent to which the Air Force has established units for operating the Upgraded Early Warning Radars and the Cobra Dane Radar Upgrade, we obtained and reviewed Air Force plans for declaring the Beale and Fylingdales radars operational. We obtained Air Force memorandums declaring whether the radars had met Air Force operational criteria for being considered initially operational. We also met with officials from the Air Force Air Staff and Air Force Space Command, and submitted questions to Air Force Space Command, which provided us with written responses.
We also reviewed an agreement between the Air Force and MDA describing each organization's roles and responsibilities upon the transfer of the Cobra Dane Radar Upgrade from MDA to the Air Force.

To review the extent to which the Navy has established the force structure for Aegis BMD and Sea-based X-Band Radar elements, we obtained and reviewed Navy certifications of the Aegis BMD capability and the Pacific Fleet's December 2008 draft Sea-based X-Band Radar Concept of Operations. We also reviewed an agreement between MDA and the Navy describing each organization's roles and responsibilities for providing operational forces for the Sea-based X-Band Radar until the radar transfers to the Navy. We also met with Navy officials from the Office of the Chief of Naval Operations and from the Office of the Commander, Pacific Fleet.

To review the extent to which the Army has established units with the required organizations, training, and personnel for the Ground-based Midcourse Defense, THAAD, and AN/TPY-2 forward-based radar elements, we reviewed documentation establishing each of the Army units covered by our review. We obtained and reviewed Army doctrine for THAAD and Ground-based Midcourse Defense operations and the Army's 2009-2013 and 2010-2015 force structure plans. We obtained and reviewed key U.S. Army Space and Missile Defense Command documentation regarding a command initiative to review and update the force structure for Ground-based Midcourse Defense. We met with officials from the Army staff, U.S. Army Space and Missile Defense Command, 100th Missile Defense Brigade, 49th Missile Defense Battalion, Forward-based X-Band Radar Detachment, 94th Army Air and Missile Defense Command, and 357th Air Defense Artillery Detachment.
In addition to our work at the services, we also met with officials from MDA to discuss the agency’s perspectives and contributions to the ballistic missile defense force structure, particularly for the Command, Control, Battle Management, and Communications element. We submitted questions to each element program office and received written responses. We also obtained and reviewed key documents from the Office of the Secretary of Defense, including the BMDS 2007 Transition and Transfer Plan, which was published in February 2008. We established criteria for assessing the services’ efforts to establish units with the required organizations, personnel, and training by reviewing our prior work on planning for ballistic missile defense operations, and by obtaining and reviewing key DOD and service documents. These included Chairman, Joint Chiefs of Staff, Instruction 3170.01G, Joint Capabilities Integration and Development System; Army Regulation 71-11, Total Army Analysis; Army Regulation 71-32, Force Development and Documentation— Consolidated Policies; and Air Force Instruction 10-601, Capabilities- Based Requirements Development. We obtained and reviewed documents outlining MDA’s process and criteria for declaring elements to be available for operational use; these included MDA’s Ballistic Missile Defense (BMDS) Master Plan, version 9.1, which was signed in May 2009, and prior versions of this plan; integrated master schedules; and other MDA guidance. We determined when MDA had first delivered capabilities to the combatant commands for operational use by reviewing MDA’s initial operational baseline, dated April 2005, and subsequent memorandums issued by the MDA Director to update this baseline or declare elements to be available for contingency operations. We met with officials from U.S. Strategic Command and from the four geographic combatant commands that have identified ballistic missile defense priorities: U.S. Central Command, U.S. European Command, U.S. 
Northern Command, and U.S. Pacific Command. We also met with officials from U.S. Strategic Command’s Joint Functional Component Command for Integrated Missile Defense, who provided us with the component’s most recently completed Force Preparation Campaign Plan that outlines the command’s approach and operational criteria for assessing ballistic missile defense element performance. We conducted this performance audit from August 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Marie Mak, Assistant Director; David Best; Colin Chambers; Tara Copp Connolly; Nicolaas C. Cornelisse; Susan Ditto; and Kevin L. O’Neill, Analyst-in-Charge, made significant contributions to this report.
In 2002, the Department of Defense (DOD) began developing and rapidly fielding a global Ballistic Missile Defense System (BMDS) composed of elements that include radars, interceptors, and command and control systems. These elements are envisioned to be linked together to defend against a broad range of ballistic missile threats. In 2009, DOD began a broadly scoped review of missile defense policy and strategy intended to reassess the BMDS and set direction for the future. In response to congressional interest in missile defense requirements and operations, GAO reviewed the extent to which DOD has (1) identified the types and quantities of elements and interceptors it needs and (2) established the units to operate elements that have been put into use. GAO reviewed key analyses, studies, plans, and other documents from the Missile Defense Agency (MDA), the services, combatant commands, and Joint Staff; and interviewed officials from across DOD. DOD lacks the comprehensive analytic basis needed to make fully informed decisions about the types and quantities of elements and interceptors it needs. Such an analytic basis would include a comprehensive examination of the optimal mix of elements and interceptors needed to meet all of DOD's ballistic missile defense requirements. DOD studies prepared to date were completed for specific purposes, such as addressing regional threats. However, none of the studies have taken a comprehensive approach that addressed the full range of requirements. The Joint Staff conducted studies, for example, to identify the minimum interceptor quantities needed for certain ballistic missile defense elements designed to defend against short-to-intermediate-range threats. Additionally, the combatant commands have analyzed their ballistic missile defense requirements for their specific regions, and the services have studied requirements for specific elements. 
Without a full assessment of its overall requirements, DOD lacks the information it needs to make the best possible policy, strategy, and budgetary decisions for ballistic missile defense. DOD has faced challenges in fully establishing units to operate five of eight ballistic missile defense elements that have been put into operational use. DOD typically requires that major weapon systems be fielded with a full complement of organized and trained personnel. To rapidly field missile defenses, however, DOD has in some cases put ballistic missile defense elements into operational use before first ensuring that the military services had created units and trained servicemembers to operate them. Three of the eight elements were modifications to existing systems, like the Navy's Aegis ships, so units already existed to operate these modified elements. The five remaining elements--the midcourse defense system designed to defend the United States from long-range threats; the high-altitude, theater missile defense system; a powerful radar placed on a sea-based, movable platform; ground-based radars currently fielded in Japan and Israel; and the command and control system designed to link the BMDS together--were put into use before operational units were fully established. As a result, DOD has faced a number of challenges. For example, the Army faced personnel shortfalls to operate the midcourse defense system. These shortages affected the Army units' ability to support ongoing research and development activities and ultimately resulted in operational readiness concerns. MDA and the military services are taking steps to establish the needed forces, but this may take years for some elements. DOD recognizes the challenges created by putting elements into early use, but has not set criteria requiring that operational units be in place before new elements are made available for use. 
Looking ahead, several new elements are in development, like the radars and interceptors currently being considered for deployment in Europe, and emerging threats could again cause DOD to press those capabilities into use. Unless fully trained units are in place to support missile defense elements when they are made operational, DOD will continue to face uncertainties and operational risks associated with the elements.
The Consolidated Health Centers program is administered by the Health Resources and Services Administration's (HRSA) Bureau of Primary Health Care (BPHC). In addition to program grants from HRSA, which constitute about one-quarter of the centers' budgets, the health centers receive funding from a variety of other sources, including Medicaid and state and local grants and contracts. (See fig. 1.) In 2003, health centers reported total revenues of about $5.96 billion.

Health centers are required by law to serve a federally designated medically underserved area or a federally designated medically underserved population. In 2003, 69 percent of health center patients had a family income at or below the federal poverty level, and 39 percent were uninsured. In addition, 64 percent of patients were members of racial or ethnic minority populations, and 30 percent spoke a primary language other than English.

Health centers are private, nonprofit community-based organizations or, less commonly, public organizations such as public health department clinics. The centers are typically managed by an executive director, a financial officer, and a clinical director. In addition, health centers are required by law to have a governing board, the majority of whose members must be patients of the health center.

Health centers are required to provide a comprehensive set of primary health care services, which include treatment and consultative services, diagnostic laboratory and radiology services, emergency medical services, preventive dental services, immunizations, and prenatal and postpartum care. Centers are also required to provide referrals for specialty care and substance abuse and mental health services, and although centers may use program funds to provide such services themselves or to reimburse other providers, they are not required to do so.
In addition, a distinguishing feature of health centers is that they are required to provide enabling services that facilitate access to care, such as case management, translation, and transportation. The health care services are provided by clinical staff—including physicians, nurses, dentists, and mental health and substance abuse professionals—or through contracts or cooperative arrangements with other providers. Health center services are offered at one or more delivery sites and are required to be available to all people in the center’s service area. Services must be provided regardless of patients’ ability to pay. Uninsured users are charged for services based on a sliding fee schedule that takes into account their income level, and health centers seek reimbursement from public or private insurers for patients with health insurance. HRSA uses a competitive process to award grants to health centers. Grant applications undergo an initial review for eligibility in which HRSA screens applications based on specific criteria—the applicant must be a public or private nonprofit entity, the applicant must be applying for an appropriate grant (e.g., certain grants funded by the program are available only to existing grantees), and the application must include the correct documents and meet page limitations and format requirements. Independent reviewers who have expertise in the health center program are selected by HRSA to review and score all eligible applications. The reviewers score an application by assessing each component of the applicant’s proposal, including descriptions of the need for health care services in the applicant’s proposed service area, how the applicant would integrate services with other efforts in the community, and the applicant’s capacity and readiness to initiate the proposed services. 
The Administrator of HRSA makes final award decisions and is required to take into account whether a center is located in a sparsely populated rural area, the urban/rural distribution of grants, and the distribution of funds across types of health centers (community, homeless, migrant, and public housing). The Administrator also considers geographic distribution in making award decisions. The scope of a health center's grant is delineated in its application and consists of its services, sites, providers, target population, and service area. (See app. II for additional information on HRSA's process for awarding health center grants.)

BPHC administers several competitive grants under the Consolidated Health Centers program, including new access point, expanded medical capacity, service expansion, and service area competition grants. (See table 1.) HRSA approves funding for a specific project period—which can be up to 5 years for existing grantees and up to 3 years for new organizations—and provides funds for the first year. For subsequent years, health centers must obtain funding annually through a noncompeting continuation grant application process in which the grantee must demonstrate that it has made satisfactory progress in providing services. A grantee's continued receipt of grant funds also depends on the availability of funding.

To monitor health centers' performance and compliance with federal statutes, regulations, and policies, HRSA relies on periodic on-site monitoring reviews, as well as ongoing monitoring. Through early 2004, HRSA used BPHC's Primary Care Effectiveness Review (PCER) to provide periodic on-site monitoring of health center operations. The PCER was scheduled to occur every 3 to 5 years as a mandatory part of the competitive grant renewal process when a health center's project period was about to expire.
During on-site PCER visits, a team of reviewers identified strengths and weaknesses in health center administration, governance, clinical and fiscal operations, and management information systems. According to HRSA officials, review team members were generally not HRSA staff, but contractors. The last PCER review was conducted in March 2004. HRSA created a new process for the periodic on-site review of all agency grantees, including health centers, and reviewers from HRSA’s Office of Performance Review (OPR) began to use this new process in May 2004. OPR reviews grantees in the middle of their project period—in the second year for new grantees and in the third or fourth year for existing grantees. According to HRSA officials, a goal of the OPR performance review process is to reduce the burden on grantees by consolidating the on-site monitoring of all HRSA grants to a health center into one comprehensive review. For example, if a health center receives a Ryan White Title III HIV Early Intervention grant, the OPR performance review covers both the Ryan White grant and the Consolidated Health Centers program grant(s). Each health center review team has three or four reviewers; HRSA’s goal is for the reviewers to be OPR staff, who are located in HRSA’s regional offices, with contractors being used to supplement OPR staff only when necessary. For each health center review, the review team prepares a performance report describing its findings. As necessary, the report identifies the health center’s technical assistance needs and actions the center needs to take to ensure its compliance with program requirements. HRSA also conducts ongoing monitoring of health centers through its project officers, who serve as grantees’ main point of contact with the agency. Project officers use various tools to monitor compliance with program requirements and to assess the overall condition of health centers. 
For example, project officers review annual noncompeting continuation grant applications, conduct midyear assessments, and regularly examine available data, including financial audits and UDS data. They are also expected to have regular contact with health centers by telephone and through e-mail and to connect grantees to resources for assistance when necessary, such as referring a health center to a HRSA-funded contractor for technical assistance to improve health center operations. In July 2003, HRSA transferred project officer responsibilities from its 10 regional offices and centralized this function within BPHC to improve the consistency of program oversight. In addition, about one-third of the health centers funded under the Consolidated Health Centers program are accredited by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and receive additional periodic on-site monitoring. These reviews include an assessment of a health center’s compliance with program laws and regulations, clinical procedures, and organizational processes, such as performance improvement activities and human resource management. HRSA began promoting accreditation for health centers in 1996, and under its current agreement with JCAHO, HRSA pays the fees for health center surveys, reducing the financial burden of accreditation for health centers. HRSA also provides financial support to the National Association of Community Health Centers to encourage accreditation and educate health centers about its benefits. HRSA uses UDS data to monitor aspects of health center and overall program performance. Each year, health centers are required to report administrative data on their operations through UDS. These data include a list of each center’s service delivery sites and information about the center’s patients (e.g., race/ethnicity, insurance status); revenues; expenses; and service, staffing, and utilization patterns. 
HRSA uses UDS data to prepare its annual National Rollup Report, which summarizes the Consolidated Health Centers program; to prepare Comparison Reports, which allow the centers to compare their performance on certain measures (e.g., productivity, cost per encounter) against that of other centers; and to generate analyses that HRSA uses when evaluating the program. In March 2000, we reported on HRSA’s monitoring of the Consolidated Health Centers program. We analyzed UDS data from 1996 through 1998 and noted deficiencies in data completeness and quality. Specifically, some grantees failed to report certain data elements or reported them very late, resulting in missing data. Furthermore, we found that the data editing and cleaning processes that were in place at the time did not always correct data errors that they were designed to detect. We recommended that HRSA improve the quality of UDS data and enforce the requirement that every grantee report complete and accurate data. In response to the recommendation, HRSA reported that a new requirement was in place for grantees to submit their UDS reports electronically, which improved the timeliness and accuracy of data by eliminating the need for a second level of data entry. In addition, the agency implemented formal training for centers on how to report UDS data. Competition for new access point, expanded medical capacity, and service expansion grants increased during the first 3 years of the President’s Health Centers Initiative. For example, while HRSA funding of new access point grants decreased by about half from fiscal year 2002 to fiscal year 2004, the number of applicants rose by 28 percent. HRSA is concerned that its current process for awarding new access point grants may not be consistent with the goal of funding health centers in the neediest communities. Therefore, the agency is considering both revising the measures it uses to assess need and increasing the relative weight of need in the award process. 
Competition for new access point grants increased over the first 3 years of the President’s Health Centers Initiative. Although the majority of grant funds are awarded for continuation grants, for which funding increased, funding for other types of grants declined. (See fig. 2.) For example, funding for new access point grants decreased from about $80 million in fiscal year 2002 to about $38 million in fiscal year 2004, a 53 percent decline. At the same time, the number of eligible new access point applications increased by 28 percent. Combined with the decrease in new access point funding, this resulted in a decrease in the proportion of applicants that HRSA funded—from 52 percent of fiscal year 2002 applicants to 20 percent of fiscal year 2004 applicants. Some of these applicants received funding in the same year they applied, and others received funding the following year. (See fig. 3.) The percentage of new access point applicants HRSA funded in the same year they applied decreased from 43 percent in fiscal year 2002 to 3 percent in fiscal year 2004. In addition, HRSA approved 17 percent of the applications it received in fiscal year 2004 for funding in fiscal year 2005. Competition for expanded medical capacity and service expansion grants also increased during the President’s Health Centers Initiative. Funding for expanded medical capacity grants decreased from about $56 million in fiscal year 2002 to about $19 million in fiscal year 2004, and funding for service expansion grants decreased from about $27 million in fiscal year 2002 to about $9 million in fiscal year 2004. With the decrease in funding amounts, the percentage of funded applicants also decreased. HRSA funded 66 percent of fiscal year 2002 expanded medical capacity applicants and 57 percent of fiscal year 2002 service expansion applicants; in fiscal year 2004, it funded 34 percent and 21 percent of the applicants, respectively. 
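The percent-change figures above follow from simple arithmetic. A brief sketch, using the report's rounded "about" dollar amounts, shows why the computed value (52.5 percent) is close to, but not exactly, the 53 percent decline the report states from the exact figures:

```python
# Percent-change arithmetic behind the funding declines cited above.
# Dollar amounts are the report's rounded "about" figures, so the result
# differs slightly from the 53 percent computed from exact amounts.
def pct_decline(before: float, after: float) -> float:
    """Percent decline from `before` to `after`."""
    return (before - after) / before * 100

# New access point funding: about $80 million (FY2002) to about $38 million (FY2004)
new_access_point_decline = pct_decline(80, 38)  # ~52.5 percent with rounded amounts
```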
Although HRSA funded fewer grants to increase health center services during the second and third years of the President’s Health Centers Initiative, HRSA officials believe program funding for fiscal year 2005 and the President’s proposed budget for fiscal year 2006 will allow them to exceed the initiative’s goal. From fiscal year 2002 through fiscal year 2004, HRSA funded 334 new access point grants and 285 expanded medical capacity grants, representing about half of the initiative’s 5-year goal of providing 630 new access point grants and 570 expanded medical capacity grants. The process HRSA uses to assess the need for services in a new access point applicant’s proposed service area has changed since the beginning of the President’s Health Centers Initiative. In fiscal year 2002, new access point applicants were ranked according to both the score they received on a need-for-assistance worksheet and the score assigned by independent reviewers after they evaluated the technical merit of the application. In fiscal years 2003, 2004, and 2005, however, HRSA did not use the worksheet scores to rank applicants. Instead, it used the worksheet scores to screen applicants; only applicants that scored 70 or higher on the worksheet had their application forwarded to independent reviewers for an evaluation of its technical merit. In addition to changing the role of the need-for-assistance worksheet score, HRSA also increased the relative weight of the need criterion in the application score. In fiscal year 2002, the maximum need criterion score constituted 5 percent of the maximum total application score; in fiscal years 2003, 2004, and 2005, the maximum need criterion score constituted 10 percent of the maximum total score. 
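The two-stage screening and scoring described above can be illustrated with a minimal sketch. The 70-point worksheet screen and the 10 percent maximum need weight come from the text; the criterion names and the other point values are hypothetical, not HRSA's actual rubric:

```python
# Sketch of the FY2003-2005 two-stage process: a 70-point screen on the
# need-for-assistance worksheet, followed by a weighted application score
# in which the need criterion carries 10% of the maximum total.
# Criterion names and non-need point values below are illustrative only.

WORKSHEET_SCREEN = 70  # minimum worksheet score to reach independent reviewers
MAX_POINTS = {"need": 10, "response": 50, "capacity": 40}  # need = 10% of 100

def passes_screen(worksheet_score: float) -> bool:
    """Return True if the application is forwarded for technical review."""
    return worksheet_score >= WORKSHEET_SCREEN

def application_score(criterion_scores: dict) -> float:
    """Sum reviewer scores, capping each criterion at its maximum."""
    return sum(min(criterion_scores.get(c, 0), cap) for c, cap in MAX_POINTS.items())

# An applicant scoring 85 on the worksheet is forwarded for review; its
# need description can contribute at most 10 of the 100 total points.
forwarded = passes_screen(85)
score = application_score({"need": 10, "response": 45, "capacity": 35})
```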
HRSA has raised concerns that its current process for assessing the need for services in a new access point applicant’s proposed service area may not be consistent with the goal of the President’s Health Centers Initiative to fund health centers in the neediest communities. HRSA reported that the process had resulted in little distinction among applicants’ need-for-assistance worksheet scores and that almost all applicants received a score of 70 or higher. During the first 3 years of the President’s Health Centers Initiative, only 24 of 1,346 applications scored lower than 70 points. In addition, HRSA reported that the relative weight assigned to an applicant’s description of the need for health care in its proposed service area (10 percent) might be too low. In light of these concerns, HRSA commissioned a study to evaluate whether the measures in the need-for-assistance worksheet reflected the relative need of different applicants and whether the review criteria were weighted appropriately to ensure that grants were awarded to the neediest communities. The report, which was issued in November 2003, recommended several changes, including revising measures in the need-for-assistance worksheet and increasing the maximum need score from 10 percent to 20 percent of the maximum total score. In response to these recommendations and feedback from program applicants, HRSA is considering revising the method it uses to assess the need for services in new access point applicants’ service areas. On February 4, 2005, HRSA issued a Federal Register notice seeking comments on a proposal to change the measures used in the need-for-assistance worksheet and to substitute the need-for-assistance worksheet for the current need criterion in the grant application. HRSA also sought comments on what weight the agency should give need in the application score. Comments on the Federal Register notice were due on March 7, 2005, and HRSA expected to complete its analysis by June 2005. 
HRSA reported it would delay the May 23, 2005, due date for new access point applications until its analysis was complete. To further strengthen its ability to award new access point grants in the neediest communities, HRSA has indicated that it may focus its efforts on high-poverty counties without a health center delivery site. In its fiscal year 2006 budget justification, HRSA noted that, without special attention to high-poverty counties, the current award process may result in some of these counties not having a health center site. For example, it may be difficult for an applicant in a high-poverty county to demonstrate its financial viability. In the budget justification, HRSA requested funds specifically for awarding new access point grants to centers serving high-poverty counties and planning grants to community-based organizations to support the establishment of centers in such counties. The number of health centers receiving new access point grants varied widely by state during the first 3 years of the President’s Health Centers Initiative. During that period, HRSA awarded 334 new access point grants, with at least one grantee in each state. About half of the grantees were in 10 states—Alaska, California, Illinois, Massachusetts, New Mexico, New York, Oregon, South Carolina, Texas, and Virginia. The number of grantees in each state ranged from 57 in California to 1 each in Delaware, the District of Columbia, Kansas, and Wyoming. (See app. III for additional information on the number of new access point grants by state and territory. See app. IV for the numbers of all health center grantees, by state and territory, operating in 2001—before the initiative began—and in 2003—the most recent year for which data were available at the time we conducted our review. Figure 4 shows the location of health centers that HRSA was funding in 2003.) In 2003, the distribution of all health center grantees was 48 percent urban and 52 percent rural. 
HRSA is required by law to make awards so that 40 to 60 percent of patients expected to be served reside in rural areas. HRSA officials told us that the agency meets this requirement by ensuring that the proportion of awards to rural health centers is from 40 to 60 percent. Based on the numbers of patients reported by health centers to the UDS, the proportion of patients served by urban health centers in 2003 was 54 percent and the proportion served by rural centers was 46 percent. While HRSA can provide information on the geographic distribution of health center grantees, it does not have reliable information on the number and geographic distribution of the delivery sites where the centers provide care. In its budget justification documents and Government Performance and Results Act reports, HRSA has used the number of delivery sites it funds to provide information on its progress toward achieving its goals for the Consolidated Health Centers program. For example, in its fiscal year 2005 performance plan, HRSA has a performance goal of increasing access points in the health centers program, and it used 2001 UDS data on the number of health center delivery sites as a baseline to measure progress toward this goal. HRSA, however, is not confident that UDS data accurately reflect the number of sites supported by program dollars. HRSA officials told us that the agency does not verify the accuracy of the delivery site information grantees provide to UDS. They also said that UDS delivery site data through 2003 may include sites not funded by the health centers program and sites that HRSA did not approve in the scope of a health center’s grant. Moreover, HRSA has been reporting inconsistent data on the number of health center delivery sites in the program. 
For example, in its fiscal year 2005 performance plan, HRSA reported funding 3,588 delivery sites in fiscal year 2003, consisting of 3,317 delivery sites operating in fiscal year 2001 and 271 new access point grants funded in fiscal years 2002 and 2003; however, some of the new access point grants represent more than one delivery site. As a result, HRSA underestimated the number of new program delivery sites operating in fiscal years 2002 and 2003. HRSA’s new tool for periodic on-site review of health centers—the OPR performance review—focuses on monitoring individual health centers’ performance on selected measures, including health outcome measures. The OPR performance review generally does not provide HRSA with standardized performance information for evaluating the Consolidated Health Centers program as a whole. However, the agency is using other data collection tools, such as its Sentinel Centers Network, that could help it measure overall program performance. HRSA also uses UDS to monitor aspects of health centers’ performance, and the agency has taken steps to improve the accuracy and completeness of that data set. HRSA’s new health center reviews, conducted by OPR staff, focus on evaluating selected measures of performance and identifying ways to improve health centers’ operations and performance. OPR works with each health center to select three to five measures that reflect the specific needs of the center’s community and patient population, and then to ascertain the health center’s current performance on each measure. For the health centers we contacted that had undergone the OPR performance review, most of the measures were health outcome measures. These measures included the average number of days that asthmatic patients are symptom free, percentage of patients age 60 or older receiving influenza and pneumonia immunizations, and percentage of low-birth-weight infants born to health center patients. 
Health centers may set performance goals related to these measures. For example, one health center adopted the goal set by Healthy People 2010 of reducing the percentage of low-birth-weight infants born to its patients to less than 5 percent. HRSA officials told us that the agency intends to follow up annually on grantees’ performance on these measures. When possible, HRSA plans to track progress using data the grantee already reports. For example, HRSA would be able to use UDS data to track progress on the number of health center patients receiving care. HRSA officials told us that because the OPR performance reviews began recently, the agency is still determining how it will track performance on other measures, including many related to patient health outcomes. After assessing the health center’s performance on each measure, the review team analyzes the factors that contribute to and hinder the center’s performance on these measures, including the processes and systems the health center uses in its operations. During an on-site visit, the review team meets with health center staff to discuss these factors and determine which are the most important to address. The review team also identifies potential actions that could help the center improve its performance and identifies possible partners in making improvements. For example, to improve one health center’s performance on its low-birth-weight measure, the review team suggested the center undertake provider and patient education, training for health center staff, continued partnerships with other service providers and community groups, and an analysis of patient medical charts to identify the risk factors of patients who gave birth to low-birth-weight infants. HRSA requires that grantees develop an action plan to improve performance in response to the review team’s findings. 
The action plan describes the specific steps the grantee plans to take to improve performance on each measure and provides estimated completion dates. For example, the health center discussed above proposed hiring an outside physician to conduct chart reviews and showing a video on cultural competence to all staff as two specific actions to improve performance on its low-birth-weight measure. While the OPR review primarily focuses on health centers’ performance on specific measures, the reviews also verify key aspects of health centers’ compliance with Consolidated Health Centers program requirements. The review teams examine information HRSA maintains on each health center, including grant applications and financial audits. According to HRSA officials, OPR reviewers also follow up on concerns identified by project officers, who are the agency’s primary means for ongoing monitoring of health center operations and compliance. If the review team identifies any instances of noncompliance with program requirements—such as those related to the types of services the center must provide and the composition of its governing board—HRSA requires grantees to address them in the action plan. HRSA officials told us they hoped that in addition to providing information on individual health centers, the OPR performance reviews would result in information that could improve other centers’ services and operations. HRSA officials said that as reviewers gained more experience in evaluating health centers, they would be better able to identify best practices that contribute to outstanding patient health outcomes and share these practices among health centers. HRSA officials told us that OPR planned to use this information to develop a list of successful practices employed by health centers, such as a patient tracking system or prescription drug subsidy program. 
They said they expected to generate this list three times a year and to make it available as a resource for project officers and OPR review teams to share with other health centers. The health center officials we interviewed whose centers had undergone the OPR performance review said that, in general, it provided helpful suggestions for improving services and operations. Officials from some health centers told us that they planned to incorporate the performance goals and their progress in achieving them into their future grant applications. Health center staff also described the reviews as accurate and thorough and said they appreciated the in-depth method of looking at performance in targeted areas. Officials from a few health centers also noted that their reviewers had expertise on the health centers program because the reviewers had previously been project officers for the program; one health center official said that this expertise was critical to the review process. In many cases, HRSA field office staff conduct performance reviews of health centers in states or communities with which they are already familiar. HRSA officials told us this experience has allowed the OPR reviewers to understand performance in the context of the local, state, and regional environment, such as the effect state Medicaid funding and policy changes might have on the number of people receiving health center services. While the OPR review evaluates the performance of individual health centers, it generally does not provide standardized performance information for the Consolidated Health Centers program as a whole, and HRSA is using other tools to collect information that could help measure overall program performance. In 2002, HRSA began collecting data on health centers’ services and patient populations through its Sentinel Centers Network—a network of health centers designed to be geographically and sociodemographically representative. 
As of February 2005, 67 health centers, with more than 1 million patients, were participating in the network. Participating health centers report patient-, encounter-, and practitioner-level data. The network is intended to supplement HRSA’s other data sources, such as the Community Health Center User and Visit Survey, which is conducted only every 5 to 7 years, and the UDS, which generally provides grantee-level data. HRSA also collects information that could help it measure overall program performance through its Health Disparities Collaboratives, which the agency views as a tool for improving the quality of care. Participating health centers use a model for patient care that includes evidence-based practice guidelines. The model also includes a database in which the health centers collect standardized patient-level health outcome data that are used to track progress and are shared with all health centers in the collaborative. HRSA plans to expand the collaborative model from a focus on specific diseases to a focus on primary care in general. Through 2004, 497 health centers had implemented the collaborative model for at least one disease. An additional 150 centers began the collaborative process in February 2005. In the future, HRSA officials would like to extend the model to all health centers in the Consolidated Health Centers program. HRSA has a contract with Johns Hopkins University for evaluating data from the Sentinel Centers Network and other health center data, such as UDS data. According to HRSA officials, the purpose of this contract is to provide timely, short-term statistical analyses and longer-term evaluation studies using databases that contain information on health centers. One planned study will examine preventive services provided by health centers, and several will focus on the role of health centers in reducing racial/ethnic and socioeconomic disparities in health outcomes for health center users. 
Since our previous report on the health centers program in March 2000, HRSA has taken steps to improve the UDS data collection and reporting process by trying to ensure that all Consolidated Health Centers program grantees report to the system and that the information they report is complete and accurate. HRSA’s efforts resulted in near-universal reporting—99.8 percent—by grantees for 2003. Grantees must submit UDS data for the preceding calendar year by February 15, and HRSA contacts those that miss this deadline. HRSA officials told us that after they made several efforts to try to obtain UDS data, only 2 of the 892 grantees required to report in 2003 did not submit data. To minimize errors in the data set, HRSA implements data quality assurance procedures in the UDS data collection process. Specifically, HRSA has programmed 474 edit checks into the software that grantees use to report UDS data. These edit checks detect mathematical and logical errors and are triggered while grantees are entering or verifying data. Mathematical edit checks ensure that rows and columns sum to the total submitted by the grantee, and logical edit checks ensure consistency within and across tables. For example, one logical edit check ensures that the total number of patients reported by age and sex equals the total number of patients reported by race/ethnicity. The grantee is prompted to address inaccuracies or inconsistencies identified by the edit checks before submitting the data to HRSA. When HRSA receives grantees’ UDS submissions, its contractor conducts additional edit checks. The contractor confirms that grantees’ submissions are substantially complete, which includes ensuring that tables are not blank, and forwards satisfactory submissions to an editor. The editors review the mathematical and logical checks triggered by the software and the checks for completeness conducted by the contractor. 
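The mathematical and logical edit checks described above can be illustrated with a minimal sketch. The age/sex versus race/ethnicity consistency rule comes from the text; the field names and patient counts below are hypothetical, not the actual UDS table layout:

```python
# Two illustrative UDS-style edit checks (field names are hypothetical):
#  - mathematical: detail rows must sum to the reported total
#  - logical: patient counts must agree across tables

def math_check(rows: list, reported_total: float) -> bool:
    """Mathematical edit check: row values sum to the submitted total."""
    return abs(sum(rows) - reported_total) < 1e-9

def logical_check(patients_by_age_sex: dict, patients_by_race: dict) -> bool:
    """Logical edit check: total patients by age/sex equals total by race/ethnicity."""
    return sum(patients_by_age_sex.values()) == sum(patients_by_race.values())

submission = {
    "patients_by_age_sex": {"female_0_17": 400, "female_18_plus": 900,
                            "male_0_17": 350, "male_18_plus": 750},
    "patients_by_race": {"white": 1100, "black": 700, "other": 600},
    "total_patients": 2400,
}
# Both checks pass for this internally consistent submission.
ok = (math_check(list(submission["patients_by_age_sex"].values()),
                 submission["total_patients"])
      and logical_check(submission["patients_by_age_sex"],
                        submission["patients_by_race"]))
```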
The editors also conduct 304 additional edit checks, which include comparisons to data submitted in the previous year and comparisons to industry norms. When they find an aberrant data element, editors contact grantees to determine if there is an error in the data or if there is a reasonable explanation. If there is an error, the editor and grantee agree on a process and timeline for the grantee to submit corrected data, and the grantee’s UDS data are revised. HRSA officials told us that editors were experienced with UDS, the Consolidated Health Centers program, and data editing. The editors have also attended training to ensure consistency across editors and to learn about new edit checks. In addition, editors are assigned to grantees in a single state or region to facilitate their understanding of unique regional issues that could affect UDS data, such as managed care participation. We found the UDS data for the selected data elements we evaluated to be generally accurate. For the mathematical and logical edit checks of 25 data elements we conducted, we found very few errors, and each error was due to missing data. In addition, we found no discrepancies in our replication of five analyses in HRSA’s 2003 National Rollup Report. To improve the accuracy of UDS data on the number and location of health center delivery sites, for 2004, HRSA revised the instructions to grantees for identifying their delivery sites. The new instructions specified that grantees should report delivery sites that provide services on a regularly scheduled basis and that are operated within the approved scope of the health center’s grant. HRSA also provided more detailed instructions to help grantees determine which delivery sites they should include in their UDS submission and which sites they should exclude. As of June 2005, HRSA had not validated the accuracy of the 2004 UDS data on delivery sites. 
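The editors' comparison of a current submission to prior-year data can be sketched as a simple threshold rule. The 30 percent tolerance below is an illustrative assumption, not HRSA's actual criterion, and, as the text notes, a flagged element triggers follow-up with the grantee rather than automatic rejection:

```python
def flag_aberrant(current: float, prior: float, tolerance: float = 0.30) -> bool:
    """Flag a data element whose change from the prior year exceeds the tolerance.

    The 30% default tolerance is an illustrative assumption; in practice a
    flagged element prompts the editor to contact the grantee, who may either
    correct the data or provide a reasonable explanation.
    """
    if prior == 0:
        return current != 0
    return abs(current - prior) / prior > tolerance

# A jump from 10,000 to 16,000 patients (60% increase) would be flagged
# for editor follow-up; a change from 10,000 to 11,500 (15%) would not.
```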
In addition to providing comprehensive primary and preventive health care services, most health centers receiving Consolidated Health Centers program grants provide specialty care on site or have formal arrangements for referring patients to outside specialists for care. According to the 2003 UDS data, 32 percent of health centers provided some specialty care on site. Specialists providing services on site include health center employees and volunteers. In addition, 83 percent of health centers reported that they had formal referral arrangements for some specialty care, which included agreements with community providers, such as local hospitals and networks of specialty care providers. Almost all of these health centers reported that they did not pay for some of the services for which they referred patients. In addition to formal referrals, health centers also informally refer patients to specialty care. Health center officials told us that many of their referrals for specialty care were arranged informally through discussions between health center staff and the specialty care provider, and specialists donated their time to provide services to the health center’s patients. Health center officials told us that obtaining specialty care for center patients, especially patients who are uninsured, could be difficult. Officials from most of the health centers in our review said that there was a shortage of certain specialists available to receive referrals from their health center. For example, one official told us that there were only two specialists providing gynecologic oncology services in the county, and both physicians were overbooked with paying patients. Health center officials told us that some specialists—such as orthopedists, neurologists, oncologists, cardiologists, ophthalmologists, and dermatologists—were difficult to find. 
This problem is exacerbated because, according to officials from most of the health centers in our review, some specialists are not willing to provide free care for uninsured patients. As a result, there are often long waiting lists for health center patients to see a specialty care provider who is willing to provide donated services. For example, one health center official told us that a patient might have to wait 9 months for an appointment with a dermatologist. One health center official characterized the center’s efforts to secure specialty care for patients as “begging.” Although these issues present a problem for health centers in both urban and rural areas, people living in rural communities could face additional challenges affecting their access to care, such as a need to travel a long distance to obtain care. HRSA’s Consolidated Health Centers program has played a pivotal role in providing access to health care for people who are uninsured or who face other barriers to receiving needed care. When HRSA makes decisions about awarding program funds to support additional health center delivery sites, it is faced with the challenge of identifying applicants that will serve communities with a demonstrated need for services and that will operate centers that can effectively meet those needs and remain financially viable. HRSA has indicated that it is not confident that its award process for new access point grants—which is intended to meet this challenge—has sufficiently targeted communities with the greatest need. HRSA’s recent effort to evaluate the assessment and relative weight of need in the award process could result in greater confidence that the agency is appropriately considering community need in distributing federal resources to increase access to health care. 
In light of the growing federal investment in health centers during the President’s Health Centers Initiative, it is important for HRSA to ensure that health centers are operating effectively and improving patient health outcomes. HRSA’s adoption of a performance monitoring process that includes emphasis on patient health outcomes and its efforts to collect health outcome data constitute an important step in improving the agency’s capacity to assess health centers and the health centers program. Continued attention to such efforts could improve HRSA’s ability to evaluate its success in improving the health of people in underserved communities. It is also important for HRSA to ensure that it is collecting and reporting accurate and complete information about the number and location of delivery sites where health centers are providing care. In providing new UDS guidance to grantees, HRSA has taken a step toward improving the quality of its information on delivery sites. The agency will need to carefully assess the effectiveness of its new guidance and, if necessary, take additional steps to ensure that delivery site information is accurate. HRSA officials and the Congress need accurate and complete information on delivery sites to assess whether the health centers program is achieving its goal of expanding access to health care for underserved populations and to make decisions about managing and funding the program. We recommend that, to provide federal policymakers and program managers with accurate and complete information on the Consolidated Health Centers program’s activities and progress toward its performance goals, the Administrator of HRSA ensure that the agency collects reliable information from grantees on the number and location of delivery sites funded by the program and accurately reports this information to the Congress. We provided a draft of this report to HRSA for comment. 
HRSA acknowledged that more accurate and timely delivery site data would allow for improved management of the Consolidated Health Centers program and said that the agency already has efforts under way to increase the accuracy of delivery site data. (HRSA’s comments are reprinted in app. V.) HRSA stated that the accuracy of delivery site data does not affect its ability to assess and report the progress of the President’s Health Centers Initiative because it believes this progress is more appropriately assessed by the number of new access point and expanded medical capacity grants HRSA has awarded. While HRSA may choose to assess the progress of the President’s Health Centers Initiative on this basis, it is not appropriate to equate the number of new access point grants awarded to health centers with the number of delivery sites where these centers provide care. HRSA did not indicate whether it plans to revise its method of counting delivery sites for its future reports to the Congress to include all delivery sites funded since the President’s Health Centers Initiative began. We continue to believe it is important that HRSA collect and report accurate data on the number and location of all delivery sites funded by the program so that agency officials and the Congress will have the information they need to monitor the program’s progress in increasing access to health care and to make decisions about managing and funding the program. HRSA also provided technical comments, and we revised our report to reflect the comments where appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7119. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. An additional contact and the names of other staff members who made contributions to this report are listed in appendix VI. To do our work, we obtained Consolidated Health Centers program documents, pertinent studies, and data from the Department of Health and Human Services’ (HHS) Health Resources and Services Administration (HRSA). We also conducted structured interviews of officials from 12 health centers in California, Illinois, Pennsylvania, and Texas. We selected these states because of their geographic diversity and because they were among the states with the highest number of health centers. Within each of the four states, we selected 3 health centers, including at least 1 urban and 1 rural center in each state. To ensure that we could obtain information about securing specialty care for uninsured patients, we selected only centers where at least 26 percent of the patients were uninsured in calendar year 2003; 75 percent of all health centers had a proportion of uninsured patients of at least 26 percent. For each state we selected, we also interviewed officials from the state’s primary care association. We also reviewed the relevant literature and program statutes and regulations and interviewed officials from the National Association of Community Health Centers and the National Association of Free Clinics. To acquire information on health center funding, we examined Consolidated Health Centers program funding data by grant award type— new access point, expanded medical capacity, service expansion, service area competition, and noncompeting continuation—for fiscal years 2002, 2003, and 2004. 
In addition, we reviewed information on grant applications HRSA received during those 3 years. To describe the geographic distribution of health centers, we analyzed Uniform Data System (UDS) data on health center location by zip code and state and other data HRSA provided on centers’ urban/rural status. We assessed the reliability of the data on health center funding and geographic distribution of health centers by interviewing agency officials knowledgeable about the data and the systems that produced them, and we determined that the data were sufficiently reliable for the purposes of this report. To determine HRSA’s process for assessing the need for services, we reviewed agency grant announcements, grant applications, and application guidance documents for the various grant types. We also reviewed the need-for-assistance worksheet and the need criteria in the new access point grant application guidance. We interviewed agency officials about the criteria used to assess the application sections on need for services and about HRSA’s ongoing consideration of revising the way need is assessed for new access point grants. In addition, we interviewed health center officials and officials from national and state associations that work with health centers about their experiences with the grant process. To examine HRSA’s monitoring of health center performance, we reviewed agency reports and protocols related to the new monitoring process conducted by the Office of Performance Review (OPR). We interviewed agency officials about the development of the new process and the roles played by different agency branches in monitoring health centers. To obtain information about health centers’ experiences with the new OPR performance review process, we conducted interviews with officials from health centers that had completed the process. 
One of the 12 original health centers we interviewed had completed the OPR performance review process, and we also interviewed officials at an additional 6 health centers that were among the first to complete the process. In addition, we reviewed documents provided by the health centers, including performance reports and action plans. We also reviewed reports and documents related to HRSA’s ongoing monitoring, including sample tools used by project officers to monitor their grantees and schedules of site visits conducted by the project officers. In addition, we reviewed documents related to HRSA’s collection of health center performance data, including agency guidelines for the Health Disparities Collaboratives and the application for health center participation in the Sentinel Centers Network. To assess HRSA’s improvements to UDS, we evaluated the completeness and quality of 2003 data—the most recent data available at the time we conducted our review. To evaluate overall completeness, we obtained the master list of 2003 grantees from HRSA and matched the grantees on this list with those in the 2003 UDS data file. To evaluate the completeness and quality of specific data elements in the 2003 UDS data file, we developed and evaluated edit checks of those data elements. We selected variables that were identified as problematic in our March 2000 report and others that were used in our current analysis. We also independently conducted selected analyses and compared our findings to corresponding tables in the 2003 National Rollup Report. For example, using 2003 UDS data, we duplicated the table on services offered and delivery method in the National Rollup Report and verified that it matched the data HRSA reported. We did not perform edit checks on the delivery site data grantees reported to UDS. 
We interviewed agency officials about how HRSA collected UDS data on health center delivery sites and determined that the data were not sufficiently reliable for purposes of our report. We conducted our work from August 2004 through June 2005 in accordance with generally accepted government auditing standards. HRSA’s process for awarding grants through the Consolidated Health Centers program involves several steps. HRSA provides initial grant information for new access point, expanded medical capacity, service expansion, and service area competition grants through the HRSA Preview, a notice available on HRSA’s Web site. The preview includes information on eligibility requirements; the estimated number of awards to be made; the estimated amount of each award; and the dates that application guidance will be available, applications will be due, and awards will be made. HRSA later issues grant application guidance, which includes the forms applicants need to submit (such as forms describing the composition of the applicant’s governing board, summarizing the funding request, and describing the type of services to be provided) and a detailed description of the application review criteria and process. The application guidance for new access point grants also encourages applicants to submit a letter of interest prior to submitting a grant application. In the letter of interest, the applicant describes its community’s need for services and proposes services that the health center would offer to address those needs. HRSA officials told us that in fiscal year 2004, nearly one-half of applicants for new access point grants submitted a letter of interest. HRSA provides feedback to organizations on whether the proposal is consistent with the objectives of the health center program and whether HRSA thinks the organization is ready to establish a new delivery site. HRSA also provides applicants with technical assistance resources during the development of grant applications. 
For example, through cooperative agreements with HRSA, state primary care associations and the National Association of Community Health Centers offer regional training sessions on various topics, including strategic planning, proposal writing, community assessment, and data collection. Potential applicants may also contact their state primary care association for individual technical assistance and application review. HRSA approves funding for a specific project period—up to 5 years for existing grantees and up to 3 years for new grantees. HRSA provides funds for the first year of the project; for subsequent years, health centers must obtain funding annually through a noncompeting continuation grant application process in which the grantee must demonstrate that it has made satisfactory progress in providing services. A grantee’s continued receipt of funds also depends on the availability of funding. Applications submitted to HRSA go through several stages of review. HRSA initially screens applications for eligibility based on specific criteria—the applicant must be a public or private nonprofit entity, the applicant must be applying for an appropriate grant (e.g., expanded medical capacity and service expansion grants are available only to existing grantees), and the application must include the correct documents and comply with page limitations and format requirements. Eligible applications go through a review process in which independent reviewers evaluate and score applications. The reviewers are selected by HRSA and have expertise in a specific field relevant to the health center program. HRSA provides reviewers with the same application guidance that it provides to applicants, and reviewers are to use their professional judgment in scoring applications. During the first stage of the review process, HRSA forwards eligible applications to three independent reviewers, who have 3 to 4 weeks to individually evaluate the applications. 
Applications for new access point grants include a need-for-assistance worksheet, which is evaluated by the reviewers. HRSA uses the need-for-assistance worksheet to measure barriers to obtaining care and to measure health disparity factors in the applicant’s proposed service area. Applicants can score up to 100 points on the worksheet, and only those applicants that receive a score of 70 or higher on the worksheet go on to have the technical merits of their application evaluated. The reviewers evaluate the merits of all qualified applications; they base their review on a standard set of criteria (see table 2) and give each application a preliminary score of up to 100 points. For example, reviewers of new access point grant applications evaluate the need for services through the criterion that describes the applicant’s service area/community and target population and assign a score from 0 to 10, which constitutes a maximum of 10 percent of the applicant’s maximum final score. Similarly, reviewers evaluate the applicant’s service delivery strategy and model and assign a score from 0 to 20, which constitutes a maximum of 20 percent of the maximum final score. During the second stage of the review process, reviewers present the strengths and weaknesses of the application to a panel of 10 to 15 reviewers. After discussing the application, each panel member scores it. For each application, HRSA averages the scores assigned by each reviewer in the panel. The volume of applications may result in HRSA’s using multiple review panels during a funding cycle. When this occurs, HRSA uses a statistical method to adjust for variation in scores among different review panels. The adjusted score becomes the final application score, and the final scores are used to develop a rank order list of applicants. HRSA bases its award decisions on the rank order of scores and other factors. 
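The report does not specify the statistical method HRSA uses to adjust for score variation among review panels. One common approach is to rescale each panel's scores to the pooled mean and standard deviation, so that a systematically harsh or lenient panel does not disadvantage its applicants. A minimal sketch with hypothetical panel scores (this standardization is an illustration, not HRSA's documented method):

```python
import statistics

def adjust_panel_scores(panel_scores):
    """Rescale each panel's scores to the pooled mean and standard
    deviation across all panels. This is one common way to adjust for
    panel-to-panel variation; HRSA's actual method is not described
    in the report."""
    all_scores = [s for panel in panel_scores.values() for s in panel]
    pooled_mean = statistics.mean(all_scores)
    pooled_sd = statistics.pstdev(all_scores)
    adjusted = {}
    for panel, scores in panel_scores.items():
        m = statistics.mean(scores)
        sd = statistics.pstdev(scores) or 1.0  # guard against zero spread
        adjusted[panel] = [pooled_mean + (s - m) / sd * pooled_sd
                           for s in scores]
    return adjusted

# Two hypothetical panels, the second scoring systematically lower:
panels = {"panel1": [80, 85, 90], "panel2": [70, 75, 80]}
print(adjust_panel_scores(panels))
```

After adjustment, the two hypothetical panels map to identical score distributions, which is the point of the correction: an applicant's final rank no longer depends on which panel happened to review it.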
Two types of factors—the funding preference and awarding factors—can affect which applicants HRSA chooses for funding from the rank order list. The funding preference is given to applicants proposing to serve a sparsely populated rural area. To be considered for the preference, the applicant must demonstrate that the entire area proposed to be served by the delivery site has seven or fewer people per square mile. In addition to scoring an application, the review panel evaluates the requested funding amount and determines if an applicant should be considered for the funding preference. The funding preference does not affect the score, but may place an applicant in a more competitive position in relation to other applicants. For example, if the panel has determined that the applicant qualifies for the funding preference, it may receive a grant award over higher scoring applicants that did not qualify for the preference. In fiscal year 2004, of the five applicants that received a service expansion grant to provide new oral health services, three were determined to qualify for the funding preference. These three applicants—with scores of 83, 86, and 90— were each awarded a grant over six applicants with application scores above 90. As with the funding preference factor, the law requires HRSA to consider awarding factors in selecting applicants to fund from the rank order list. HRSA must consider the urban/rural distribution of awards, the distribution of funds across types of health centers (community, homeless, migrant, and public housing), and a health center’s compliance with program requirements. In fiscal year 2004, HRSA gave priority to funding homeless and migrant health centers and, from the new access point applications the agency received that year, it funded only health centers requesting homeless or migrant health center funding. 
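The effect of the funding preference on the rank order list, as in the fiscal year 2004 oral health example above, can be sketched with hypothetical applicants. The sketch simplifies HRSA's process by funding preference-qualified applicants ahead of higher-scoring applicants without the preference, and it ignores the other awarding factors HRSA also weighs:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    score: float      # final panel-adjusted score (0-100)
    preference: bool  # qualifies for the sparsely-populated-rural preference

def select_awards(applicants, n_awards):
    """Simplified selection rule: rank by score, but place
    preference-qualified applicants ahead of non-preference applicants.
    This mirrors the fiscal year 2004 oral health example, in which
    preference applicants scoring 83, 86, and 90 were awarded grants
    over applicants scoring above 90."""
    ranked = sorted(applicants, key=lambda a: (not a.preference, -a.score))
    return [a.name for a in ranked[:n_awards]]

# Hypothetical applicant field for illustration:
field = [
    Applicant("A", 94, False), Applicant("B", 92, False),
    Applicant("C", 91, False), Applicant("D", 90, True),
    Applicant("E", 86, True),  Applicant("F", 83, True),
]
print(select_awards(field, 3))  # ['D', 'E', 'F']
```

With three awards available, the three preference-qualified applicants (D, E, F) are funded even though A, B, and C scored higher, matching the pattern described in the report.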
HRSA officials said the agency did this because the applications it had already approved in fiscal year 2003 for funding in fiscal year 2004, pending funding availability, did not include applications for homeless or migrant health center funding. In addition to the preference and awarding factors specified in the law, HRSA also considers the geographic distribution of awards in making funding decisions. HRSA sends a Notice of Grant Award to successful applicants. The notice includes a set of standard terms and conditions with which the grantee must comply to receive grant funds, such as allowable uses of federal funds and reporting requirements. In addition, the notice may include grantee- specific conditions of award. For example, common conditions placed on new access point awards relate to the health center’s being operational within 120 days, having the appropriate governing board composition, and hiring key staff. About 80 percent of new access point awards receive at least one condition, according to HRSA officials. HRSA notifies unsuccessful applicants of the outcome of the review process and provides applicants with their score and a summary of their application’s strengths and weaknesses. In addition to the person named above, key contributors to this report were Donna Almario, Janina Austin, Anne McDermott, Julie Thomas, Roseanne Price, and Daniel Ries.
Health centers in the federal Consolidated Health Centers program provide comprehensive primary health care services at one or more delivery sites, without regard to patients' ability to pay. In fiscal year 2002, the Health Resources and Services Administration (HRSA) began implementing the 5-year President's Health Centers Initiative. The initiative's goal is for the program to provide 1,200 grants in the neediest communities--630 grants for new delivery sites and 570 grants for expanded services at existing sites--by fiscal year 2006. GAO was asked to provide information on (1) funding of health centers and HRSA's process for assessing the need for services, (2) geographic distribution of health centers, and (3) HRSA's monitoring of health center performance. Competition for Consolidated Health Centers program funding increased over the first 3 years of the President's Health Centers Initiative, and HRSA's process for assessing communities' need for additional primary care sites is evolving. Program funding, which primarily supported continuing health center services, increased from fiscal year 2002 to fiscal year 2004. However, funding for new access point grants, which fund one or more new delivery sites, decreased by 53 percent during this period. At the same time, the number of applicants for these grants increased by 28 percent. As a result, the proportion of applicants receiving new access point grants declined from 52 percent in fiscal year 2002 to 20 percent in fiscal year 2004. In fiscal years 2002 through 2004, HRSA funded 334 new access point grants and 285 grants for expanded services at existing sites. While HRSA includes an assessment of communities' need for services in its process for awarding new access point grants, agency officials indicated that they were not confident that the process has sufficiently targeted communities with the greatest need. 
Therefore, the agency is considering changes to the way it assesses community need and the relative weight it gives need in the award process. The number of health centers receiving new access point grants varied widely by state--from 1 to 57--during fiscal years 2002 through 2004, but HRSA lacks reliable data on the number and location of health centers' delivery sites. Although HRSA uses data on the number of delivery sites to track the progress of the Consolidated Health Centers program, it is not confident that grantees are accurately identifying delivery sites funded by the program. Furthermore, in its reporting, HRSA counted each new access point grant funded in fiscal years 2002 through 2004 as a single delivery site, although some represent more than one site. HRSA needs to collect and report accurate and complete delivery site data to give the agency and the Congress data they need to make decisions about the program. HRSA has increased the role of performance measurement in its monitoring of health centers and has improved its collection of data that could help measure overall program performance. In 2004, the agency began to use a new process for on-site monitoring of health centers that focuses on each center's performance on measures tailored to its community and patient population. However, the new review generally does not provide standardized performance information that HRSA can use to evaluate the health center program as a whole. The agency is using other tools to collect health outcome data on patients that could help measure program performance. Continued attention to such efforts could improve the agency's ability to evaluate its success in improving the health of people in underserved communities. In addition to developing these data collection tools, HRSA has taken steps to improve the accuracy and completeness of its Uniform Data System, a data set that HRSA uses to monitor aspects of the health centers' performance. 
For example, HRSA provided grantees with more detailed instructions on how to identify their delivery sites.
PEPFAR’s original authorization in 2003 established the Office of the U.S. Global AIDS Coordinator (OGAC) at the Department of State (State) and gave OGAC primary responsibility for the oversight and coordination of all resources and international activities of the U.S. government to combat the HIV/AIDS pandemic. OGAC also allocates appropriated funds to PEPFAR implementing agencies, particularly CDC and USAID. CDC and USAID obligate the majority of PEPFAR funds for HIV treatment, care, and prevention activities through grants, cooperative agreements, and contracts with selected implementing partners, such as U.S.-based nongovernmental organizations (NGO) and partner-country governmental entities and NGOs. This includes the 33 countries and three regions that developed PEPFAR annual operational plans for fiscal year 2012. The 33 countries were Angola, Botswana, Burundi, Cambodia, Cameroon, China, Côte d’Ivoire, Democratic Republic of the Congo, Dominican Republic, Ethiopia, Ghana, Guyana, Haiti, India, Indonesia, Kenya, Lesotho, Malawi, Mozambique, Namibia, Nigeria, Russia, Rwanda, South Africa, South Sudan, Swaziland, Tanzania, Thailand, Uganda, Ukraine, Vietnam, Zambia, and Zimbabwe. The three regions were the Caribbean, Central America, and Central Asia. PEPFAR operates alongside other donors that also provide support to HIV programs. Moreover, UNAIDS data indicate that support for HIV programs in many countries is increasingly a mix of resources from the country government, Global Fund, PEPFAR, and other donors. PEPFAR strategy stresses the importance of having the partner-country government play the coordinating role. PEPFAR funding supports country programs that provide comprehensive HIV treatment—a broad continuum of treatment, care, and supportive services. This continuum begins with HIV testing and associated counseling, during which patients learn their HIV status and receive interventions to help them understand test results and link them to subsequent HIV treatment services. 
For individuals who are HIV positive, eligibility for ARV treatment is assessed by means of standard clinical or laboratory criteria—using CD4 count tests to measure the strength of a patient’s immune system. Patients eligible for treatment receive ARV drugs as well as regular clinical assessment and laboratory monitoring of the treatment’s effectiveness. Patients on ARV treatment also receive various care and support services such as treatment of opportunistic infections including TB co-infection, nutritional support, and programs to promote retention and adherence to treatment. Patients are expected to take ARV drugs on a continuing, lifelong basis once they have initiated treatment. CD4 (cluster of differentiation antigen 4) cells are a type of white blood cell that fights infection. The CD4 count test measures the number of CD4 cells in a sample of blood. Along with other tests, the CD4 count test helps determine the strength of the person’s immune system, indicates the stage of the HIV disease, guides treatment, and predicts how the disease may progress. Normal CD4 counts range from 500 to 1,000 cells/mm³. In 2010, WHO updated its guidelines to recommend ARV treatment for all people with CD4 counts of less than 350 cells/mm³, including adults who have never been on ARV treatment, pediatric patients, and pregnant and breastfeeding women. PEPFAR budgets its funds by program area. Treatment and Care include many of the clinical, laboratory, and support services that make up the comprehensive HIV treatment continuum as well as support services for orphaned and vulnerable children. Prevention includes interventions to prevent HIV infection, such as preventing mother-to-child transmission of HIV, sexual prevention, and medical male circumcision. The program area known as Other includes PEPFAR funds for efforts to strengthen health care systems, establish or enhance laboratory infrastructure, and provide strategic health information. 
For additional detail on the services budgeted in each PEPFAR program area and associated PEPFAR budget codes, see appendix II. Declining prices for ARV drugs have been a key source of per-patient cost savings, with most of these savings coming from the purchase of generic ARV drugs. Costs have also declined because programs have benefited from economies of scale and program maturity as they have expanded. These savings have contributed to substantial growth in treatment programs—both in the number of patients that PEPFAR directly supports on treatment, as well as the number of patients treated within the country programs that PEPFAR supports more broadly. OGAC has reported a substantial decline in PEPFAR per-patient treatment costs, from $1,053 in 2005 to $339 in 2011. Using available program information, PEPFAR calculated these costs by dividing specific elements of its budgets for HIV treatment in a given year by the number of reported patients for the subsequent year (see fig. 1). For this calculation, PEPFAR defined its HIV treatment budget as the total amount budgeted for ARV drugs (hereafter referred to as ARVs), adult treatment, pediatric treatment, and laboratory infrastructure. The number of patients currently on ARV treatment directly supported by PEPFAR is routinely reported by country teams at the end of each fiscal year. PEPFAR officials told us that they use HIV treatment budgets to approximate trends in PEPFAR’s per-patient treatment costs because they lack detailed information on the costs of comprehensive HIV treatment over time. They acknowledged that the calculation is a rough approximation that does not capture the full scope of PEPFAR funds spent to support the broad continuum of services under comprehensive HIV treatment. The calculation also does not capture funds from other funding sources. 
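PEPFAR's approximation described above reduces to a one-line calculation: the HIV treatment budget for one year divided by the number of patients reported on treatment the following year. A minimal sketch (the function mirrors that definition; the budget and patient figures are hypothetical, not PEPFAR's actual budgets):

```python
def per_patient_cost(arv, adult, pediatric, lab, patients_next_year):
    """Approximate per-patient treatment cost as PEPFAR defines it:
    one year's HIV treatment budget (ARV drugs + adult treatment
    + pediatric treatment + laboratory infrastructure) divided by the
    number of patients reported on treatment the subsequent year.
    A rough approximation; it omits other comprehensive-treatment
    spending and non-PEPFAR funding sources."""
    return (arv + adult + pediatric + lab) / patients_next_year

# Hypothetical figures for illustration:
cost = per_patient_cost(arv=120e6, adult=200e6, pediatric=30e6,
                        lab=50e6, patients_next_year=1_000_000)
print(f"${cost:.0f} per patient")  # $400 per patient
```

The same limitations PEPFAR officials acknowledged apply here: because the denominator counts only directly supported patients and the numerator captures only part of treatment spending, the result tracks a trend rather than the true unit cost.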
Detailed PEPFAR studies of the estimated costs of providing comprehensive HIV treatment services in eight countries also show declining per-patient treatment costs. The average of PEPFAR’s estimates includes costs not only to PEPFAR but also to other funding sources for PEPFAR-supported treatment programs. Using the data from the country treatment-cost studies, PEPFAR estimated that in fiscal year 2011 the per-patient cost of providing comprehensive HIV treatment services averaged $768, with PEPFAR’s share amounting to an estimated $335. In comparison, the estimated per-patient treatment cost in fiscal year 2010 was $812, with PEPFAR’s share amounting to an estimated $436 of the total. These estimates represent average costs because per-patient treatment costs vary by country, by treatment facility within a country, and by different types of patients, such as adult patients on ARV treatment versus pediatric patients on ARV treatment. Two key factors have contributed significantly to declining per-patient ARV drug costs in PEPFAR-supported treatment programs: (1) the increasing use of generic products and (2) decreasing prices for specific ARV drugs. From fiscal year 2005 to 2011, PEPFAR-supported treatment programs substantially increased their use of generic products, as shown by PEPFAR’s data on ARV purchases. In fiscal year 2005, the first year when PEPFAR purchased ARVs, generics represented about 15 percent of ARV purchases (by volume). By fiscal year 2008, generic ARV products had risen to 89 percent of purchases. By fiscal year 2011, 98 percent of all ARVs PEPFAR purchased were for generic products. Although PEPFAR’s overall increases in generic ARV purchases have been steady and substantial over the 7 years of data that we reviewed, the percentage of PEPFAR purchases for generic ARVs each year has varied across countries based on the availability of quality-assured generic products in each country. 
This is because PEPFAR purchases only quality-assured ARV products that comply with the laws—including patent and drug-registration laws—that apply in each partner country. For example, because of country-specific requirements in South Africa, in fiscal year 2008 only 25 percent of the ARVs that PEPFAR purchased in South Africa were generic products. In 2010 and 2011, PEPFAR worked with the South African government to update its ARV procurement processes, and in fiscal year 2011 almost 97 percent of PEPFAR-purchased ARVs in South Africa were generic. PEPFAR estimates that in fiscal years 2005 to 2011, it saved almost $934 million by buying generic versions of ARVs instead of equivalent branded products. PEPFAR estimated these savings by determining the amount it spent each year on quality-assured generic products that have an equivalent branded product and comparing that amount with what it would have spent for those generics at internationally negotiated prices for the equivalent branded products. (See table 1.) Purchasing generic ARVs has also allowed PEPFAR to broaden the selection of ARVs it purchases to include WHO-recommended products, particularly fixed-dose combination products that do not have an equivalent branded formulation. However, PEPFAR has not estimated savings associated with purchasing these fixed-dose combination products because there are no branded equivalents. An equivalent branded product is one that contains the same active ingredients and is available in the same form—tablet, capsule, liquid—and dose (for example, 100 mg and 300 mg). PEPFAR has also benefited from declining prices for specific ARV products, which have led to declining prices for the ARV treatment regimens recommended for use in resource-limited settings. WHO recommends that most patients starting ARV treatment for the first time receive one of several first-line regimens that combine three ARV drugs. 
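The savings estimate amounts to a price-difference calculation over annual purchases: for each generic with a branded equivalent, the volume purchased times the gap between the internationally negotiated branded price and the generic price paid. A minimal sketch with hypothetical products, prices, and volumes (the exclusion of fixed-dose combinations without branded equivalents follows the description above):

```python
def generic_savings(purchases):
    """Estimated savings from buying generic ARVs: for each generic
    product with an equivalent branded product, volume purchased times
    the difference between the branded price and the generic price paid.
    Products with no branded equivalent (e.g., some fixed-dose
    combinations) are excluded, as PEPFAR does."""
    total = 0.0
    for p in purchases:
        if p["branded_price"] is None:  # no branded equivalent
            continue
        total += p["volume"] * (p["branded_price"] - p["generic_price"])
    return total

# Hypothetical purchase records for illustration (prices per unit):
purchases = [
    {"product": "lamivudine 150 mg", "volume": 2_000_000,
     "generic_price": 0.03, "branded_price": 0.10},
    {"product": "efavirenz 600 mg",  "volume": 1_000_000,
     "generic_price": 0.08, "branded_price": 0.30},
    {"product": "TDF/3TC/EFV FDC",   "volume": 500_000,
     "generic_price": 0.25, "branded_price": None},  # excluded
]
print(f"${generic_savings(purchases):,.0f}")  # $360,000
```

Note that the fixed-dose combination contributes nothing to the total even though it was the cheapest way to deliver those three drugs, which is why PEPFAR's $934 million figure understates the full benefit of its generic purchasing.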
Based on updated 2010 WHO treatment guidelines, these first-line regimens are built from combinations of the following six ARVs: tenofovir disoproxil fumarate (tenofovir), zidovudine, lamivudine, emtricitabine, nevirapine, and efavirenz. WHO’s 2010 guidelines recommended that countries move away from including stavudine, a previously recommended ARV, in first-line regimens, because of toxicities associated with the drug. Instead, WHO recommended that countries use tenofovir or zidovudine. At the time, stavudine had been a preferred component of many countries’ first-line regimens and was relatively inexpensive. In contrast, tenofovir and zidovudine were relatively more expensive. While prices for tenofovir-based regimens remain higher than prices for the stavudine regimens they replace, tenofovir prices have declined to the point where they are, on average, lower than prices for zidovudine, the current first-line alternative. Figure 2 shows how average prices have declined for three comparable first-line treatment regimens. PEPFAR has analyzed program characteristics that affect per-patient costs as treatment has expanded in PEPFAR-supported programs. PEPFAR evaluated treatment costs using a cost estimation approach that includes detailed country treatment-cost studies as its primary information source. These studies collect data through patient records and interviews from a selected number of delivery sites. PEPFAR has conducted country treatment-cost studies in eight countries. Five studies were completed in 2009 (Botswana, Ethiopia, Nigeria, Uganda, and Vietnam); two studies were completed in 2011 (Mozambique and Tanzania); and one study was completed in 2012 (Kenya). In the country treatment-cost studies, ARV drug costs and non-ARV costs (e.g., equipment, personnel, and supplies) were identified and evaluated over a period of at least 1 year as the treatment program expanded. 
Each country treatment-cost study stated that per-patient treatment costs declined over its evaluation period, from a 6 percent decline in Kenya’s 2012 study to a 74 percent decline in Vietnam’s 2009 study. In addition, a November 2012 peer-reviewed journal article summarized findings from PEPFAR-supported studies of the costs of providing comprehensive HIV treatment services. This analysis used available data (collected from 54 delivery sites across six country treatment-cost studies) to analyze the factors that contribute to declining per-patient treatment costs. The 2012 summary analysis of the 54 delivery sites from six country treatment-cost studies was conducted by selecting possible factors (excluding ARV drugs) that might describe site characteristics and influence costs; the analysis used statistical modeling to identify the relationship between selected factors and costs. See N. A. Menzies, A. A. Berruti, and J. M. Blandford, “The Determinants of HIV Treatment Costs in Resource Limited Settings,” PLOS ONE, vol. 7, issue 11 (2012). The summary analysis concluded that program scale and maturity had the most significant relationship with per-patient costs. In particular, the analysis identified a relationship between program scale—the number of patients supported by the site in a defined period—and reduced per-patient treatment costs. This analysis estimated a 43 percent decline in per-patient costs if an additional 500 to 5,000 patients are put on ARV treatment, and a 28 percent decline in per-patient costs if an additional 5,000 to 10,000 patients are put on ARV treatment. Program scale was also identified in the eight country treatment-cost studies as a factor affecting per-patient treatment costs, as each country experienced large increases in the number of people put on ARV treatment after rapid expansion in clinic capacity and infrastructure in PEPFAR-supported treatment programs. Officials told us that these reductions with program scale are due to the efficiencies gained with larger patient cohorts. 
The 2012 summary analysis also identified a relationship between program maturity—the time elapsed since sites began expanding their treatment programs—and reduced per-patient treatment costs. The summary analysis determined that per-patient costs declined an estimated 41 percent from 0 to 12 months, and declined an estimated 25 percent from 12 to 24 months. The majority of country treatment-cost studies found that the first year following expansion saw the greatest reduction in costs, followed by minor cost reductions in later evaluation periods. In each country studied, the expansion of treatment programs included one-time investments, such as training and equipment costs, as well as ongoing costs, such as personnel and laboratory supplies, that were analyzed over time. After the large increase in funding at the beginning of the study period, one-time costs fell by the end of the study period in all eight countries, ranging from a 9 to 93 percent decline. Ongoing costs also fell from the beginning to the end of the evaluation period, ranging from a 16 to 59 percent decline. PEPFAR attributes the relationship between declining per-patient treatment costs and program maturity primarily to the reduction in one-time investments and in part to the fewer resources needed for ongoing investments as the programs expanded treatment. Officials also told us that as treatment programs mature, experience providing comprehensive HIV treatment can lead to program efficiencies—such as maximizing work flow in outpatient clinics—that reduce per-patient costs. As per-patient treatment costs have declined in PEPFAR-supported programs, savings have contributed to substantial increases in the number of people on ARV treatment, including both people directly supported by PEPFAR and those who receive treatment through country programs (see fig. 3). Since the end of fiscal year 2008, PEPFAR has directly supported ARV treatment for over 3.3 million additional people. 
Moreover, in fiscal year 2012 PEPFAR added more people to ARV treatment than in any previous year. As a result of the recent increases in the number of people on ARV treatment, PEPFAR reports that it has met the requirement in the 2008 Leadership Act to increase the number of patients on ARV treatment proportional to changes in appropriated funds and per-patient treatment costs. PEPFAR calculations indicate that, while funding for PEPFAR increased by about 10 percent and average per-patient treatment costs declined by almost 67 percent from fiscal year 2008 to 2011, the number of people under treatment due to direct PEPFAR support increased by 125 percent compared with the 2008 baseline. On the basis of these results, PEPFAR anticipates that it will continue to exceed the mandated treatment targets and is also making progress towards meeting another target—set by the President in December 2011—that calls for PEPFAR to provide direct support for ARV treatment for more than 6 million people by the end of fiscal year 2013. In addition to increasing the number of people it directly supports on ARV treatment, PEPFAR has supported partner countries in expanding their programs to provide ARV treatment to more people. Declining per-patient treatment costs have contributed to the countries’ abilities to expand their programs. Additionally, PEPFAR has increased its efforts to strengthen the capacity of partner-country programs to deliver treatment services. Some country governments are also contributing additional resources to treatment programs. As a result, national programs have also expanded rapidly. For example, in South Africa an estimated 1.7 million people were on ARV treatment at the end of 2011, almost 1 million more than were on ARV treatment at the end of 2008, according to UNAIDS data. Similarly, in Kenya almost 540,000 people were on ARV treatment at the end of 2011, an increase of almost 290,000 since 2008. 
PEPFAR expects that total costs for country programs will increase over the near term if country treatment programs expand to reach unmet needs and adhere to updated international guidelines. PEPFAR’s current cost information could help partner countries expand treatment because the information is useful for planning and identifies cost-cutting opportunities. However, PEPFAR’s cost estimation and expenditure analysis approaches have certain limitations—primarily relating to the timeliness and comprehensiveness of data—that do not allow PEPFAR to capture the full costs of treatment programs. Despite decreasing per-patient treatment costs, PEPFAR expects that country treatment programs will continue to expand to address large unmet needs, resulting in increases in total treatment costs. For example, in Uganda’s treatment cost study, although the estimated per-patient treatment cost in Uganda fell by 53 percent over the course of the evaluation, the total site-level costs grew as the program expanded to treat more people. As of 2011, Uganda had provided ARV treatment to about 290,000 people—half the number of those eligible for ARV treatment. In its 2012 country operational plan, Uganda set a goal of providing ARV treatment to 347,000 people with direct PEPFAR support. Given the magnitude of the unmet need for treatment in Uganda and other PEPFAR partner countries, higher treatment goals will continue to drive the expansion of treatment programs, and PEPFAR expects this will add to the amount of resources required. PEPFAR partner countries are also considering treatment program expansion on the basis of emerging scientific evidence. The new evidence demonstrates that ARV treatment can be highly effective not only for treating people with HIV but also for preventing HIV-positive people from transmitting the virus to others. 
In early 2012, WHO updated its guidance for certain elements of ARV treatment, advising countries to expand treatment programs to new groups, which will increase total treatment costs. The 2012 updates did not change WHO’s recommendations about when to initiate ARV treatment; however, the revised guidance described the long-term benefits of expanding eligibility for ARV treatment in several categories of HIV-positive people, including all pregnant and breastfeeding women and certain high-risk populations, in order to prevent HIV transmission. Some countries are beginning to expand eligibility for ARV treatment to some of these groups, particularly by initiating lifelong ARV treatment for all HIV-positive pregnant and breastfeeding women as part of concerted efforts to eliminate mother-to-child transmission of HIV. UNAIDS estimates that expanding programs to these groups would increase the number of people in low- and middle-income countries who are eligible for ARV treatment by over 50 percent, from 15 million to 23 million. Under still broader scenarios, such as a more widespread approach to HIV testing with immediate initiation of ARV treatment for those found to be HIV positive, these estimates would increase the number of people eligible for ARV treatment to as many as 25 million to 32 million people. PEPFAR and its partner countries use cost information to plan for expanding treatment programs. For example, some of PEPFAR’s country treatment-cost studies have projected total costs under different scenarios of expanded treatment. Four of the eight country treatment-cost studies we reviewed included scenarios that project total costs with different patterns and rates of treatment expansion over a 3- or 5-year period. For example, Nigeria’s 2009 country treatment-cost study projected costs under three scenarios: (1) keeping its treatment targets at 2008 levels, (2) adding 100,000 patients, and (3) adding more than 200,000 patients, which represented half of those estimated to need ARV treatment in 2008. 
PEPFAR uses two complementary approaches to analyze costs in the programs it supports—routine cost monitoring and in-depth facility-based cost studies—that countries can use to produce robust information on costs at local and national levels. Such information can be used to analyze program costs and help identify opportunities for greater efficiency. One approach provides comprehensive in-depth analysis of treatment costs, while the other approach will provide routine monitoring of spending data specific to PEPFAR. However, neither approach captures the full costs to country treatment programs of meeting increasing demand and resource needs in environments that are continually changing. PEPFAR’s cost estimation approach identifies the costs of providing comprehensive HIV treatment services in a partner country, examines the range of the costs across delivery sites and types of patients, and analyzes the costs over a period of at least 1 year. This approach—and the country treatment-cost studies conducted as its primary information source—provides valuable information on the costs of delivering comprehensive HIV treatment services. The country treatment-cost studies consist of in-depth analysis from patient record data and interviews from a selected number of delivery sites—outpatient clinics that provide comprehensive HIV treatment services. Each delivery site’s data is grouped by cost unit and segmented into 6-month periods in order to examine ARV drug and non-ARV drug costs over time. Cost estimation allows PEPFAR to assess costs to itself and to other funding sources—country governments, including Global Fund contributions, and other local and international organizations. However, there are three key limitations. 
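The grouping just described, site costs organized by cost unit and segmented into 6-month periods, can be sketched as follows. The cost-unit names, dollar amounts, and patient counts below are illustrative assumptions, not data from any PEPFAR study.

```python
from collections import defaultdict

# Hypothetical delivery-site records: (6-month period, cost unit, USD amount).
# Cost-unit names and all figures are illustrative assumptions.
records = [
    ("2010H1", "ARV drugs", 120_000),
    ("2010H1", "personnel", 60_000),
    ("2010H1", "lab supplies", 20_000),
    ("2010H2", "ARV drugs", 150_000),
    ("2010H2", "personnel", 65_000),
    ("2010H2", "lab supplies", 25_000),
]
patients_on_treatment = {"2010H1": 800, "2010H2": 1_200}

# Group costs by period, then divide by the period's patient count.
period_totals = defaultdict(int)
for period, _, amount in records:
    period_totals[period] += amount

per_patient = {p: period_totals[p] / patients_on_treatment[p] for p in period_totals}
for period in sorted(per_patient):
    print(f"{period}: total ${period_totals[period]:,}, per patient ${per_patient[period]:,.2f}")
```

In this hypothetical, total site costs rise between periods while the per-patient cost falls, which mirrors the scale effect the country studies report.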
First, the cost estimation approach has provided valuable information on the costs of delivering comprehensive HIV treatment services, but a lack of timely data is a significant limitation, particularly given the rapid pace of change in treatment programs. Data for five of the eight country treatment-cost studies were collected between April 2006 and March 2007—before the significant expansion of country treatment programs. PEPFAR officials noted that changes in treatment program costs can happen too fast to be captured, and because the data collection and analysis for country treatment-cost studies are time and resource intensive, the reported results from the studies lag behind conditions on the ground. PEPFAR collects retrospective data for a defined period of time—typically a few months—and then analyzes those data for treatment costs and results, a process that typically takes about 2 years. For example, Nigeria’s treatment-cost study involved data collection at nine delivery sites and supporting organizations from April to October 2006, but the final report on the results was completed in December 2009. Moreover, most country cost estimates included data collected in 6-month periods beginning at or around the start of PEPFAR support, thus providing cost information on the impact of treatment expansion with PEPFAR funds. Only one country treatment-cost study—Kenya’s 2012 study—covered a time period of data collection that could indicate how costs changed after PEPFAR’s increased support of expanded treatment programs. Second, PEPFAR’s cost estimation approach has been limited in the scope of information it has provided because of the small number and type of delivery sites selected. For seven of the eight country treatment-cost studies, patient record data were collected mostly from nine outpatient clinics per country that received direct or indirect PEPFAR support. 
In addition, PEPFAR reports that the selected sites vary in how representative they are of the respective country program. The costs of comprehensive HIV treatment services vary among sites because the services provided may differ widely. Additionally, services and costs at sites in one country may not represent the type of services provided under comprehensive HIV treatment available across other PEPFAR partner countries, which makes it difficult to identify best practices that can be applied to other programs to increase program efficiency. However, PEPFAR’s most recent country treatment-cost study (completed in Kenya in October 2012) included 29 delivery sites and was the first study to use random sampling to select sites. PEPFAR officials characterized the study as a representative sample of the country’s delivery sites. Separately, limited information is available for sites not supported by PEPFAR. Although entities outside PEPFAR have conducted studies to estimate treatment costs at different sites, PEPFAR reports that these studies have not assessed as many services (e.g., services for people living with HIV who are not yet on ARV treatment), and, as a result, there were not sufficient, comparable data available for a meaningful comparison of costs. Third, although PEPFAR’s cost estimation process enables it to analyze costs at the treatment facility level for PEPFAR and other funding sources, it does not include program management costs incurred above the facility level. In addition, PEPFAR has identified but not analyzed possible cost benefits associated with improved patient outcomes from standardization and extended monitoring intervals for stable patients, and continued decreases in ARV drug pricing because of better-tolerated regimens and price declines for second-line regimen formulations. Challenges in linking cost data to patient outcomes data were identified as a limitation by all of the country treatment-cost studies. 
Information on program management costs and outcomes will become increasingly important as countries take on additional responsibility for supporting treatment delivery and allocating resources across all program sites. To obtain more timely cost information, PEPFAR began piloting the use of expenditure analysis in 2009 to review country-specific PEPFAR spending across program activities, including treatment. PEPFAR’s expenditure analysis approach involves collecting data from PEPFAR implementing partners on amounts that each partner spent to provide direct or indirect treatment services, and links that spending to the numbers of patients receiving support for treatment through the partner. The expenditure analysis approach updates costs rapidly and includes information on PEPFAR costs above the facility level. Between 2009 and 2012, PEPFAR completed nine expenditure analysis pilots in eight countries. PEPFAR officials told us that, during fiscal year 2012, PEPFAR began to use its formal expenditure analysis approach in a different set of nine countries, and these analyses were completed and disseminated to countries in February 2013. PEPFAR uses expenditure analysis to identify spending outliers among its implementing partners. PEPFAR officials said they use that information to discuss with implementing partners the causes of their relatively high or low expenditures per patient and to identify potential efficiencies that other partners can implement. For example, in Mozambique—the first country to complete a second expenditure analysis—PEPFAR officials found that the variation of per-patient expenditures for non-ARV drug costs narrowed among five implementing partners between 2009 and 2011. PEPFAR attributed the smaller range of expenditures in part to its ability to use expenditure analysis data to stress efficient delivery of services. 
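The outlier comparison described above, linking each partner's spending to the patients it supports and flagging unusually high or low per-patient expenditures, can be sketched as below. The partner names, figures, and the 0.5x/1.5x-of-median thresholds are hypothetical assumptions for illustration, not PEPFAR's actual method or data.

```python
import statistics

# Hypothetical implementing-partner data: (treatment spending in USD,
# patients supported). Names, figures, and thresholds are illustrative.
partners = {
    "Partner A": (2_400_000, 10_000),
    "Partner B": (1_100_000, 2_000),
    "Partner C": (900_000, 4_500),
    "Partner D": (300_000, 3_000),
}

per_patient = {name: spend / n for name, (spend, n) in partners.items()}
median_cost = statistics.median(per_patient.values())

# Flag partners whose per-patient spend is far from the median, prompting
# follow-up discussion of causes and potential efficiencies.
outliers = {name for name, cost in per_patient.items()
            if cost > 1.5 * median_cost or cost < 0.5 * median_cost}
for name in sorted(per_patient):
    flag = "  <-- outlier" if name in outliers else ""
    print(f"{name}: ${per_patient[name]:,.2f} per patient{flag}")
```

The value of the exercise is comparative rather than absolute: a high per-patient figure is a starting point for discussion with the partner, not by itself evidence of inefficiency.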
Expenditure analysis does not provide a comprehensive picture of treatment costs, because it only includes spending by PEPFAR implementing partners. Although expenditure analysis enables PEPFAR to allocate resources more efficiently by comparing its implementing partners, it does not include spending from partner-country resources and other funding sources. Because PEPFAR cannot require reporting for non-PEPFAR resources, PEPFAR officials stated that using diplomatic efforts with country governments has been a priority to enable sharing of expenditure data. PEPFAR has reported that the vast majority of patients on PEPFAR-supported ARV treatment receive services in the public sector (36 of the 43 delivery sites among the five country treatment-cost studies completed by 2009 were government-run facilities). As a result, cost information across all treatment partners at the facility and country level is important for facilitating fully informed discussions among those partners about current and future resource allocation. (The features of PEPFAR’s cost estimation and expenditure analysis approaches for obtaining cost information are described in table 2.) Each of PEPFAR’s complementary approaches provides cost information that can help countries to plan for the efficient expansion of treatment programs, and PEPFAR has made some plans to strengthen each approach. As of February 2013, PEPFAR was preparing three additional country treatment-cost studies, including a follow-up study in Tanzania— PEPFAR’s first repetition of a study in a partner country. In addition, PEPFAR has shortened the time frame for examining costs, compared with the time frames for earlier studies. In the Kenya, Mozambique, and Tanzania treatment-cost studies that were completed in 2011 and 2012, the data collection period for all facilities was a maximum of 1 year (or two 6-month periods). 
PEPFAR officials told us that cost estimation is important for identifying cost drivers, especially because it includes non- PEPFAR costs and can be used to develop cost projections for various treatment scenarios. However, because the studies are in-depth analyses, requiring extensive field work, they will continue to be time and resource intensive. PEPFAR officials told us that conducting country treatment-cost studies more regularly has not been their highest priority; they noted that their efforts have been focused on implementing processes for routine expenditure analysis in PEPFAR partner countries. Although PEPFAR has taken steps to strengthen cost estimation, country treatment-cost studies have been conducted in only a small number of countries (eight partner countries) and delivery sites (usually about nine clinics per country). In addition, although PEPFAR-supported treatment programs are changing rapidly, for five of the eight studies that have been completed, data were collected between 2006 and 2007. PEPFAR currently does not have a plan for systematically conducting or repeating country treatment-cost studies, as appropriate, in partner countries. Without such a plan, PEPFAR may be missing opportunities to identify potential savings, which are critical for expanding HIV treatment programs to those in need. Using the expenditure analysis approach to obtain more rapid cost information to inform planning efforts by country teams addresses the timeliness limitations of the country treatment-cost studies, but does not capture non-PEPFAR costs. However, PEPFAR officials told us that non- PEPFAR spending data are difficult to obtain because the budget processes of each partner are often not aligned and country systems may not be structured to aggregate HIV-specific data. 
For example, in an expenditure analysis pilot in Guyana, officials said that aligning expenditure categories across all treatment partners (PEPFAR, Global Fund, and Guyana Ministry of Health) was a time-consuming process requiring negotiation with the country government on the level of alignment needed. PEPFAR reports that it has engaged with country governments and multilateral partners to improve the ability to capture full country-expenditure data. Further, it has begun collaborating with up to three countries to obtain expenditure data for the full country program during 2013. Although we recognize the difficulties involved in capturing non-PEPFAR expenditures, these spending data are important for decision makers as countries take on additional responsibility for allocating resources. PEPFAR officials told us that, by the end of fiscal year 2014, they plan to roll out formal expenditure analysis to all PEPFAR countries as part of annual reporting requirements; however, they said there are no current plans to routinely capture non-PEPFAR costs in those analyses. Without comprehensive data on expenditures, PEPFAR-supported programs will not be fully informed when making decisions about how to allocate resources. The 2008 Leadership Act requires that more than half of PEPFAR funds be used to support specific aspects of treatment and care for people living with HIV. Using an OGAC-developed budgetary formula, PEPFAR has met this treatment spending requirement. Since PEPFAR was reauthorized in 2008, PEPFAR country teams’ budgets allocated to capacity building have increased. However, funding for capacity building is excluded from OGAC’s formula. OGAC currently does not have a methodology to account for the extent to which these funds contribute to HIV treatment and care. As a result, it is not possible to determine the full amount of PEPFAR funds that are allocated to support the HIV treatment and care services identified in the spending requirement. 
To determine the amount of the PEPFAR budget that constitutes “treatment and care for people living with HIV,” OGAC sums the amounts allocated by all country teams each year to six of the seven budget codes within the Treatment and Care program areas (see app. II for more details regarding this calculation). PEPFAR budget data indicate that, using OGAC’s budgetary formula, the program met the spending requirement each year since reauthorization. Between fiscal years 2008 and 2012, the calculated budget for “treatment and care for people living with HIV” ranged between approximately 52 and 54 percent of total budgets for the Treatment, Care, and Prevention program areas. OGAC’s budgetary formula implementing the treatment spending requirement does not account for the increasing proportion of funds that PEPFAR country teams have allocated to country capacity building. The 2008 Leadership Act identifies health capacity building, in order to promote the transition toward greater sustainability through country ownership, as one of the purposes of the law. Consistent with this principle, PEPFAR country teams have increased investments to strengthen country health systems. These funds, which are typically allocated in the “Other” program area budget codes—health systems strengthening, strategic information, and laboratory infrastructure—are excluded from OGAC’s budgetary formula. However, from fiscal year 2008 to fiscal year 2012, country team budgets for the Other program area increased from $574 million to $710 million. Over the same time frame, OGAC-defined budgets for “treatment and care for people living with HIV” declined from about $1.8 billion to $1.4 billion. Total budgets for the Treatment, Care, and Prevention program areas were relatively constant from fiscal year 2008 to 2011 but declined to $2.6 billion in fiscal year 2012. (See fig. 4.) 
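OGAC's formula, as described, is a ratio check: the sum of six of the seven Treatment and Care budget codes, divided by the total budgets for the Treatment, Care, and Prevention program areas, must exceed 50 percent. The sketch below illustrates that check; the budget-code names and dollar amounts (in millions) are hypothetical, not actual PEPFAR budget data.

```python
# Sketch of OGAC's budgetary formula as described: sum six of the seven
# Treatment and Care budget codes, then divide by total budgets for the
# Treatment, Care, and Prevention program areas. Code names and amounts
# (in millions of dollars) are hypothetical.

treatment_care_codes = {
    "adult treatment": 700,
    "pediatric treatment": 150,
    "ARV drugs": 300,
    "adult care and support": 180,
    "pediatric care and support": 60,
    "TB/HIV": 90,
    "excluded code": 40,  # the single code OGAC leaves out of the numerator
}
prevention_total = 1_200  # total Prevention program-area budget

numerator = sum(amount for code, amount in treatment_care_codes.items()
                if code != "excluded code")
denominator = sum(treatment_care_codes.values()) + prevention_total

share = numerator / denominator
print(f"Treatment-and-care share: {share:.1%}")
```

Note what the denominator omits: the "Other" program-area budgets (health systems strengthening, strategic information, laboratory infrastructure) appear on neither side of the ratio, which is the accounting gap the report describes.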
By fiscal year 2012, budgets in the Other program area represented more than 21 percent of all program area budgets, up from about 15 percent in fiscal year 2008. OGAC officials told us that the current budgetary formula was developed based on OGAC’s interpretation of the intent of the treatment spending requirement. Calculating the proportion of funds allocated to specific activities as a percentage of total country budgets allocated to the Treatment, Care, and Prevention program areas—excluding budgets for the Other program area—is consistent with the methods OGAC used to track spending under the first PEPFAR authorization. OGAC officials said that this approach allows OGAC to isolate budgeted funds that support the direct services that PEPFAR delivers to patients at the facility level, consistent with PEPFAR’s early focus on directly delivering treatment services as part of a broad emergency response. As PEPFAR’s role in each country has evolved, the components of PEPFAR country team budgets that contribute to the HIV treatment and care services specified in the spending requirement have also evolved. However, some of those funds are not accounted for in the current budgetary formula. In particular, although budgets allocated to capacity building have increased, those funds are not accounted for in either component of OGAC’s budgetary formula: the budget for “treatment and care for people living with HIV” or the total budgets for the Treatment, Care, and Prevention program areas. Some capacity-building efforts, such as enhancements to drug supply chain systems that are budgeted under health systems strengthening, also contribute to HIV treatment and care services. Other health systems strengthening activities may have a less direct effect on those services. 
Moreover, OGAC officials said that some funds budgeted for prevention activities—particularly funds for prevention of mother-to-child transmission of HIV that cover ARV treatment and care services for HIV-positive pregnant and breastfeeding women—also contribute to HIV treatment and care services. Those contributions are likewise not accounted for in the calculated budget for “treatment and care for people living with HIV.” OGAC officials told us that they currently do not have an agreed methodology that would allow them to determine the extent to which funds for capacity building, or certain prevention activities, contribute directly to HIV treatment and care. As a result, it is currently not possible to determine accurately the proportion of total country budgets that support the services specified in the treatment spending requirement, if the contributions of PEPFAR country teams’ capacity-building and prevention budgets are taken into account. OGAC officials acknowledged that as PEPFAR continues to evolve, addressing the challenge of accounting for the contributions that funds from budgets for capacity building and prevention make to HIV treatment and care programs may require revisions to the current budgetary formula. However, the treatment spending requirement expires at the end of September 2013. PEPFAR has supported rapid expansion of HIV programs since 2008, providing direct support for more than half of the estimated 8 million people on ARV treatment in low- and middle-income countries. Data from the last 4 years indicate that the growth in treatment programs is accelerating. Substantial declines in the costs of providing treatment to each individual have contributed to recent accomplishments. Despite this progress, there is substantial unmet need. More than 15 million people are estimated to be eligible for ARV treatment based on current WHO guidelines. 
Moreover, 23 million would be eligible if programs expanded eligibility to include groups such as all pregnant and breastfeeding women and certain high-risk populations, consistent with recommendations in recent updates to WHO guidelines. In order for the country programs that PEPFAR supports to be able to expand to meet these needs, it will be important that they maximize how efficiently they use available resources. Given the scale of the unmet need, countries’ plans to expand HIV treatment may continue to drive up the total costs of providing treatment even if per-patient treatment costs further decline. Each country’s ability to expand treatment, then, hinges on thorough planning based on data-driven analyses of the cost of delivering the full scope of comprehensive HIV treatment services. This is a complex task as cost inputs often cut across PEPFAR budget codes, and costs are incurred by PEPFAR and other donors, partner-country governments, and multilateral partners. Although PEPFAR has used its cost estimation and expenditure analysis approaches to assist countries’ planning efforts and describe opportunities for savings, treatment costs have not yet been fully studied. In particular, existing data are not always timely, come from a limited number of sites in select countries, and do not always capture non-PEPFAR costs. Thus, PEPFAR may be missing opportunities to identify further savings. Given the rapid pace of change in PEPFAR-supported programs, effectively identifying potential savings requires more timely and comprehensive information on treatment costs than PEPFAR’s approaches currently provide. The 2008 Leadership Act has required PEPFAR to spend half of the funds appropriated to PEPFAR on specific HIV treatment and care services and has also set a major policy goal of promoting country ownership. Using OGAC’s budgetary formula, PEPFAR has met the current spending requirement. 
Over the same time frame, PEPFAR funds have been devoted increasingly to building country capacity. However, because OGAC cannot fully account for the contributions that its country capacity building activities have made to the HIV treatment and care services identified in the treatment spending requirement, it cannot provide complete information on how PEPFAR funds are being allocated to meet both the treatment spending requirement and the goal of promoting country ownership. The current treatment spending requirement, however, remains in effect only through September 30, 2013. To improve PEPFAR’s ability to help countries expand their HIV treatment programs to address unmet need, and do so through the efficient allocation of resources and effective program planning, the Secretary of State should direct PEPFAR to develop a plan to do the following: systematically expand the use of country treatment-cost studies to additional sites and partner countries, where it is cost-effective to do so, to help estimate costs and examine country-specific characteristics of comprehensive HIV treatment that may result in cost savings; and work with partner countries, where feasible, to broaden PEPFAR’s expenditure analysis to capture treatment costs across all partners that support each country program and develop more timely information on the full costs of comprehensive HIV treatment. We provided a draft of this report to State, USAID, and HHS’s CDC for comment. Responding jointly with CDC and USAID, State provided written comments, reproduced in appendix III. In its comments, State agreed with our findings and conclusions and concurred that high-quality information on costs and expenditures is vital for program management. 
State’s comments also emphasized that, because in-depth cost studies are time- and resource-intensive to conduct, those studies should be complemented with more timely data from expenditure analysis to help ensure that PEPFAR-supported programs have a portfolio of information that can be used to inform program decision making. In response to our first recommendation, State commented that PEPFAR is developing guidance on an optimal schedule for evaluating costs—at the country level and across the program—to balance in-depth analysis with more timely data from expenditure analyses. This approach is consistent with our recommendation that PEPFAR develop a plan to expand country treatment-cost studies where it is cost-effective to do so. In response to our second recommendation, State agreed that expenditure analysis would be more valuable if it included non-PEPFAR spending, but noted that PEPFAR cannot compel its partners to routinely report on their spending. However, State said that PEPFAR designed its expenditure analysis approach so that it can be adapted to capture spending from other partners. Moreover, State commented that in the last year PEPFAR has collaborated with multilateral partners in up to three countries to plan expenditure analyses that will capture non-PEPFAR spending. While we recognize that PEPFAR cannot require its partners to report on their spending, because HIV treatment costs are increasingly supported through a mix of funding from PEPFAR, other donors, partner-country governments, and multilateral partners such as the Global Fund, it is critical that PEPFAR continue exploring opportunities to work with partners, where feasible, to broaden the use of expenditure analysis. In addition, State and CDC each provided technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of State and the U.S. Global AIDS Coordinator. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov, or contact Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In this report, we examine (1) changes in per-patient treatment costs and their effect on program implementation, (2) how PEPFAR’s cost information supports countries’ efforts to expand treatment, and (3) how PEPFAR has met the treatment spending requirement. To describe how per-patient costs have changed and their effect on program implementation in treatment programs supported by the President’s Emergency Plan for AIDS Relief (PEPFAR), we focused our work on PEPFAR’s reported trends in cost information relating to fiscal years 2005 through 2011. We also reviewed agency documents on PEPFAR’s detailed cost estimation approach, results from eight country treatment-cost studies, and summary information that PEPFAR has published on available cost estimates and characteristics of HIV treatment programs. These included two PEPFAR reports summarizing estimated per-patient treatment costs for fiscal years 2010 and 2011, including how the estimates varied across partner countries. Separately, we analyzed data on PEPFAR’s antiretroviral (ARV) drug purchases in fiscal years 2005 through 2011 to identify trends in drug prices across PEPFAR-supported countries. We also reviewed PEPFAR’s estimates for savings attributable to purchasing generic ARV products. 
To assess the reliability of the ARV drug data used in our analysis, we interviewed PEPFAR officials and officials from a supply chain contractor that manages the bulk of PEPFAR’s ARV drug purchases and collects data annually on almost all ARV purchases by PEPFAR implementing partners. We also reviewed documentation on their data collection processes. Finally, we performed checks, such as examining the data for missing values and discussing the results of our analyses with officials responsible for the data. On the basis of these steps, we determined that the ARV drug data were sufficiently reliable for our purposes. In addition, we conducted field work in three PEPFAR partner countries—Kenya, South Africa, and Uganda—in June 2012 to obtain information on costing activities and challenges faced in implementing treatment programs. We selected these countries on the basis of program size, estimates of HIV disease burden, travel logistics, and other factors. We interviewed key implementing partners, technical experts in costing methodology, and in-country officials and reviewed documentation from the selected countries. Finally, we examined trends in the number of patients treated in PEPFAR-supported country treatment programs, including PEPFAR data reported by its country teams as well as global figures from the Joint United Nations Programme on HIV/AIDS (UNAIDS). On the basis of our reviews of documentation for these data as well as interviews with PEPFAR officials, we determined that the data were sufficiently reliable for our purposes. To describe how PEPFAR’s cost information supports countries’ efforts to expand treatment, we assessed the timeliness and completeness of information generated through PEPFAR’s cost estimation and expenditure analysis approaches. Specifically, we assessed PEPFAR’s cost estimation approach and eight country treatment-cost studies for their ability to provide key information for program planning and resource allocation. 
We assessed PEPFAR’s expenditure analysis approach by examining PEPFAR documentation on expenditure analysis and results to date. We also interviewed PEPFAR officials about the strengths and weaknesses of the cost estimation and expenditure analysis approaches, and any plans to revise these approaches. In addition, we reviewed PEPFAR country operational plans and country treatment-cost studies for information on expected cost trends and country goals for expanding treatment programs. Last, we reviewed World Health Organization (WHO) HIV treatment guidelines and their impact on the estimated number of people requiring treatment as country programs expand. See: Department of State, Office of the U.S. Global AIDS Coordinator, PEPFAR Blueprint: Creating an AIDS-free Generation (Washington, D.C.: November 2012); The U.S. President’s Emergency Plan for AIDS Relief: 5-year Strategy (Washington, D.C.: December 2009); PEPFAR Fiscal Year 2012 Country Operational Plan (COP) Guidance (Washington, D.C.: August 2011); PEPFAR Fiscal Year 2013 Country Operational Plan (COP) Guidance, Version 2.0 (Washington, D.C.: October 2012). To describe how PEPFAR has met the treatment spending requirement, we analyzed PEPFAR budget data for fiscal years 2008 through 2012. We interviewed PEPFAR budget officials about the budget data to ensure the completeness of the data and discuss any changes in budget methodology over time. We also interviewed OGAC officials regarding the budgetary formula that OGAC uses to implement the treatment spending requirement. PEPFAR support for country programs is categorized into four broad program areas—Treatment, Care, Prevention, and Other—each comprising multiple budget codes. The types of services captured within each program area and the associated budget codes are shown in table 3 below. 
Section 403 of the 2008 Leadership Act required that, in each fiscal year, more than half of the funds appropriated pursuant to section 401 of the act shall be expended for the following: (1) ARV treatment; (2) clinical monitoring of HIV-positive people not in need of ARV treatment; (3) care for associated opportunistic infections; (4) nutrition and food support for people living with HIV; and (5) other essential HIV-related medical care for people living with HIV. OGAC’s budgetary formula compares budgets for “treatment and care for people living with HIV” with the combined budgets for the Treatment, Care, and Prevention program areas. To determine the amount of the PEPFAR budget that constitutes “treatment and care for people living with HIV,” OGAC sums the amounts allocated by all country teams each year within six of the seven budget codes within the Treatment and Care program areas: adult treatment, adult care and support, ARV drugs, pediatric treatment, pediatric care and support, and TB/HIV. In addition to the contact named above, Jim Michels, Assistant Director; Chad Davenport; E. Jane Whipple; David Dayton; Fang He; Todd M. Anderson; Kay Halpern; Brian Hackney; Erika Navarro; Katy Forsyth; Grace Lui; and Etana Finkler made key contributions to this report. President’s Emergency Plan for AIDS Relief: Agencies Can Enhance Evaluation Quality, Planning, and Dissemination. GAO-12-673. Washington, D.C.: May 31, 2012. President’s Emergency Plan for AIDS Relief: Program Planning and Reporting. GAO-11-785. Washington, D.C.: July 29, 2011. Global Health: Trends in U.S. Spending for Global HIV/AIDS and Other Health Assistance in Fiscal Years 2001-2008. GAO-11-64. Washington, D.C.: October 8, 2010. President’s Emergency Plan for AIDS Relief: Efforts to Align Programs with Partner Countries’ HIV/AIDS Strategies and Promote Partner Country Ownership. GAO-10-836. Washington, D.C.: September 20, 2010. 
President’s Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 15, 2009. Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2, 2008. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007. Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President’s Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 4, 2006. Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. Washington, D.C.: June 10, 2005. Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided under U.S. Emergency Plan Is Limited. GAO-05-133. Washington, D.C.: January 11, 2005. Global Health: U.S. AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: June 12, 2004. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 7, 2003.
Through PEPFAR--first authorized in 2003--the United States has supported major advances in HIV/AIDS treatment, care, and prevention in more than 30 countries, including directly supporting treatment for almost 5.1 million people. However, millions more people still need treatment. Congress reauthorized PEPFAR in 2008--authorizing up to $48 billion over 5 years--making it a major policy goal to help partner countries develop independent, sustainable HIV programs. Congress also set spending and treatment targets. OGAC leads PEPFAR by allocating funding and providing guidance to implementing agencies. As requested, GAO reviewed PEPFAR-supported treatment programs. GAO examined (1) how per-patient treatment costs have changed and affected program implementation, (2) how PEPFAR cost information supports efforts to expand treatment, and (3) how PEPFAR has met a legislated treatment spending requirement. GAO reviewed cost analyses and reports and analyzed ARV drug data relating to fiscal years 2005 through 2011; conducted fieldwork in three countries selected on the basis of program size and other factors; and interviewed PEPFAR officials and implementing partners. The Department of State's (State) Office of the U.S. Global AIDS Coordinator (OGAC) has reported that per-patient treatment costs declined from about $1,053 in 2005 to about $339 in 2011. Purchasing generic antiretroviral (ARV) drugs, together with declining drug prices, has led to substantial savings. OGAC estimates that the President's Emergency Plan for AIDS Relief (PEPFAR) has saved $934 million since fiscal year 2005 by buying generic instead of branded products. PEPFAR's analyses of data from eight country treatment-cost studies indicate that per-patient costs also declined as programs realized economies of scale while taking on new patients. 
Furthermore, the analyses suggest that costs decreased as countries' treatment programs matured, particularly in the first year after programs expanded, and reduced one-time investments. Per-patient cost savings have facilitated substantial increases in the number of people on ARV treatment. In September 2012, an estimated 8 million people were on treatment in low- and middle-income countries, of which PEPFAR directly supported 5.1 million--an increase of 125 percent since 2008, the year the program was reauthorized. Despite substantial declines in per-patient treatment costs, it is important that countries continue to improve the efficiency of their programs so that they can expand to meet the needs of the estimated 23 million people eligible for ARV treatment under recent international guidelines. PEPFAR's cost estimation and expenditure analysis approaches provide complementary information that can help partner countries expand treatment and identify potential cost savings. However, as currently applied, these approaches do not capture the full costs of treatment. Cost estimation provides in-depth information, but data are limited because detailed cost studies have been done in only eight partner countries, at a small number of sites. Moreover, although treatment programs are changing rapidly, key data for most of the studies are no longer timely, since they were collected in 2006 and 2007. PEPFAR does not have a plan for systematically conducting or repeating cost studies in partner countries. Data from expenditure analyses, while more timely, are limited because they do not include non-PEPFAR costs. Without more timely and comprehensive information on treatment costs, PEPFAR may be missing opportunities to identify potential savings, which are critical for expanding HIV treatment programs to those in need. 
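The tension between falling unit costs and growing enrollment is simple arithmetic: total cost equals patients on treatment times per-patient cost. A short sketch makes this concrete; the current unit cost and patient counts come from figures reported above, while the future per-patient cost is a purely hypothetical further decline:

```python
# Illustrative only: unit costs and patient counts drawn from the report's
# figures; the future per-patient cost is an assumption, not a projection.

per_patient_now = 339          # reported per-patient treatment cost, 2011 (USD)
patients_now = 8_000_000       # estimated people on ARV treatment, Sept. 2012

per_patient_future = 300       # hypothetical further decline in unit cost
patients_future = 23_000_000   # eligible under expanded WHO criteria

total_now = patients_now * per_patient_now          # roughly $2.7 billion
total_future = patients_future * per_patient_future # roughly $6.9 billion

print(f"Unit cost falls {1 - per_patient_future / per_patient_now:.0%}, "
      f"yet total cost grows {total_future / total_now:.1f}x")
```

Even with a lower unit cost, expanding toward the 23 million people eligible under recent guidelines would more than double total treatment costs in this sketch, which is why identifying further per-patient savings matters.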
Using an OGAC-developed budgetary formula, PEPFAR has met the legislative requirement that more than half of its funds be spent each year to provide specific treatment and care services for people living with HIV. From fiscal year 2008 to fiscal year 2012, PEPFAR funds allocated to capacity building--to strengthen health systems, laboratory capacity, and strategic information systems--increased from 15 percent to 21 percent of PEPFAR's total funds to support country programs. However, the current formula does not include the capacity building funds. These funds--which support PEPFAR country teams' efforts to meet another legislative goal of promoting sustainable country-owned programs--and other PEPFAR activities also contribute to HIV treatment and care services. PEPFAR does not currently have a methodology to account for those contributions. Without such a methodology, it is not possible to determine the full amount of PEPFAR funds that are allocated to support the HIV treatment and care services identified in the spending requirement. However, the treatment spending requirement expires at the end of September 2013. GAO recommends that State develop a plan for (1) expanding the use of in-depth cost studies to additional countries and sites, where appropriate, and (2) broadening expenditure analysis to include non-PEPFAR costs, as feasible. State generally agreed with the report's recommendations.
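The budgetary check described in this report can be sketched as a simple ratio. The sketch below reconstructs the formula from the report's description (the six "treatment and care" budget codes relative to the Treatment, Care, and Prevention program-area budgets); all dollar figures are invented for illustration, and the exclusion of capacity-building budgets follows the report's account of why those contributions go uncounted:

```python
# Hedged sketch of the OGAC budgetary formula as described in the report.
# Numerator: the six "treatment and care" budget codes named in the report.
# Denominator: combined Treatment, Care, and Prevention program-area budgets.
# Capacity-building ("Other") budgets sit outside the formula entirely.

TREATMENT_AND_CARE_CODES = {
    "adult_treatment", "adult_care_and_support", "arv_drugs",
    "pediatric_treatment", "pediatric_care_and_support", "tb_hiv",
}

def meets_requirement(budgets):
    """budgets: budget code -> allocation. Returns (share, requirement met?)."""
    numerator = sum(v for k, v in budgets.items()
                    if k in TREATMENT_AND_CARE_CODES)
    share = numerator / sum(budgets.values())
    return share, share > 0.5

# Hypothetical country-team allocations (millions of USD); prevention
# appears in the denominator only.
budgets = {
    "adult_treatment": 120, "adult_care_and_support": 60, "arv_drugs": 90,
    "pediatric_treatment": 30, "pediatric_care_and_support": 20, "tb_hiv": 25,
    "prevention": 150,
}
share, met = meets_requirement(budgets)
print(f"Treatment and care share: {share:.1%}; requirement met: {met}")
```

Because capacity-building allocations never enter the calculation, any treatment and care services they support are invisible to this check, which is the accounting gap the report describes.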
Title IV-B of the Social Security Act, established in 1935, authorizes funds to states to provide a wide array of services to prevent the occurrence of abuse, neglect, and foster care placements. In 1993, the Congress created a new program as subpart 2 of Title IV-B (now known as Promoting Safe and Stable Families), which funds similar types of services but is more prescriptive in how states can spend the funds. No federal eligibility criteria apply to the children and families receiving services funded by Title IV-B. The amount of subpart 1 funds a state receives is based on its population under the age of 21 and the state’s per capita income, while subpart 2 funding is determined by the percentage of children in a state who receive food stamps. In fiscal year 2003, the Congress appropriated $292 million for subpart 1 and $405 million for subpart 2. These federal funds cover 75 percent of states’ total Title IV-B expenditures because states must provide an additional 25 percent using nonfederal dollars. Title IV-B funding is relatively small compared with the other federal and state funds used for child welfare services. According to the most recent data available, states spent an estimated $10.1 billion in state and local funds for child welfare services in state fiscal year 2000, while federal Title IV-E expenditures in federal fiscal year 2000 were $5.3 billion. In comparison, Title IV-B appropriations in federal fiscal year 2000 were $587 million. Title IV-E provides an open-ended individual entitlement for foster care maintenance payments to cover a portion of the food, housing, and incidental expenses for all foster children whose parents meet certain federal eligibility criteria. Title IV-E also provides payments to adoptive parents of eligible foster children with special needs. 
States may choose to use Title IV-B funds to provide foster care maintenance or adoption assistance payments for children without regard to their eligibility for these payments under Title IV-E. The Administration for Children and Families within HHS is responsible for the administration and oversight of federal funding to states for child welfare services under Titles IV-B and IV-E. HHS headquarters staff are responsible for developing appropriate policies and procedures for states to follow in terms of obtaining and using federal child welfare funds, while staff in HHS’s 10 regional offices are responsible for providing direct oversight of state child welfare systems. In 2000, HHS established a new federal review system to monitor state compliance with federal child welfare laws. One component of this system is the CFSR, which assesses state performance in achieving safety and permanency for children, along with well-being for children and families. The CFSR process includes a self-assessment by the state, an analysis of state performance in meeting national standards established by HHS, and an on-site review by a joint team of federal and state officials. Based on a review of statewide data, interviews with community stakeholders and some families engaged in services, and a review of a sample of cases, HHS determines whether a state achieved substantial conformity with (1) outcomes related to safety, permanency, and well-being, such as keeping children protected from abuse and neglect and achieving permanent and stable living situations for children and (2) key systemic factors, such as having an adequate case review system and an adequate array of services. States are required to develop program improvement plans to address all areas of nonconformity. Subpart 1 provides grants to states for child welfare services, which are broadly defined. 
Subpart 1 funds are intended for services that are directed toward the accomplishment of the following purposes: protect and promote the welfare of all children; prevent or remedy problems that may result in the abuse or neglect of children; prevent the unnecessary separation of children from their families by helping families address problems that can lead to out-of-home placements; reunite children with their families; place children in appropriate adoptive homes when reunification is not possible; and ensure adequate care for children away from their homes in cases in which the child cannot be returned home or cannot be placed for adoption. When the Congress enacted the Adoption Assistance and Child Welfare Act of 1980, it established a dollar cap on the amount of subpart 1 funds that states could use for certain services and created Title IV-E of the Social Security Act. This legislation limited the total subpart 1 funds states could use for three categories of services: foster care maintenance payments, adoption assistance payments, and child care related to a parent’s employment or training. While appropriations for subpart 1 increased from $56.5 million in 1979 to $163.6 million in 1981, the law requires that the total of subpart 1 funds used for foster care maintenance, adoption assistance, and child care payments cannot exceed a state’s total 1979 subpart 1 expenditures for all types of services. The intent of this restriction, according to a congressional document, was to encourage states to devote increases in subpart 1 funding as much as possible to supportive services that could prevent the need for out-of-home placements. However, this restriction applies only to the federal portion of subpart 1 expenditures, as the law notes that states may use any or all of their state matching funds for foster care maintenance payments. 
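The 1979 cap described above amounts to a simple ceiling test on the federal share of subpart 1 spending. A minimal sketch, with all dollar amounts hypothetical:

```python
# The cap applies only to *federal* subpart 1 spending on three categories;
# state matching funds are not restricted. All figures below are invented.

CAPPED_CATEGORIES = ("foster_care_maintenance", "adoption_assistance",
                     "child_care")

def within_1979_cap(federal_spending, total_1979_expenditures):
    """federal_spending: category -> federal subpart 1 dollars spent."""
    capped_total = sum(federal_spending.get(c, 0.0) for c in CAPPED_CATEGORIES)
    return capped_total <= total_1979_expenditures

federal_spending = {
    "foster_care_maintenance": 1_200_000,
    "adoption_assistance": 300_000,
    "child_care": 100_000,
    "preventive_services": 900_000,  # not subject to the cap
}
# $1.6 million in capped spending against a $2.0 million 1979 baseline:
print(within_1979_cap(federal_spending, total_1979_expenditures=2_000_000))
```

The design intent surfaces directly in the sketch: spending on preventive services never counts against the ceiling, so growth in subpart 1 appropriations is steered toward services that keep children out of foster care.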
In 1993, the Congress established the family preservation and family support program under Title IV-B subpart 2, authorizing grants to states to provide two categories of services: family preservation and community-based family support services. The Adoption and Safe Families Act of 1997 reauthorized the program, renaming it Promoting Safe and Stable Families and adding two new service categories: adoption promotion and support services and time-limited family reunification services. Through fiscal year 2006, the Congress has authorized $305 million in mandatory funding for subpart 2 and up to $200 million annually in additional discretionary funding. In fiscal year 2002, the Congress appropriated $70 million in discretionary funding for the program. The definitions of the four subpart 2 service categories are: Family preservation services: Services designed to help families at risk or in crisis, including services to (1) help reunify children with their families when safe and appropriate; (2) place children in permanent homes through adoption, guardianship, or some other permanent living arrangement; (3) help children at risk of foster care placement remain safely with their families; (4) provide follow-up assistance to families when a child has been returned after a foster care placement; (5) provide temporary respite care; and (6) improve parenting skills. Family support services: Community-based services to promote the safety and well-being of children and families designed to increase the strength and stability of families, to increase parental competence, to provide children a safe and supportive family environment, to strengthen parental relationships, and to enhance child development. Examples of such services include parenting skills training and home visiting programs for first-time parents of newborns. 
Time-limited family reunification services: Services provided to a child placed in foster care and to the parents of the child in order to facilitate the safe reunification of the child within 15 months of placement. These services include counseling, substance abuse treatment services, mental health services, and assistance to address domestic violence. Adoption promotion and support services: Services designed to encourage more adoptions of children in foster care when adoption is in the best interest of the child, including services to expedite the adoption process and support adoptive families. These services are similar to those allowed under subpart 1, although the range of services allowed under subpart 2 is more limited in some cases. For example, time-limited family reunification services can only be provided during a child’s first 15 months in foster care, while no such restriction is placed on the use of subpart 1 funds. In addition, states must spend a “significant portion” of their subpart 2 funds on each of the four service categories. HHS program instructions require states to spend at least 20 percent of their subpart 2 funds on each of the four service categories, unless a state has a strong rationale for some other spending pattern. By statute, states can spend no more than 10 percent of subpart 2 funds on administrative costs. A congressional document notes that states already had the flexibility to use subpart 1 funds for family support and family preservation services, but that few states used a significant share of these funds for these services. In creating subpart 2, the Congress did not revise any components of subpart 1. To receive Title IV-B funds, states are required to submit a 5-year child and family services plan to HHS. These plans have a number of specific reporting and procedural requirements. 
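The subpart 2 spending rules described above lend themselves to a simple allocation check. A hedged sketch follows; the shares are hypothetical, the 20-percent floor reflects the HHS program instructions (which allow deviations given a strong rationale), and the 10-percent administrative cap is the statutory limit:

```python
# Check a hypothetical state's subpart 2 allocation against the rules
# described above. Shares are fractions of total subpart 2 funds.

SERVICE_CATEGORIES = ("family_preservation", "family_support",
                      "family_reunification", "adoption_promotion")

def allocation_issues(shares):
    """Return deviations from the 20% category floor and 10% admin cap."""
    issues = [f"{c} below 20% floor" for c in SERVICE_CATEGORIES
              if shares.get(c, 0.0) < 0.20]
    if shares.get("administration", 0.0) > 0.10:
        issues.append("administrative costs exceed 10% statutory cap")
    return issues

shares = {"family_preservation": 0.25, "family_support": 0.30,
          "family_reunification": 0.15, "adoption_promotion": 0.22,
          "administration": 0.08}
print(allocation_issues(shares))  # ['family_reunification below 20% floor']
```

A flagged category, as in the example, would not necessarily be noncompliant; under the program instructions the state would need to justify the alternative spending pattern.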
While several of the requirements are similar for subparts 1 and 2, states are required to provide information about more aspects of their child welfare systems under subpart 1. Some of the major requirements are outlined in table 1. Federal child welfare funding has long been criticized for entitling states to reimbursement for foster care placements, while providing little funding for services to prevent such placements. HHS is currently developing a legislative proposal to give states more flexibility in using Title IV-E foster care funds for such preventive services. Under this new proposal, states could voluntarily choose to receive a fixed IV-E foster care allocation (based on historic expenditure rates) over a 5-year period, rather than receiving a per-child allocation. The fixed allocation would be an estimate of how much a state would have received in Title IV-E foster care maintenance funds. States could use this allocation for any services provided under Titles IV-B and IV-E, but would also have to fund any foster care maintenance payments and associated administrative costs from this fixed grant or use state funds. Since 1994, HHS has also been authorized to establish child welfare demonstrations that waive certain restrictions in Titles IV-B and IV-E and allow states a broader use of federal funds. States with an approved waiver must conduct a formal evaluation of the project’s effectiveness and must demonstrate the waiver’s cost neutrality—that is, a state cannot spend more in Title IV-B and IV-E funds than it would have without the waiver. Projects generally are to last no more than 5 years. HHS’s authority to approve these waivers is scheduled to expire at the end of fiscal year 2003. On a national level, our survey showed that the primary emphases of subparts 1 and 2 vary somewhat, but the range of services offered and the types of families served overlap significantly. 
According to our survey data for fiscal year 2002, states spent subpart 1 funds most frequently on the salaries of child welfare agency staff—primarily social work staff who can provide a variety of services, such as CPS investigations, recruiting foster parents, and referring families for needed services. The next three largest categories—administration and management expenses, CPS services, and foster care maintenance payments—accounted for about 43 percent of subpart 1 funding. Subpart 2 funds, in comparison, were used primarily to fund programs within its required service categories—family support, family preservation, family reunification, and adoption promotion and support services. Some social work staff whose salaries were funded with subpart 1 may provide similar services to families as the staff in these programs funded by subpart 2. On a national basis, however, no service category was funded solely by either subpart 1 or subpart 2. The programs funded by subpart 1 and 2 dollars served similar types of children and families. States used the majority of funds from each subpart to provide services to children at risk of abuse and neglect and their parents, as well as foster children and their parents. Officials in most HHS regional offices said they believe that the current structure of Title IV-B offers a good balance in allowing states some flexibility to address state needs and targeting some federal funds toward services to keep families together and prevent children from entering foster care. Although no category of service is funded solely by either subpart 1 or subpart 2 dollars, somewhat different spending patterns emerged with regard to the distribution of these funds among the categories. 
The states responding to our survey reported spending about 28 percent of subpart 1 funds in fiscal year 2002 on the salaries of child welfare agency staff, with an additional 43 percent used for administration and management expenses, foster care maintenance payments, and direct CPS services (see table 2). In comparison, states used over 80 percent of subpart 2 dollars to fund services in its mandated service categories—family support, family preservation, family reunification, and adoption promotion and support services. However, neither subpart 1 nor subpart 2 funded a unique category of service at the national level. For example, states typically reported using subpart 1 to fund CPS programs; however, 5 states used subpart 2 dollars to fund programs in this category. Subpart 1 dollars were most frequently used to fund staff salaries, with almost half of these funds designated for the salaries of CPS social workers. Another 20 percent of these funds were used for the salaries of other social workers (see fig. 1). During our site visit, Washington child welfare officials told us that they used over 50 percent of the state’s subpart 1 funds for salaries of staff providing direct services, including CPS social workers, social workers who provide ongoing case management and support services to families involved with the child welfare agency due to concerns about abuse or neglect, social work supervisors, and clerical support staff. While states also reported using subpart 2 funds for staff salaries, only 2 percent of subpart 2 dollars were used for this purpose. This comparison may underestimate the overlap in services funded by subparts 1 and 2, however, because much of the cost of programs funded by subpart 2 is likely attributable to staff salaries. 
Similarly, some social work staff whose salaries are funded by subpart 1 likely provide a variety of services, such as family preservation services, foster family recruitment, and referrals to needed services, some of which may be similar to services funded by subpart 2. (Percentages in fig. 1 do not total 100 due to rounding.) Administration and management comprised the second largest category of service, accounting for almost 17 percent of subpart 1 dollars. These services included rent and utilities for office space, travel expenses for agency staff, and staff training. Ohio, for example, used most of its subpart 1 dollars to fund state and county child welfare agency administrative expenses. In contrast, states spent less than 5 percent of their subpart 2 funds on administration and management. CPS represented the third largest category of services that states funded with subpart 1. States used about 16 percent of their subpart 1 funds to provide a variety of CPS services, such as telephone hotlines for the public to report instances of child abuse and neglect, emergency shelters for children who needed to be removed from their homes, and investigative services. During our site visit to California, for example, officials reported using about 40 percent of their subpart 1 dollars to fund staff salaries and operating expenses associated with a variety of shelter care services provided by counties, such as emergency shelters and foster homes. A child is placed in one of these shelters when no other placement option is immediately available—for example, when an investigation in the middle of the night determines that the child is at immediate risk of harm or when a child runs away from a foster home. In comparison to states’ use of subpart 1 funds, states reported using less than 1 percent of their subpart 2 dollars to fund programs within this service category.
States used nearly 11 percent of their subpart 1 funds to make recurring payments for the room and board of foster children who are not eligible for reimbursement through Title IV-E. For instance, New Jersey officials reported spending over 50 percent of the state’s subpart 1 funds on foster care maintenance payments. Seventeen states spent subpart 1 funds on foster care maintenance payments, while only 2 states reported using subpart 2 funds for this purpose, accounting for less than 1 percent of total subpart 2 expenditures. States reported using half of their subpart 2 dollars to fund family support services. These services included mentoring programs to help pregnant adolescents learn to be self-sufficient, financial assistance to low-income families to help with rent and utility payments, and parenting classes, child care, and support groups provided by a community-based resource center. One California county we visited used subpart 2 to fund a network of family support services with the goal of strengthening communities and keeping families from becoming involved with the child welfare system. Funds were granted to community groups to provide support and improve the healthy development of families for different populations, such as grandparent caregivers and adolescent mothers. Washington funded a network of public health nurses and social service agencies to provide support services to families that are the subject of a report of abuse or neglect—these services are provided in lieu of, or following, a formal investigation when the level of risk to the child is not considered high. Over one-third of the states responding to our survey also reported using subpart 1 funds to provide family support services similar to those funded by subpart 2, although family support services accounted for only 8 percent of subpart 1 expenditures.
For example, New Jersey transferred about 27 percent of its subpart 1 funds to local child welfare agencies to provide family support services, which included parent education classes, transportation, and mentoring for children. Family preservation services—designed to keep families together and prevent the need to place a child in foster care—represented the second largest service category funded by subpart 2. Washington used subpart 2 funds for its statewide family preservation program, which offers counseling and parent training services for up to 6 months to families with children who are at risk of being placed in foster care. In some cases, services provided in this category were similar to those in the family support category, but were intended to help keep families together. For example, Florida funded several neighborhood resource centers, which provide child care, parenting classes, adult education and training opportunities, mental health services, transportation services, and a food pantry. Although states primarily used subpart 2 dollars to provide these services, states also reported using approximately 2 percent of subpart 1 funds on family preservation services. In addition, states reported using about 11 percent of their subpart 2 funds for adoption support and preservation services. With these funds, states provided services such as counseling for children who are going to be adopted, family preservation services to adoptive families, and respite care for adoptive parents. Officials in Ohio reported using almost half of the state’s subpart 2 dollars for adoption services, including post-adoption services and services to recruit families for children in need of adoptive homes. Similarly, Florida funded adoption support services for children with special needs who are awaiting adoption, including counseling, behavior modification, tutoring, and other services to expedite the adoption process.
In contrast, less than 1 percent of subpart 1 dollars were used to provide adoption support and preservation services. Finally, states spent about 9 percent of their subpart 2 dollars on family reunification services. States funded a diverse array of family reunification programs, such as supervised visitation centers for parents to visit with their children and coordinators for alcohol and drug treatment services for families whose primary barrier to reunification is substance abuse. New Jersey funded a supervised visitation program that offers parenting education, counseling, transportation, and support groups and is located in a private home, allowing families to visit together in a homelike setting and engage in more natural interactions. One county we visited in California used subpart 2 funds for a shared family care program, in which the parent and child are placed together in a mentor home. The mentor provides a role model for good parenting behavior and provides hands-on parenting guidance to keep the family together, while a case manager ensures that family members receive services to address problems that could lead to the removal of the child, such as substance abuse or homelessness. Subpart 1 funds were used much less frequently for family reunification services; states reported using 1 percent of subpart 1 funds for these services. Significant overlap exists among the types of children and families served by these subparts, although certain populations are more closely associated with a particular subpart. Services funded by each subpart predominantly targeted children at risk of abuse or neglect and their parents, as well as children in foster care and their parents. States responding to our survey reported that services funded by subpart 1 in fiscal year 2002 most frequently served children living in foster care and/or their parents, while 9 percent of subpart 2 funds were used for services that targeted the same population (see table 3).
Similarly, while subpart 2 services most commonly targeted children at risk of abuse and neglect and/or their parents, about 17 percent of subpart 1 funds were also used for services aimed at this population. In addition, 9 percent of subpart 1 funds and 11 percent of subpart 2 funds were used to fund services intended for both of these types of families. The overlap in populations observed at the national level can also be seen when looking at the children and families targeted by individual states. We found that individual states frequently funded programs with each subpart that served the same types of children and families. For example, all 20 states that used subpart 1 dollars to fund services for children at risk of abuse or neglect and/or their parents also used subpart 2 dollars to fund a program serving this same population type (see table 4). Alaska, for instance, used subpart 1 dollars to fund a broad family support program, which provided services to children at risk of abuse and neglect and their parents. The state also used subpart 2 funds to provide another family support program, which provides similar services to the same types of children and families. In addition, 17 states funded one or more individual services with funds from both subparts, so that subparts 1 and 2 were serving the same children and families. In our second survey, we requested more detailed information about the populations served by programs funded by subparts 1 and 2, such as demographic and socioeconomic characteristics. However, few of the 17 states responding to the second survey were able to provide this kind of data. When asked about selected subpart 1 services, 10 of the 17 states were able to estimate the extent to which the same children and families receiving the identified service funded by subpart 1 also received services funded by subpart 2. 
Of the children and families receiving the identified subpart 1 service, four states reported that generally none or almost none of the recipients also received a service funded by subpart 2; three states reported that generally less than half of the recipients received subpart 2 services; one state reported that all or almost all recipients received subpart 2 services; and two states provided varying estimates for different subpart 1 services. While none of the states we visited were able to provide data about the extent to which the same children and families were receiving services funded by both subparts 1 and 2, state officials in each of these states recognized some overlap among the types of populations participating in these services. Officials in California and New Jersey told us that they use subpart 1 for services to families that are involved with the child welfare agency due to a report of abuse or neglect, while services funded by subpart 2 target a broader population, including families who are at risk of abusing their children. However, while some of the subpart 2 programs these officials described focused on this at-risk population, many of them were targeted to families who were already involved with the agency. Officials at a California child welfare agency told us that all of the services provided by subparts 1 and 2 are targeted toward the same high-risk communities in which many people are involved with the agency, and they considered it likely that families receiving subpart 1 services have also received subpart 2 services in the past or will at some time in the future. Washington officials noted that children and families involved with the child welfare agency may receive multiple services, some of which may be funded by subpart 1 and some of which may be funded by subpart 2.
Finally, although Ohio does not track clients served, one state official estimated that the types of children and families served by the programs funded by subparts 1 and 2 overlap by 100 percent. One New Jersey state official described the services funded by subparts 1 and 2 as part of a continuum of child welfare services, such that some population overlap is to be expected. In New Jersey, services funded by subpart 1 target families who are experiencing difficulties that may jeopardize the safety and well-being of their children. Programs funded by subpart 2 may also serve these families. However, they also target families who are not currently having difficulties, but who could become involved with the child welfare agency in the future. In addition, some subpart 2 programs serve adopted children, many of whom were previously involved with the child welfare agency and received services funded by subpart 1. None of the states we visited could provide data on the numbers of children and families who participated in services funded by subpart 1 and subpart 2 dollars. Given the overlap observed between the two subparts, we discussed the potential advantages and disadvantages of consolidation with HHS regional officials and, on our survey, asked states for their perspectives. Officials in almost all of HHS’s regional offices said that Title IV-B should maintain its current balance between allowing states some flexibility and targeting some resources toward prevention. Officials in all regional offices told us that they believe states need some flexibility to use Title IV-B funds to address state-specific child welfare needs, as is currently the case under subpart 1. One regional office noted that subpart 1 gives states the flexibility to address unexpected circumstances affecting the child welfare system—for example, by developing substance abuse treatment programs to address the needs of parents affected by the cocaine epidemic of the 1980s.
Similarly, officials in three states we visited felt strongly that the flexibility to direct the use of subpart 1 funds for state priorities was important and they would not want to lose this flexibility in any consolidated program. Our survey results also indicate that the flexibility to use subpart 1 to meet the needs of their child welfare systems is important to states. For example, when asked about their preference between subparts 1 and 2 with regard to different program components, 24 and 26 states, respectively, reported that they preferred subpart 1 when considering (1) spending restrictions on the percentage of funds that can be used for specific services and (2) allowable spending categories (see fig. 2). When asked about the advantages and disadvantages of Title IV-B’s current structure, several states cited the spending restrictions of subpart 2 as a disadvantage, while a couple of states mentioned the flexibility associated with subpart 1 as an advantage. At the same time, officials in 8 of HHS’s 10 regional offices also stressed the importance of subpart 2 to ensure that states use some funds on family support services and prevention activities to help preserve families and keep children from entering foster care. Several regional offices expressed concern that, in the absence of the minimum spending requirements outlined in subpart 2, states would neglect preventive services, while using Title IV-B funds for more urgent services, such as CPS or foster care. One state we visited expressed opposition to consolidation for this reason, arguing that keeping a separate subpart 2 was important to ensure that states fund some prevention services. 
State and county officials in this state noted that subpart 2 represents an important federal investment in prevention services and expressed concern that states would use all available funds to provide services to families already involved with the child welfare agency unless funds were specifically targeted for services to support families at risk of abusing or neglecting their children. In addition, on our survey, several states cited the prevention focus of subpart 2 as an advantage of Title IV-B’s current structure. Officials in 8 of HHS’s regional offices said that they believe that the current structure of Title IV-B offers a good balance between flexibility and targeting resources toward prevention. Officials in the other 2 regional offices told us that Title IV-B provides a good mix of flexibility and a focus on services considered to be federal priorities. One regional office noted that a consolidated Title IV-B program could be structured to offer this balance. For example, a consolidated program could require some minimum spending levels for the current subpart 2 categories, but also set aside some funds that states could use for a broader array of child welfare services. In addition, most of the regional offices did not believe that consolidation would lead to any significant administrative savings. For example, several regional offices explained that consolidating the subparts would not reduce HHS’s oversight responsibilities, while another noted that consolidation would have little impact on HHS regional or state offices, which are staffed and organized to manage multiple sources of funding. Another regional office noted that the planning and reporting requirements for the two subparts are already consolidated in the planning documents states submit to HHS. 
State and local child welfare officials in one state, along with officials at 2 HHS regional offices, commented that increasing the funds available for service provision was more critical than consolidating the two subparts. They believe that states need more federal funds to provide services to prevent foster care placements, such as an increase in funds available under Title IV-B or more flexibility to use Title IV-E funds to provide services, rather than primarily funding foster care maintenance payments, as Title IV-E currently does. Since 1994, states have been able to apply for demonstration waivers to use federal child welfare funds to test innovative foster care and adoption practices without regard to certain restrictions in Titles IV-B and IV-E. For example, four states are using demonstration waivers to create fixed Title IV-E budgets for counties within the state in which funds can be used more flexibly for prevention and community-based services not traditionally reimbursed by Title IV-E. However, HHS’s authority to approve such waivers is scheduled to expire at the end of fiscal year 2003. States may soon have another mechanism to use Title IV-E funds to provide preventive services through the child welfare option HHS is currently proposing. HHS’s oversight focuses primarily on states’ overall child welfare systems and outcomes, but the agency provides relatively little oversight specific to subpart 1. For example, HHS regional offices work with states to establish overall goals to improve the safety, permanency, and well-being of children and measure progress toward those goals. However, HHS has limited knowledge about how states use their subpart 1 funds. HHS does not collect data on subpart 1 expenditures and instead requires states to submit annual estimates about how they plan to use their subpart 1 funds in the upcoming year.
HHS regional offices reported that they review these estimates for relatively limited purposes, with several HHS officials noting that they do not review the spending plans for subpart 1 as closely as subpart 2 because subpart 1 has few restrictions as to how these funds can be used. We also found that HHS regional offices pay little attention to statutory limits on the use of subpart 1 funds for foster care maintenance and adoption assistance payments. As a result, HHS approved projected 2002 spending plans for 15 states that reported planned spending amounts that exceeded these spending limits. In response to our survey, 10 states reported actual 2002 subpart 1 expenditures that exceeded the spending limits by over $15 million in total. HHS focuses much of its programmatic oversight on the overall child welfare system in each state, rather than focusing specifically on subpart 1 or any other federal funding source. In discussing their oversight of subpart 1, several HHS officials at headquarters and in the regional offices emphasized the importance of reviewing the overall child welfare system and the outcomes achieved, rather than scrutinizing individual programs outside of that context. A major component of HHS’s subpart 1 oversight is having the regional offices actively work with states to develop appropriate goals for their child welfare systems and ensure that available funds, including subpart 1, are used to support those goals. To receive Title IV-B funding, HHS requires states to submit a Child and Family Services Plan, which covers a 5-year period and describes the state’s goals and objectives toward improving outcomes related to the safety, permanency, and well-being of children and families. This 5-year plan includes a description of services and programs the state will pursue to achieve these goals. 
In addition to the 5-year plan, HHS requires states to submit an Annual Progress and Services Report (APSR) each year to discuss their progress in meeting the goals outlined in their plans and revise the goals as necessary. Regional HHS staff review this planning document to ensure that it meets all the technical requirements outlined in the annual program instructions issued by HHS. For example, states must certify that, in administering and conducting services under the 5-year plan, the safety of the children to be served shall be of paramount concern. In addition, some regional offices reported that they review the state’s objectives and goals to determine if they are reasonable, assess the progress the state has made in achieving these goals and objectives, and determine whether child welfare services are coordinated with the efforts of other agencies serving children. Some regional officials noted that states are still struggling to use these documents appropriately for planning purposes. These officials told us that instead of focusing on outcomes and collecting data to measure progress toward those outcomes, states frequently simply describe their current programs. In addition to reviewing planning documents, all of the regional offices consult regularly with states to discuss child welfare issues and provide technical assistance. For example, the regional office may provide guidance on how to comply with specific program regulations or how to develop a 5-year plan that will function as a strategic plan for the state’s child welfare agency. Two regional offices told us that they also conduct site visits to states as part of their oversight. One regional office reported visiting states in its region to gain a better understanding of each state’s child welfare services. This allows the regional office to share good ideas with other states and to ensure that states are working on areas the regional office has identified as in need of improvement.
Other regional offices reported that they would like to conduct site visits to states under their purview, but a lack of travel funds prevented them from doing so. The CFSR process is an additional tool HHS uses to ensure that states conform with federal child welfare requirements and to help states improve their child welfare services. Staff at one regional office described the CFSR as a thorough review of the services funded by different federal programs, such as Title IV-B. They consider the CFSR an important complement to a state’s planning documents—it gives them an opportunity to determine whether states are providing the services they report in their planning documents and whether those services are adequate and appropriate to meet the needs of the state’s children and families. CFSR results for the past 2 years indicate that states have not performed strongly in terms of assessing families to determine what services they need and providing those services. While 21 of the 32 states that underwent a CFSR in 2001 or 2002 were considered to have an appropriate array of services for families, HHS found that the accessibility of services was a particular weakness in that many services were either not available statewide or had long waiting lists or other barriers to accessibility. When HHS reviewed case files, however, it determined that 31 of these states needed improvement in terms of assessing family needs and providing services to meet those needs. When asked about HHS’s role in guiding states’ use of subpart 1 funds to address weaknesses identified by the CFSRs, an HHS official told us that the agency provides technical assistance to states to help them determine the most effective use of their resources. 
However, the official also pointed out that HHS gives states a lot of latitude to determine the most appropriate use of their subpart 1 funds and that the agency cannot become too involved in state budget decisions given the complexities of the budget processes for states. HHS has little information about states’ use of subpart 1 funds. Each year, HHS requires states to submit form CFS-101, which includes state estimates of the amount of subpart 1, subpart 2, and other federal funds the state plans to spend in the upcoming year on different categories of services (such as family support or CPS). Regional office staff’s descriptions of their reviews of these estimates indicate that the reviews serve relatively limited purposes. Officials in 4 of the regional offices told us that they generally use the CFS-101 data to ensure that states request the total amount of subpart 1 funds to which they are entitled and that they comply with the requirement to match 25 percent of subpart 1 funds with state funds. Most regional offices indicated that their reviews of the CFS-101s focus more on subpart 2 than subpart 1. For example, they reported that they review states’ planned subpart 2 spending more closely to ensure that states are meeting the requirement that they spend at least 20 percent of funds on each of the service categories and spend no more than 10 percent of funds for administrative purposes. Several HHS officials reported that they do not monitor the use of subpart 1 funds as closely as other federal child welfare funds due to the relatively small funding amount and the lack of detailed requirements about how the funds can be used. Moreover, the CFS-101 estimates may not provide reliable data as to how states are using subpart 1 funds. HHS officials explained that the CFS-101 data are estimates and that states’ actual expenditures may vary from these estimates as states address unforeseen circumstances.
The timing for submitting the CFS-101 also affects how well states can estimate their planned subpart 1 spending. HHS requires states to submit their initial CFS-101 for the upcoming fiscal year by June 30, which forces states to estimate their planned spending before the final spending amounts for Title IV-B and other federal funds have been appropriated. Some regional officials indicated that they did not know how well states’ CFS-101 estimates reflect their actual subpart 1 spending. We did not conduct a review of the reasonableness of the data states submit on their CFS-101s, but we did identify a few instances that suggested that the data are not always accurate. Two states with county-administered child welfare systems told us that they do not have reliable data to allow them to accurately estimate planned spending. A child welfare official in one of these states told us that its CFS-101 data represented its “best guess” as to how subpart 1 funds will be used, because the state distributes these funds to county child welfare agencies and does not collect any data on how the counties use these funds. The other state told us that its current CFS-101 data are most likely based on county data from several years ago and that counties may now be spending subpart 1 funds on different services. HHS does not require states to provide any additional data about their use of subpart 1 funds, such as their subpart 1 expenditures for specific services. As a result, several regional offices noted that they have no way of knowing how states actually spend their subpart 1 funds. An official from one regional office explained that the only way to determine how a state actually uses its Title IV-B funds is to review its financial accounts, which HHS does not do. 
Some regional officials suggested that it would be helpful to have actual expenditure data for both Title IV-B subparts, especially to determine if states were actually using at least 20 percent of their subpart 2 funds for each of the four required service categories. Three regional offices indicated that they have begun asking states to provide Title IV-B expenditure data. Given that HHS’s subpart 1 oversight focuses primarily on a state’s overall child welfare goals and outcomes, the regional offices pay little attention to the statutory limits on the use of federal subpart 1 funds for foster care maintenance and adoption assistance payments. Most HHS regional offices do not review the CFS-101s for compliance with the statutory limits. In addition, HHS’s annual program instruction, which details what information states must include in their CFS-101 submittals and serves as the basis for the regional offices’ review of subpart 1 spending, does not mention the subpart 1 limits. Only 1 of HHS’s 10 regional offices told us that it compares states’ planned subpart 1 spending reported on the CFS-101 with the actual dollar limit for each state to ensure that states observe the statutory limits. This office used a 1979 HHS program instruction listing each state’s subpart 1 allocation to determine the ceiling on foster care maintenance and adoption assistance payments. In contrast, 5 regional offices were unaware that any limits on the use of subpart 1 funds existed, although 1 of these offices indicated that it generally did not consider it appropriate for states to use subpart 1 funds for foster care maintenance payments because subpart 1 should be used to fund services for families. Nonetheless, this office approved a CFS-101 for 1 state that exceeded the statutory limits. Four other regional offices were aware that some limitations with regard to foster care maintenance and adoption assistance payments existed, but did not ensure that states complied with the limits.
These 4 regional offices provided several reasons why they did not monitor states’ planned spending for compliance with the subpart 1 limits. Two regional offices indicated either that HHS had provided no guidance on how the limits should be enforced or that no data were available to calculate subpart 1 limits for each state. The third regional office reported that it did not have the specific ceiling amounts for each state. However, officials in this office said they reviewed planned subpart 1 spending for foster care maintenance and adoption assistance payments on the CFS-101 to determine if they had increased from the previous year. If the amounts had not increased, the regional office assumed that someone had checked the amounts previously and that they were within the limits. This regional office approved CFS-101s for 2 states in the region that reported planned subpart 1 spending for foster care maintenance and adoption assistance payments in excess of the limits. The fourth regional office told us that, in the past, it had a list of the maximum spending limits for each state in its region and that it had previously checked states’ CFS-101s to ensure that planned spending did not exceed the limits. However, the regional office no longer conducts such reviews; regional officials said that they consider the limits to be meaningless because state funds spent on child welfare services greatly exceed subpart 1 funds. In other words, any attempt to enforce the limits would only lead to changes in how states accounted for their funds—if a state were spending $1 million in state funds on CPS investigations and $1 million in subpart 1 funds for foster care maintenance and adoption assistance payments, the state could simply switch state and subpart 1 funding so that state funds paid for the foster care maintenance and adoption assistance payments, while subpart 1 funding paid for CPS investigations.
This lack of review led HHS to approve CFS-101s for 15 states that reported fiscal year 2002 planned subpart 1 expenditures for foster care maintenance and adoption assistance payments that exceeded the statutory limits (see fig. 3). The dollar amounts by which the subpart 1 spending estimates surpassed the limits were small in some cases, but large in others. For example, Georgia reported that it planned to spend $1,497,000 of subpart 1 funds for these purposes in 2002, which would exceed its statutory limit by $1,558. At the other extreme, Florida’s CFS-101 indicated that it planned to spend over $9 million, which was more than $7 million over the maximum allowable spending of $1.9 million. In total, these 15 states submitted planned subpart 1 spending estimates for foster care maintenance and adoption assistance payments that would exceed the statutory limits by over $30 million. Moreover, 13 of these 15 states submitted fiscal year 2003 CFS-101s with planned subpart 1 spending above the statutory ceiling, which were approved by HHS. Several regional offices noted that they judge the appropriateness of subpart 1 spending on foster care maintenance and adoption assistance payments in the context of a state’s overall child welfare system. For example, these regional offices said that they are not concerned about a state planning to spend significant proportions of its subpart 1 funds on foster care maintenance and adoption assistance payments if they believed the state had a strong child welfare system with an appropriate array of services. Regional office staff said they would, however, ask a state to reconsider its funding strategy if the state were performing poorly. However, many of the states with approved CFS-101 subpart 1 estimates above the statutory ceilings did not achieve strong outcomes on their CFSR evaluations with regard to providing needed services and having an appropriate array of services.
HHS has conducted CFSRs on 12 of the 15 states with approved CFS-101s over the subpart 1 spending limits and determined that appropriately assessing family needs and providing services to address those needs was an area needing improvement in 11 of the 12 states. In addition, 6 of the 12 states were also determined to need improvement in terms of having an appropriate array of services to meet the needs of families in the state. We also compared our survey data on states’ fiscal year 2002 subpart 1 expenditures for foster care maintenance and adoption assistance payments with the statutory limits and found that 10 states reported spending subpart 1 funds on these payments that exceeded the legal limits (see fig. 4). As with their planned spending estimates, states’ subpart 1 actual spending for foster care maintenance and adoption assistance payments exceeded the statutory limits by varying amounts. Michigan, for example, reported on our survey that it spent over $6 million on foster care maintenance payments in fiscal year 2002—well over its $2.2 million limit for such payments—while New Hampshire’s use of subpart 1 for foster care maintenance and adoption assistance payments was only about $27,000 above its limit. In total, these 10 states reported subpart 1 expenditures for foster care maintenance and adoption assistance payments that exceeded the statutory limits by over $15 million. Our survey results may underestimate the number of states with subpart 1 spending over the statutory limits, because several states reported on our survey that they used subpart 1 for foster care maintenance or adoption assistance payments, but were not able to identify the specific dollar amount of subpart 1 funds used for these purposes. Four of these 10 states with subpart 1 expenditures over the statutory limits were also part of the 15 states with CFS-101s that indicated planned spending above the limits. 
The remaining 6 states did not report estimated subpart 1 spending over these limits. For example, Colorado did not report any planned subpart 1 spending for foster care maintenance or adoption assistance payments. On our survey, however, the state reported using over $3 million in subpart 1 funds for these purposes, well over its $700,000 limit. Little research is available on the effectiveness of unique services funded by subpart 1 because few states have evaluated these services. While our survey data revealed no unique categories of services funded by subpart 1 on a national level, 37 states reported categories of services that were uniquely funded by subpart 1—that is, the individual state used subpart 1, but not subpart 2, to fund services in a particular category. For example, Delaware funded two CPS programs with subpart 1—assessments of a caregiver’s parenting ability and legal services to represent the child welfare agency in court cases—but did not use any subpart 2 funds for this service category. We contacted the states with unique service categories (other than administration, staff salaries, adoption assistance payments, or foster care maintenance payments), and none of these states had conducted rigorous evaluations of these services, although several states provided some data on the effectiveness of services included in these categories. Our literature review on the effectiveness of child welfare practices identified research for some of these unique service categories, such as certain types of family preservation programs. With two exceptions, however, it did not identify any evaluations of the specific services included in these categories. The most common service categories for which individual states used only subpart 1 funds were CPS, foster care maintenance payments, and staff salaries.
The 37 states generally reported 1 or 2 unique categories, with 14 states reporting 1 unique category and 1 state reporting a high of 6 categories. Examples of unique subpart 1 services in the CPS category include specialized investigations of reports of child abuse or neglect, telephone hotlines to report incidents of child abuse or neglect, and temporary shelter services for children removed from their homes at times when no other placement option is available, such as evenings and weekends. States also provided other types of services that were funded uniquely by subpart 1. For example, Minnesota provided intensive in-home services to prevent children from being placed in foster care, North Carolina contracted for legal services with the state’s Attorney General’s office, and Maine helped adopted youth pay for post-secondary education costs. Our review of child welfare literature and Internet sites that identified promising child welfare practices found few studies that evaluated the effectiveness of the specific services that states funded uniquely with subpart 1 funds. Based on the information provided on our survey, we identified evaluation research on two of these services. Texas used subpart 1 to fund its Home Instruction For Parents of Preschool Youngsters (HIPPY) Program. The goal of HIPPY is to prevent academic underachievement of children when they enter school. HIPPY works with parents in their homes or in parent group meetings to increase the degree and variety of literacy experiences in the home. The program also seeks to prevent child abuse by enhancing parent-child interactions and focuses on economically disadvantaged parents who may not be involved in parenting programs. While Texas has not formally evaluated this program, the model has been evaluated in other states. 
Strengthening America’s Families, a Web site about effective family programs to prevent juvenile delinquency funded by the Office of Juvenile Justice and Delinquency Prevention, cites HIPPY as a model program for which evaluations have shown positive effects on measured competence and classroom behavior at the end of second grade for children who participated in HIPPY, compared with children with no formal preschool experience. In addition, a 1999 article summarizing research on the HIPPY program found mixed results. For example, an evaluation in New York found that children whose parents participated in HIPPY in 1990 outperformed control group children on measures of classroom adaptation and reading scores 1 year later, but children whose parents participated in HIPPY in 1991 had outcomes similar to those of children in the control group. The article suggested that variability in how the program is implemented and in parental commitment to the program may explain the mixed results. Missouri funded an alternative response system with subpart 1 funds. When the risk to the child is not considered high, this system offers assessment services (rather than an investigation) to families that are the subject of a report of abuse or neglect, with the goal of determining whether the family needs services to reduce the risk of harm to the child. By responding to low risk reports of abuse or neglect in a nonaccusatory manner, the system seeks to encourage families to collaborate in identifying their needs and cooperate with supportive services. A 1998 evaluation of Missouri counties testing the state’s alternative response system found that the safety of children was not compromised by the lack of an investigation and that, compared to counties that were not using the alternative response system, needed services were delivered more quickly, subsequent reports of abuse or neglect decreased, and the cooperation of families improved.
An evaluation of Minnesota’s alternative response system has also shown promising results. For example, initial results from the randomized experimental evaluation showed an increase in the use of community services with no increase in subsequent reports of abuse or neglect. Of the 37 states that reported unique subpart 1 service categories, we asked the 22 whose unique categories included services other than foster care maintenance payments, adoption assistance payments, staff salaries, or administration whether they had evaluated the effectiveness of the programs included in those categories. None of these states had conducted rigorous evaluations of the effectiveness of these services using randomly selected control groups. One official explained that few states can afford to divert resources from providing direct services to conducting formal evaluations of programs, given the tremendous service needs of families involved with the child welfare system. However, 5 states provided some information on the outcomes of the services they funded uniquely with subpart 1. North Dakota used subpart 1 dollars to uniquely fund a component of its family preservation program—family focused services—which the state characterized as a family reunification service. The state provided us with a draft evaluation report of its family preservation program, which includes this specific service. The family preservation program is intended for families with children at risk of being placed in foster care and offers a range of services, including parent aides who provide hands-on parenting education and therapists who are available 24 hours a day to work with the family in the home to address the issues that may result in the children being removed from the home.
The evaluation of its total family preservation program found that both the families receiving services and the social workers involved with the families reported improved family functioning upon completion of the services, compared to their functioning prior to the services. The study also found that fewer children were at risk of being placed in foster care upon completion of services. However, the evaluation did not include any control group to determine if these results would have been achieved if families had not received these services. Massachusetts used some of its subpart 1 funds to pay for a contractor to operate a telephone service for reports of child abuse or neglect that are received in the evenings and on weekends. Officials from Massachusetts provided an internal study conducted in February 2000 that discussed problems with this telephone service, most notably limited staff and resources to handle an increasing volume of calls. The report recommended several actions to improve the operation of the telephone service, including increasing staff to field telephone calls, upgrading the telephone system so that fewer callers receive a busy signal, and increasing the number of beds available for emergency placements in the evenings and on weekends. Arizona also funded its child abuse telephone hotline uniquely with subpart 1 funds and provided the following statistics. In fiscal year 2003, 69 percent of calls to the hotline were answered without any wait. For the calls that were not answered directly, the average wait time was 3.5 minutes, and about 13 percent of calls were abandoned. In addition, quality assurance staff reviewed over 17,000 calls that had been determined not to meet the state’s criteria for a CPS report requiring investigation and reclassified only 15 of them as CPS reports.
Missouri funded several CPS services with subpart 1 funds, including intensive in-home services for children at imminent risk of removal from the home, and family-centered services for families for whom an investigation determined that services are needed to eliminate the risk of harm to the child. Missouri provided two annual reports for fiscal year 2002 that provide some data on the outcomes of these services. Consumer surveys indicated that many families found the intensive in-home services useful, and the annual report on the intensive in-home services indicated that 88 percent of at-risk children were still with their families when services ended after approximately 6 weeks. In addition, 79 percent of children who exited the program in 2001 were still at home 1 year after services ended. With regard to family-centered services, the annual report indicated that over 70 percent of families had achieved their goals at the time their case was closed. Wisconsin used subpart 1 to fund a Youth Aids Program, in which the state provides grants to counties for services to prevent the placement of children in correctional facilities and other out-of-home care. The state has not evaluated services provided by the counties, but a 1995 report notes that in the first several years of operation, this program produced major reductions in institutional placements and helped encourage the development of community-based resources. Over time, however, an increase in youth crime has led to large increases in institutional and out-of-home care, so that much of Youth Aids funding at the time was reported to be used for out-of-home placements. Despite its relatively small funding level compared to other funding sources for child welfare services, Title IV-B represents an important federal commitment to providing supportive services to help preserve and reunify families.
The primary emphases of the two subparts vary somewhat, but the range of services offered and the types of families served overlap significantly. In part because of the relatively small funding involved and the flexible nature of the funding, HHS does not provide in-depth oversight specific to Title IV-B subpart 1. Instead, HHS focuses much of its oversight efforts on states’ progress toward the overall goals of their child welfare systems and the outcomes achieved by these systems. While this type of oversight is appropriate, HHS could provide valuable assistance to states by obtaining more concrete data about states’ use of these funds and synthesizing these data with CFSR data on states’ outcomes with respect to properly identifying the service needs of children and families and providing needed services. Such analyses could allow HHS to develop information on how investments in certain types of services correlate to improved outcomes for children, which could be shared with states to help them more effectively target their spending. HHS could also use this enhanced knowledge of Title IV-B to help develop an appropriate accountability strategy for its newly proposed child welfare option. If enacted, the additional spending flexibility proposed—given the size of the Title IV-E allocations that would become available for spending on a variety of child welfare services—could have a significant impact on a state’s child welfare system. Given the limited information available about the services funded with subpart 1 and the effectiveness of these services, as well as HHS’s findings about the ability of states to meet families’ needs, ensuring that states use this flexibility to provide effective services will be critical to the success of this option. Opportunities also exist for HHS to continue to encourage states to conduct evaluations of the programs the states implement.
We recommend that the Secretary of HHS provide the necessary guidance to ensure that HHS regional offices monitor states’ use of Title IV-B subpart 1 funds for compliance with statutory restrictions on the use of these funds. In addition, we recommend that the Secretary consider the feasibility of collecting basic data on states’ use of these funds to facilitate its oversight of the program and to provide guidance to help states determine appropriate services to fund. For example, an analysis of how states’ spending patterns correlate to outcomes—both positive and negative—from the CFSRs could yield useful information for this purpose. Given that HHS is currently developing the new child welfare option that would allow states to use Title IV-E dollars for services similar to those provided under Title IV-B subpart 1, we further recommend that the Secretary use the information gained through enhanced oversight of subpart 1—as well as information it may have on states’ use of subpart 2 funds—to inform its design of this option. For example, HHS could use this information to help states determine the most appropriate services to provide under this option. We provided a draft of this report to HHS for comment. The Department’s Administration for Children and Families (ACF) provided comments, which are reproduced in appendix II. ACF also provided technical clarifications, which we incorporated when appropriate. ACF agreed with our recommendation that the Secretary of HHS provide the necessary guidance to ensure that HHS regional offices monitor states’ use of Title IV-B subpart 1 funds for compliance with statutory restrictions on the use of these funds. ACF agreed to provide guidance to the regional offices to enable them to enforce the statutory limits on subpart 1 funds.
However, ACF also noted that this limitation no longer serves a useful purpose and is incompatible with the current proposal to offer states much more flexibility in using other federal child welfare dollars. ACF said that it plans to explore ways to provide states flexibility with respect to the subpart 1 limits. ACF disagreed with our recommendation to consider the feasibility of collecting basic data on states’ use of subpart 1 funds. ACF said that it believes that its level of oversight is commensurate with the scope and intent of the program and minimizes states’ reporting requirements. Rather than using information on Title IV-B expenditures to help states most effectively use their resources, ACF believes that its oversight is more appropriately focused on the CFSR process, which requires states to develop actions in response to weaknesses identified by the CFSR and which measures the impact of these actions on actual outcomes. In ACF’s opinion, analyzing how states’ spending patterns correlate to CFSR results is not useful, given the lack of a direct relationship between the relatively small Title IV-B funding levels and the broad outcome areas of safety, permanency, and well-being. In addition, ACF noted that any data collected on subpart 1 expenditures would be outdated because states have 2 years to spend Title IV-B funds and are not required to report final expenditures until 90 days after the 2-year period has ended. We believe, however, that assessing the feasibility of collecting some basic data on states’ subpart 1 expenditures could enhance ACF’s overall oversight of states’ child welfare operations and outcomes. While the impact of states’ program improvement efforts under the CFSR process is unknown because states are just getting these efforts underway, the service deficiencies identified by the CFSRs suggest that states could benefit from some guidance on the services that are associated with positive CFSR outcomes.
An analysis of how states’ spending patterns correlate to CFSR outcomes need not be limited to subpart 1 spending; such an analysis could help to identify effective services (regardless of funding source) that are associated with positive CFSR outcomes and help states target their subpart 1 and other funding sources more effectively. Furthermore, we do not believe that 2-year-old data on subpart 1 expenditures are necessarily outdated; rather, we believe such data would provide better information on states’ use of subpart 1 funds than states’ current estimates of planned spending. In addition, ACF could request expenditure data for a shorter period, such as a year or a quarter, or whatever time period best fits states’ other reporting requirements. ACF did not comment on our recommendation that it use the information gained through enhanced oversight to inform its design of its child welfare option. However, we believe that guidance on services associated with positive CFSR outcomes could also help states that choose to participate in the proposed child welfare option to manage their fixed Title IV-E funding. ACF also commented on our finding that the services provided and families served under subparts 1 and 2 overlap to some extent. Specifically, ACF noted that not permitting the funds, services, and families to overlap would significantly impede the functionality of the continuum of child welfare services funded by Title IV-B and other federal funding streams and possibly lead to families not receiving needed services. While we described the overlap in services provided and families served, we did not state or imply that such overlap was inappropriate or unnecessary. We also provided a draft of this report to child welfare officials in the 4 states we visited (California, New Jersey, Ohio, and Washington).
Officials from California and Washington provided a few technical clarifications that we incorporated, while New Jersey and Ohio did not have any comments. In addition, Washington expressed concern that our recommendations for HHS to (1) ensure that the regional offices monitor states’ use of subpart 1 funds for compliance with the statutory limits and (2) consider collecting data on states’ use of these funds will add to the reporting burden of states without providing additional funds to offset that burden. We recommended that HHS consider the feasibility of collecting such data and would expect HHS to take into account the burden placed on states in making this decision. We are sending copies of this report to the Secretary of HHS, appropriate congressional committees, state child welfare directors, selected county child welfare directors, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions, or wish to discuss this report further, please call me at (202) 512-8403 or Diana Pietrowiak at (202) 512-6239. Key contributors to this report are listed in appendix III. To determine how the services provided and populations served under subpart 1 compare with those under subpart 2, we surveyed child welfare directors in all 50 states and the District of Columbia. We sent a survey to all states to obtain information on how they use Title IV-B funds. We also sent a second survey to certain states that responded to the first survey. We pretested both survey instruments in New Hampshire, Rhode Island, and Wisconsin and obtained input from several other states and from a Department of Health and Human Services (HHS) official. 
In January 2003, we mailed a copy of the first survey to the states, asking for specific data on state spending and populations served for subparts 1 and 2, as well as their opinions about the current structure of Title IV-B. To address differences in the administrative structure and reporting systems of state child welfare agencies, a different version of this survey was sent to states with county-administered child welfare systems. We received responses from 47 states, although some states were unable to provide complete information. To encourage as many states as possible to complete the survey, we conducted follow-up telephone calls to states that did not respond to our survey by the initial deadline. After a state responded to the first survey, we mailed the second survey, requesting more detailed information on the three services receiving the largest portions of subpart 1 funding and the three services receiving the largest portions of subpart 2 funding. The second survey also asked for copies of any existing evaluations of the effectiveness of these services. We sent the second survey to the 30 states that provided sufficient data on their first survey by mid-April 2003 and received responses from 17 states. We did not independently verify the information obtained through either survey. The responses of the 47 states to the first survey can be used to explain how the 50 states and the District of Columbia in general used Title IV-B funds. Since we received responses from only 17 states for our second survey, they may not be representative of all states. Consequently, we have used these data only as examples or for illustrative purposes. As a result, we based our analyses of the populations of children and families served on data from our first survey. 
However, states that completed the county-administered version of the survey did not provide data on the types of children and families who received services funded by Title IV-B and were not included in these analyses. As a result, the data on populations served by subparts 1 and 2 cannot be generalized to states with county-administered child welfare systems. Data from both surveys were double-keyed to ensure data entry accuracy, and the information was analyzed using statistical software. On the first survey, we asked states to describe the nature of each service and select one service category that best characterized each program funded by Title IV-B, using the following choices: child protective services (CPS), family support/prevention programs, parent training programs, health programs, educational programs, substance abuse programs, counseling and mental health services, domestic violence programs, formal family preservation programs, family reunification programs, recruitment and training for foster/adoptive parents, adoption preservation services, administration and management, foster care maintenance payments, adoption subsidy payments, and other. 
The data were analyzed using states’ self-identified categories except in the following situations: (1) if a state clearly described a program as funding salaries for staff at the child welfare agency, we included these data under the staff category; (2) if a state used the “other” category for a service that clearly fell into one of the existing categories (writing in “foster care maintenance payments,” for example), we revised the survey response to reflect the actual category; (3) if it appeared that a state mistakenly checked the wrong box, we selected the correct category (for example, we changed the category from CPS to family reunification if the program was described as a family reunification service); (4) if a state checked multiple categories, we reported these programs separately under “multiple responses”; (5) if a state did not check any categories, we selected a service category that best fit the description of the program and used “other” if the description did not clearly fall into one of our categories; and (6) if a state clearly described the use of Title IV-B funds as administrative, but categorized it in another category, we revised the survey response to indicate that the funds were used for administration and management. Some states explained that Title IV-B funds were used to cover administrative expenses for a particular program and characterized the use of these funds based on the nature of the program. For example, a state might have selected family preservation program when Title IV-B funds were used for administrative expenses for that program. As noted earlier in the report, we recognize that some states may not have separately identified administration or management expenses associated with a program and may have included these expenses in the program costs.
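The recoding situations described above amount to a small decision procedure. The following is a rough illustration only: the function name, field names, simplified category list, and keyword matching are all assumptions, and rule 3 (a mistakenly checked box), which required analyst judgment, is not modeled.

```python
# Illustrative sketch of the survey recoding rules; category names simplified.
CATEGORIES = {"cps", "family preservation", "family reunification",
              "foster care maintenance payments",
              "administration and management", "other"}

def recode(checked, description):
    """Apply the recoding rules to one reported program.

    checked: set of category boxes the state checked on the survey
    description: the state's free-text description of the program
    """
    desc = description.lower()
    # Rule 1: programs clearly described as agency staff salaries.
    if "staff salaries" in desc or "salaries for staff" in desc:
        return "staff"
    # Rule 6: clearly administrative uses are recoded to administration.
    if "administrative" in desc:
        return "administration and management"
    # Rule 2: an "other" write-in that matches an existing category.
    if checked == {"other"} and desc in CATEGORIES:
        return desc
    # Rule 4: multiple boxes checked are reported separately.
    if len(checked) > 1:
        return "multiple responses"
    # Rule 5: no box checked -> best-fit category, else "other".
    if not checked:
        return next((c for c in CATEGORIES if c in desc), "other")
    # Rule 3 (mistakenly checked box) required analyst judgment; not modeled.
    return next(iter(checked))
```

For example, under this sketch a program checked as both CPS and family preservation would be reported under "multiple responses," while an "other" write-in of "foster care maintenance payments" would be recoded to that category.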
For reporting purposes, we combined several service categories for which states reported spending small percentages of Title IV-B funds, such as parent training and substance abuse services, and reported these dollars in the “other” category. We recognize that the service categories used are not necessarily mutually exclusive. For example, several HHS officials told us that the delineation between family support and family preservation services is not clear, so that 2 states providing the same services to the same types of children and families may report them in different categories. In addition, because the survey for states with state-administered child welfare systems asked them to choose one service category for each program, the reported service categories may not fully capture all relevant programs that fall into more than one service category. Inconsistencies in how states categorized services could have an effect on any measured differences between service categories. To obtain more in-depth information on the services provided and the types of children and families served under Title IV-B, we conducted site visits in California, New Jersey, Ohio, and Washington. We selected these states to represent a range of geographic locations and subpart 1 spending patterns. In addition, because preliminary data indicated that significant subpart 1 funds were devoted to CPS, we selected states that used innovative CPS tools or processes. However, the experiences of these states are not necessarily representative of the experiences of any other state. During these site visits, we interviewed state and local child welfare officials and service providers and reviewed relevant documentation. To learn about the federal government’s role in overseeing subpart 1, we reviewed applicable laws and regulations and interviewed HHS central office officials. 
We also conducted interviews with HHS officials in all 10 HHS regional offices to discuss their oversight activities and reviewed results from HHS’s CFSR reports. In addition, we reviewed states’ CFS- 101s for fiscal year 2002 and compared states’ planned subpart 1 spending for foster care maintenance, adoption assistance, and child care payments with states’ final subpart 1 allocations for fiscal year 1979 as reported on an HHS program instruction from that year. States are required to submit the CFS-101 by June 30 of the preceding year—June 30, 2001, for fiscal year 2002. At that time, federal appropriations for Title IV-B and other federal child welfare funds often are not yet finalized, so states base their estimates on the previous year’s allocation. States must submit a revised CFS-101 by June 30, 2002, to request any additional fiscal year 2002 Title IV-B funds that might be available to them once appropriations are finalized. In addition, states can request additional Title IV-B funds if other states do not use the total funds to which they are entitled. In most cases, we reviewed the final revised CFS-101s approved by HHS. For one state, we used the initial CFS-101 approved by HHS because it included planned subpart 1 expenditures that exceeded the limits for foster care maintenance and adoption assistance payments, but the revised CFS-101 did not. Although the revised CFS-101 did not show the state planned to exceed the limit, we used the initial CFS-101 to show that HHS had previously approved a spending plan that did not comply with the statutory limits. We used our survey results to identify services unique to subpart 1—that is, categories of services funded by subpart 1 that are not funded by subpart 2. While no category of service was unique to subpart 1 at the national level, some states funded unique categories of services within their state with subpart 1. 
In our second survey, we asked states to provide a copy of any evaluations they had conducted of the three largest services funded by subpart 1. If we did not have survey data for one of the identified services, either because we did not send a second survey to the state or because the second survey did not ask for data on the particular service, we contacted the state directly to ask if any evaluation had been conducted. In addition, to identify other evaluations on the effectiveness of the services in these unique categories, we conducted a literature review and interviewed child welfare research experts. The reports and Internet sites we reviewed included the following: Strengthening America’s Families: Effective Family Programs for the Prevention of Delinquency (http://www.strengtheningfamilies.org/html/programs_1999/programs_list_1999.html). Child Welfare League of America’s Research to Practice Initiative (http://www.cwla.org/programs/r2p/). Casey Family Programs: Promising Approaches to Working with Youth and Families (http://www.casey.org/whatworks/). Promising Practices Network on Children, Families, and Communities (http://www.promisingpractices.net/). U.S. Department of Health and Human Services, “Emerging Practices In the Prevention of Child Abuse and Neglect” (Washington, D.C.: n.d.). We conducted our work between August 2002 and July 2003 in accordance with generally accepted government auditing standards. In addition to those named above, Melissa Mink and J. Bryan Rasmussen made key contributions to the report. Anne Rhodes-Kline, Alison Martin, Luann Moy, and George Quinn, Jr., provided key technical assistance. Child Welfare: Most States Are Developing Statewide Information Systems, but the Reliability of Child Welfare Data Could be Improved. GAO-03-809. Washington, D.C.: July 31, 2003. D.C. Child and Family Services: Key Issues Affecting the Management of Its Foster Care Cases. GAO-03-758T. Washington, D.C.: May 16, 2003.
Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services. GAO-03-397. Washington, D.C.: April 21, 2003.

Foster Care: States Focusing on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-03-626T. Washington, D.C.: April 8, 2003.

Child Welfare: HHS Could Play a Greater Role in Helping Child Welfare Agencies Recruit and Retain Staff. GAO-03-357. Washington, D.C.: March 31, 2003.

Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002.

District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children’s Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000.

Foster Care: HHS Should Ensure That Juvenile Justice Placements Are Reviewed. GAO/HEHS-00-42. Washington, D.C.: June 9, 2000.

Juvenile Courts: Reforms Aim to Better Serve Maltreated Children. GAO/HEHS-99-13. Washington, D.C.: January 11, 2000.

Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999.

Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999.

Foster Care: Effectiveness of Independent Living Services Unknown. GAO/HEHS-00-13. Washington, D.C.: November 5, 1999.

Child Welfare: States’ Progress in Implementing Family Preservation and Support Services. GAO/HEHS-97-34. Washington, D.C.: February 18, 1997.

Child Welfare: Opportunities to Further Enhance Family Preservation and Support Activities. GAO/HEHS-95-112. Washington, D.C.: June 15, 1995.
In 2001, states determined that over 900,000 children were the victims of abuse or neglect. In fiscal year 2003, subparts 1 and 2 of Title IV-B of the Social Security Act provided $697 million in federal funding for services to help families address problems that lead to child abuse and neglect. This report describes (1) the services provided and populations served under subparts 1 and 2; (2) federal oversight of subpart 1; and (3) existing research on the effectiveness of services unique to subpart 1—that is, when states used subpart 1, but not subpart 2, to fund programs in a particular service category. The report focuses primarily on subpart 1 because little research exists on this subpart, while studies have been conducted on subpart 2.

On a national level, GAO’s survey showed that the primary emphases of subparts 1 and 2 vary somewhat, but the range of services offered and the types of families served overlap significantly. No single category of service was funded solely by either subpart. In fiscal year 2002, states used subpart 1 funds most frequently for the salaries of child welfare agency staff, administrative and managerial expenses, child protective services, and foster care maintenance payments. Subpart 2 primarily funded family support, family preservation, family reunification, and adoption support services. Programs funded by the two subparts served similar types of populations—predominantly children at risk of being abused or neglected and their parents, as well as children in foster care and their parents.

HHS’s oversight focuses primarily on states’ overall child welfare systems and outcomes, but the agency provides relatively little oversight specific to subpart 1. For example, HHS works with states to establish goals to improve the safety and well-being of children and measure progress toward those goals. However, HHS has limited knowledge about how states spend subpart 1 funds. 
States submit an annual estimate of how they plan to use their subpart 1 funds in the upcoming year but provide no data on actual expenditures. HHS reports that it reviews these estimates for relatively limited purposes. We also found that HHS regional offices pay little attention to statutory limits on the use of subpart 1 funds for foster care maintenance and adoption assistance payments. For example, 9 of the 10 HHS regional offices do not monitor states’ compliance with these limits. As a result, HHS approved projected fiscal year 2002 spending plans from 15 states whose estimated spending exceeded the limits by over $30 million in total.

While GAO’s survey data revealed no unique service categories funded by subpart 1 on a national level, 37 states reported unique subpart 1 service categories within their state. Little research is available on the effectiveness of the services in these categories, such as hotlines to report child abuse and emergency shelter services. No state had conducted rigorous evaluations of these services, although several states provided some information on outcomes.
While insurers assume some risk when they write policies, they employ various strategies to manage overall risks so that they may earn profits, limit potential financial exposures, and build capacity—generally, equity capital that would be used to pay claims. For example, they charge premiums for the coverage provided and establish underwriting standards, such as (1) refusing coverage to customers who may represent unacceptable levels of risk or (2) limiting coverage offered in particular areas. Establishing underwriting standards also allows insurers to minimize the adverse consequences of “moral hazard,” which is “the incentive created by insurance that induces those insured to undertake greater risk than if they were uninsured, because the negative consequences are passed to the insurer.”

To manage potential financial exposures and also enhance their capacity, insurance companies may also purchase reinsurance. Reinsurers generally cover specific portions of the risk the primary insurer carries. For example, a reinsurance contract could cover 50 percent of all claims, up to $100 million, arising from a single hurricane over a specified time period in a specified geographic area. This type of contract, which specifies payments based on the insurer’s actual incurred claims, is called indemnity coverage. In turn, reinsurers act to limit their risks and moral hazard on the part of primary insurers by charging premiums, establishing underwriting standards, and maintaining close, long-standing business relationships with the insurers they cover.

In contrast to other types of insurance risks, catastrophic risk poses unique challenges for primary insurers and reinsurers. To establish their exposures and price insurance and reinsurance premiums, insurance companies need to be able to predict with some reliability the frequency and severity of insured losses. 
For example, the incidence of most property insurance claims, such as automobile insurance claims, is fairly predictable, and losses generally do not occur to large numbers of policyholders at the same time. However, catastrophes are infrequent events that may affect many households, businesses, and public infrastructure across large areas and thereby result in substantial losses that can impair insurer capital levels. Given the higher levels of capital that reinsurers must hold to address major catastrophic events (for example, hurricanes or earthquakes with expected annual occurrences of no more than 1 percent), reinsurers generally charge higher premiums and restrict coverage for such events. Further, as previously noted, in the wake of catastrophic events reinsurers and insurers may sharply increase premiums and significantly restrict coverage.

The reinsurance market disruptions associated with the Andrew and Northridge catastrophes provided an impetus for insurance companies and others to find different ways of raising capital to help cover catastrophic risk. The mid-1990s saw the development of catastrophe bonds, a capital market alternative to reinsurance (in the sense that other parties assume some of the insurer’s risks). Catastrophe bonds generally (1) are sold to qualified institutional investors such as pension or mutual funds; (2) provide coverage for relatively severe types of events such as hurricanes with an annual expected occurrence of 1 percent; and (3) pay relatively high rates of interest and have less than investment-grade ratings (because in some cases, investors may risk all of their principal if a specified catastrophe occurs). Catastrophe bonds also potentially expose investors to moral hazard because, absent the business relationships that typically characterize primary insurers and reinsurers, investors may lack information on insurer underwriting standards or the claims payment process. 
That is, an insurer that has issued a catastrophe bond may have incentives to lower its underwriting standards and offer coverage to riskier insureds because investors have less ability to monitor the insurer’s risk-taking than would a reinsurer with whom the insurer has done business for years. To minimize moral hazard, most catastrophe bonds are triggered by objective measures (also referred to as “nonindemnity-based” coverage) such as wind speed during a hurricane or ground movement during an earthquake rather than insurer loss experience (indemnity-based coverage). However, nonindemnity-based coverage exposes insurers to “basis risk,” which is the risk that the proceeds from the catastrophe bond will not be related to the insurer’s loss experience. For example, if a hurricane with a specified wind speed occurs, the insurer would automatically receive the proceeds of the catastrophe bond, which may be either higher or lower than its actual losses. See appendix III for additional information on the structure of catastrophe bonds.

Because insurance markets have been severely disrupted by catastrophic events, state and federal governments also have taken a variety of steps to enhance the capacity of insurers to address catastrophic risk. For example, Florida established FHCF to address hurricane risk, and California established CEA to address earthquake risk. Although these programs cover different risks and use different strategies as described in this report, they share a similar goal of ensuring that insurers can withstand catastrophic events and continue to make coverage available. Similarly, Congress enacted TRIA in 2002 to ensure the continued availability of terrorism insurance subsequent to the September 11 attacks. TRIA was designed as a temporary program that would remain in place until the end of 2005, when it was expected that insurers and reinsurers would have had time to establish a market for terrorism insurance. 
However, Congress is currently considering extending the 2005 deadline due to concerns about whether insurers will offer terrorism insurance after the act’s expiration. See appendix II for more information about TRIA.

Despite steps taken in recent years to strengthen insurer capacity for catastrophic risk, the industry has not yet been tested by a major catastrophic event or series of events. Overall, insurers increased their equity capital—financial resources available to cover catastrophic and other types of claims that exceed premium and investment income—from 1990 through 2003, but this measure of capacity has limitations, and therefore, the extent to which capacity has increased is not clear. For example, insurers’ exposures in risk-prone coastal and other areas have also increased over time, which could partially offset the increase in equity capital. However, state governments and insurers have taken other steps to enhance industry capacity for catastrophic risk, such as establishing state authorities, implementing stronger building codes, and reportedly implementing stronger underwriting standards. Several of these changes appear to have helped the industry withstand the 2004 hurricanes better than it withstood Hurricane Andrew in 1992, but a more severe catastrophe or catastrophes could have significant financial consequences for insurers and their customers.

The insurance industry’s equity capital levels commonly are used to assess capacity to cover catastrophic risk. As shown in figure 1, the Insurance Services Office, Inc. (ISO) found that from 1990 through 2003 industry equity capital increased from $194.8 billion to $347 billion on an inflation-adjusted basis. After steadily increasing for 18 years, insurers’ equity capital actually declined from 1999 to 2002 before rebounding in 2003. 
Capital levels declined for a variety of reasons, including a series of natural catastrophes in the late 1990s, declining stock prices that particularly affected the investments of large European reinsurers, and the losses associated with the September 11 attacks. Insurer capital increased in 2003 for several reasons that include lower losses associated with natural catastrophes. According to information from ISO, the industry’s capital level did not decline in 2004 even though insurers experienced significant losses associated with the 2004 hurricane season.

Although insurers’ equity capital has generally increased over time, it is difficult to determine whether the growth in insurer equity capital has resulted in a material increase in the industry’s relative capacity to pay claims. Insurers may also face significant financial exposure in areas prone to natural catastrophes such as the southeastern United States, which could partially offset the increase in insurer capital over the years. However, individual insurers do not make publicly available specific information about the extent to which they write policies in risk-prone areas, the terms offered on these policies, or the level of reinsurance that they purchase to help cover these risks, which complicates assessments of insurer capacity.

We have also identified other limitations to using equity capital as a measure of insurance industry capacity. First, in any given catastrophe, only a portion of the industry’s capital (and its other resources, such as catastrophe reinsurance) is available to pay disaster claims because the insurance industry as a whole does not pay catastrophe claims. Instead, individual insurance companies pay claims on the basis of the damage that particular catastrophes inflict on the properties they insure. An insurer writing policies only in one state would not have to pay any claims if a catastrophe occurred in another state. 
Second, only a portion of equity capital would be available to cover catastrophe claims because the capital may also be needed to pay claims from all of the other types of risk that insurers have assumed should the experience of those risks prove unfavorable.

To better understand insurers’ capacity to address natural catastrophe risks, we contacted two rating agencies that monitor the insurance industry. According to one rating agency official, most insurance companies the agency rated in 2003 were financially secure. The rating agency determines the financial strength of insurance companies and their ability to meet ongoing obligations to policyholders by analyzing companies’ balance sheets, operating performance, and business profiles. According to officials from one rating agency, when establishing an insurance company’s rating, the agency considers an insurer secure if the company would have enough capital after a catastrophic event to maintain the same rating. In other words, to maintain a secure rating, insurers must demonstrate that they are able to absorb losses from a hurricane with a 1 percent chance of occurring annually or an earthquake with a 0.4 percent chance of occurring annually. Officials from one rating agency told us that of the 1,058 companies it rated in 2003, 904 obtained secure ratings, meaning that they would be able to meet ongoing obligations to policyholders and withstand adverse economic conditions, such as major catastrophes, over a long period of time. Conversely, 164 insurance companies obtained vulnerable ratings, meaning that they might be able to meet only their current obligations to policyholders, or might not be able to meet them at all. 
Although this rating agency’s analysis concludes that nearly 90 percent of insurers would remain financially secure under major catastrophe scenarios, other information suggests that such events could result in significant insurance market disruptions and the inability of insurers to meet their financial obligations to policyholders. This information is discussed in a later section.

While independently assessing insurer capacity for catastrophic risk is challenging due to limitations associated with the equity capital measure and the lack of key data—such as insurers’ reinsurance purchases—state governments and insurance companies have taken steps that have the potential to mitigate insurer losses and enhance industry capacity. We discuss several of the measures that were initiated to strengthen the insurance industry’s capacity to respond to catastrophic events, including the creation of state-run programs, changes to building codes, shifts in underwriting, and market innovations.

After Hurricane Andrew, the State of Florida established FHCF to act as a reinsurance company for insurers that offer property-casualty insurance in the state. According to officials from FHCF, Florida insurance regulators, and insurance companies that offer coverage in the state, FHCF enhances industry capacity by (1) offering reinsurance at lower rates than private reinsurers for catastrophic risk, thereby increasing the number of primary companies willing to write policies in the state; (2) ensuring that primary companies will be compensated up to specified levels when a catastrophic hurricane occurs; and (3) continuing to offer reinsurance at relatively stable rates in the immediate aftermath of hurricanes. Residential property insurers are required by state law to participate in the FHCF program. Coverage from FHCF is triggered when participating companies’ losses meet their share of an aggregate industry retention level of $4.5 billion, and coverage is capped at $15 billion. 
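The trigger mechanics just described, an aggregate industry retention of $4.5 billion with total coverage capped at $15 billion, can be illustrated with a minimal sketch. The dollar figures come from this report; treating a company's retention and cap as pro-rata shares of the industry figures, and the 90 percent reimbursement rate, are simplifying assumptions for illustration, not the fund's actual reimbursement formula.

```python
# Rough sketch of an FHCF-style recovery calculation. The $4.5 billion
# industry retention and $15 billion cap are from the report; the pro-rata
# retention/cap shares and the 90 percent reimbursement rate are
# illustrative assumptions only.

INDUSTRY_RETENTION = 4.5e9  # aggregate industry retention level (dollars)
FUND_CAP = 15e9             # total FHCF coverage available (dollars)

def fhcf_recovery(insurer_loss, market_share, reimbursement_rate=0.90):
    """Estimate one insurer's recovery for a single hurricane."""
    retention = INDUSTRY_RETENTION * market_share  # the insurer's "deductible"
    excess = max(0.0, insurer_loss - retention)    # losses above the retention
    return min(excess * reimbursement_rate, FUND_CAP * market_share)

# A company with a 2 percent market share retains $90 million, so a
# $200 million loss leaves $110 million reimbursable under these assumptions.
print(f"${fhcf_recovery(200e6, 0.02):,.0f}")
```

Under these assumptions, a company whose losses stay below its retention share, as happened to many participating insurers in 2004, would recover nothing from the fund.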
FHCF is financed from three sources: actuarially based premiums charged to participating insurers, investment earnings, and emergency assessments on Florida insurance companies if needed. FHCF may also issue bonds to meet its obligations.

In 2002, Florida also established Citizens Property Insurance Corporation (Citizens), a state-run, tax-exempt primary insurer that offers coverage for a premium to homeowners who cannot obtain property insurance from private companies. Citizens writes full residential coverage in all 67 Florida counties and wind-only coverage in the coastal areas of 29 counties. Citizens’ claims-paying resources include premiums, assessments on the industry if its financial resources fall to specified levels, and reinsurance from FHCF.

After the Northridge earthquake, the State of California established CEA to provide residential earthquake insurance. Insurers that sell residential property insurance in California must offer their policyholders separate earthquake insurance. Companies can offer a private earthquake policy or a CEA policy, but most choose the CEA policy. Only insurance companies that participate in CEA can sell CEA policies. The funds to pay claims come from premiums, contributions from and assessments on member insurance companies, borrowed funds, reinsurance, and the return on invested funds. As discussed in appendix II, only about 15 percent of eligible customers in California purchase earthquake insurance, apparently in part because many potential customers believe that premiums and deductibles are too high.

In 1994, in the wake of Hurricane Andrew, Miami-Dade and Broward counties enacted a revised South Florida Building Code to ensure that buildings would be designed to withstand both the strong wind pressures and impact of wind-borne debris experienced during a hurricane. In March 2002, Florida instituted a statewide building code that implemented similar requirements and replaced a complex system of 400 local codes. 
The Florida Building Code was based on a national model code, which was amended where necessary to address Florida’s specific needs for added hurricane protection requirements. The code also created a High Velocity Hurricane Zone to continue use of the South Florida Building Code’s design and construction measures for the highly vulnerable Miami-Dade and Broward counties. Local jurisdictions may amend the code to make it more stringent when justified and are responsible for administering and enforcing it. According to a 2002 study, building codes have the potential to significantly reduce the damage caused by hurricanes. The study found that residential losses from Hurricane Andrew would have been about $8.1 billion lower if all South Florida homes had met the current Miami-Dade and Broward code.

In California, there is no statewide building code, but certain counties did implement stronger building codes after the Northridge earthquake in 1994. For example, Los Angeles County made its building code stronger after Northridge and has implemented several updates since then. According to a CEA official, the California legislature has tried to enact a statewide building code since 1996 but has been unable to reach a consensus. Florida and California officials we contacted said that while stronger building codes have been implemented, many older structures that have not been retrofitted remain vulnerable to hurricane or earthquake damage.

According to insurance market participants, many, if not all, insurance companies and state authorities currently use computer programs offered by several modeling firms to estimate the financial consequences of various natural catastrophe scenarios and manage their financial exposures. To generate the loss estimates, the computer programs use large databases that catalog the past incidence and severity of natural catastrophes as well as proprietary insurance company data on policies written in particular states or areas. 
Using the estimates provided by these computer programs, insurers can attempt to manage their exposures in particularly high-risk areas. For example, an insurer could estimate the impact to the company of a hurricane with specified wind speeds striking Miami, given the number of policies that the insurer has written in the city as well as the value of insured property. Based on these types of estimates, companies can manage their risk and control their exposures (for example, by limiting the number and volume of policies written in a particular area or purchasing reinsurance if available on favorable terms) so that their losses are not expected to exceed a particular threshold, such as a specified percentage of their existing equity capital (a commonly used measure is from 10 to 20 percent of capital). According to industry officials we contacted, insurance and reinsurance companies generally use the computer programs to have greater confidence that they would have sufficient capital remaining to meet their obligations to customers and remain in business even in the aftermath of a major event. Whether individual companies are successful in managing their losses should such an event occur will depend in part on the accuracy of the estimates and the quality of the company’s risk management practices.

Although the use of models and other revised underwriting standards may enhance insurers’ ability to control the financial consequences they experience from natural catastrophes, one effect may be reduced insurance availability. To the extent that private insurers reduce their exposures in risk-prone areas, consumers may be able to obtain property insurance only from state authorities. For example, according to Citizens officials, the organization provides 70 percent or more of the wind coverage in sections of Palm Beach, Broward, and Dade counties. 
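The exposure-management rule of thumb described above, keeping modeled losses below a specified share of equity capital, can be sketched as follows. The 10 to 20 percent tolerance range is from this report; the scenario names and loss figures are invented for illustration.

```python
# Sketch of the exposure-control rule of thumb described in the report:
# flag any modeled catastrophe scenario whose estimated loss exceeds a
# tolerance expressed as a share of the insurer's equity capital.
# The scenario losses below are invented for illustration.

def scenarios_over_tolerance(scenario_losses, equity_capital, tolerance=0.15):
    """Return the scenarios whose modeled loss exceeds the capital tolerance."""
    threshold = equity_capital * tolerance
    return [name for name, loss in scenario_losses.items() if loss > threshold]

scenarios = {
    "Miami hurricane (1-in-100)": 450e6,
    "Los Angeles earthquake (1-in-250)": 220e6,
    "Gulf Coast hurricane (1-in-20)": 90e6,
}

# With $2 billion of equity capital and a 15 percent tolerance, the
# threshold is $300 million, so only the Miami scenario breaches it.
print(scenarios_over_tolerance(scenarios, 2e9))
```

A company whose list came back nonempty might respond in the ways the report describes, by limiting the policies it writes in the area or by purchasing reinsurance on favorable terms.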
Although state authorities can ensure that coverage is available in risk-prone areas, such insurers are generally not able to diversify their insurance portfolios and may suffer disproportionate losses when catastrophes occur.

Insurers have also increased policyholder deductibles for certain natural catastrophe risks in risk-prone areas. For example, prior to Hurricane Andrew in 1992, insurers in Florida generally required homeowners to pay a standard deductible of $500 for wind-related damage and would cover remaining losses to specified limits. After Hurricane Andrew, the Florida legislature instituted percentage hurricane deductibles. For homes valued at $100,000 or more, insurers may now establish deductibles of from 2 to 5 percent of the policy limits for hurricane damage. According to an insurance association, 2 percent is the most common deductible level, although 5 percent deductibles are widespread on higher-priced dwellings. The percentage deductible is much higher than the previous flat deductible and, because its dollar value rises as property values rise, also limits insurers’ exposure to inflation-driven increases in property values. General deductibles—usually $500—still apply to all homeowner policies for nonhurricane losses, including tornadoes, severe thunderstorms, and fire. Moreover, according to information from insurance market participants, percentage deductibles are now standard in risk-prone areas throughout the United States.

Insurers and analysts we contacted said that the growth of the Bermuda reinsurance market over the past 15 years has enhanced the industry’s capacity to withstand natural catastrophes. According to an industry report, many reinsurance companies were incorporated in Bermuda after Hurricane Andrew in 1992 and the September 11 attacks to take advantage of the high global premium rates for catastrophic coverage, and many specialize in catastrophe risk. 
Additionally, regulatory and industry officials we contacted said that Bermuda’s favorable tax environment (no corporate income or capital gains taxes), a flexible regulatory environment that permits companies to be created more quickly than in other jurisdictions, and a concentration of individuals with insurance expertise have contributed to the growth of the Bermuda insurance market. According to a Bermuda insurance industry association, Bermuda reinsurers currently provide a total of 50 percent of all Florida reinsurance. One large primary company we contacted said that Bermuda companies are of critical importance to its overall risk management strategy. In addition, one state authority official reported buying reinsurance from companies in Bermuda. Other industry participants noted that Bermuda companies have diversified the worldwide reinsurance market.

Moreover, some Bermuda companies specialize in providing reinsurance to about 30 primary companies that were established to “take out” policies from Citizens. Citizens pays bonuses to primary companies, called take out companies, as an incentive to assume the liability on policies that are taken out for 3 years. The bonuses are based on a percentage of the premiums for the policies taken out of Citizens. According to Florida insurance regulators, many of the take out companies, therefore, have substantial exposure to hurricane risk. We note that some analysts have questioned the extent to which the Bermuda market has enhanced insurer capacity, since some of the capital raised by Bermuda insurers may represent funds invested by existing insurance companies.

The four hurricanes that struck within a 6-week period in 2004 provided the first test of the steps the state and the insurance industry have taken to enhance industry capacity since Hurricane Andrew (see fig. 2). 
As of the end of 2004, the hurricanes had generated an estimated 1.5 million claims from property owners, with over $20 billion in insured losses in Florida—equating to losses with an expected annual occurrence of 2 to 5 percent (that is, a 1-in-20 to a 1-in-50 year loss). Although many insurers incurred significant losses, 1 take out company failed, and some insurers are restricting coverage and requesting rate increases, industry participants and state officials generally agreed that the steps taken after Hurricane Andrew in 1992 helped the industry better absorb the hurricane losses and provided stability in the insurance markets. For example, only 1 company failed in 2004, in contrast to 11 that failed after Andrew. According to one modeling firm official, while the hurricane losses are significant, insurers typically plan to absorb more than double the losses experienced in these four events. However, some of the steps taken after Andrew were designed to manage losses from a single storm similar to Andrew, rather than the unusual occurrence of four hurricanes making landfall in the United States and causing major damage in the same general area. Therefore, state officials and insurers are considering further changes to better address the potential for a future hurricane season with similar events.

FHCF’s payments to its members were limited because four relatively mid-sized hurricanes struck Florida rather than one major storm such as Andrew. As previously discussed, FHCF payments to its members are generally triggered when members’ losses from a particular storm reach $4.5 billion (a company may receive FHCF payments if its losses exceed its individual retention level—or deductible—even if overall industry losses are less than $4.5 billion). According to an FHCF official, all four storms are expected to trigger FHCF recoveries totaling about $2 billion in payments to 123 of about 230 participating insurers. 
FHCF members that did not receive payments, including Citizens, did not have losses that reached their individual retention levels (see fig. 3). As a result of the 2004 hurricanes, Florida officials are considering changes to FHCF, such as lowering the industry retention level from the current $4.5 billion, lowering the retention after the second hurricane in a season, or applying a single hurricane-season retention rather than the per-hurricane retentions currently in place.

Officials of reinsurance companies (other than the Bermuda companies described in a subsequent section) said that their losses from the 2004 hurricanes were also limited, for the same general reasons as FHCF’s. That is, reinsurance contracts typically require primary companies to retain a specified percentage of the losses associated with hurricanes and are written on a per occurrence basis. The reinsurance company officials said that each of the four hurricanes generally did not result in losses that exceeded the primary companies’ retention levels. Additionally, reinsurers’ exposures may have been limited because primary companies only purchased reinsurance for one or two storms and may not have purchased reinsurance coverage for a third or fourth storm. Because, in general, many reinsurance companies were not significantly affected by the 2004 hurricane season, insurance market analysts generally do not expect significant increases in reinsurance premiums similar to those that took place after Hurricane Andrew in 1992.

Although it is too early for definitive conclusions, insurers, a Florida regulatory official, and a consumer representative we contacted said that the state’s revised building codes may have mitigated insurer losses from the 2004 hurricanes. For example, a recent study of damage caused by Hurricanes Charley, Frances, and Ivan found that structures built according to the new building codes fared better than structures built under older building codes. 
However, in some cases, insurance market participants said that newer structures sustained damage despite the revised building codes. For example, the officials said that materials blown off of older structures struck newer buildings, causing damage such as shattered windows. In addition, Florida officials reported that some builders of structures subject to revised codes did not use proper materials or techniques, which resulted in damage and losses.

Overall, insurance companies and other industry participants reported that steps insurers took based on information generated by computer models of exposures mitigated their losses during the 2004 hurricane season; however, some insurers noted that the models did not accurately estimate their actual losses. According to two modeling firm representatives, the purpose of catastrophe modeling is not to predict exact losses from specific storms but to anticipate the likelihood and severity of potential future events so that companies can prepare accordingly. Insurers and other industry participants also reported some aspects of the models that could be improved. Insurance industry officials noted that the models did not take into account the increased cost of labor and construction materials after the hurricanes (known as demand surge). In addition, companies noted that the models did not take into account the impact of damage caused to the same properties by storms with overlapping tracks. Officials from the modeling firms told us that since the models are based on historical data, they do factor in the possibility of multiple events in 1 year. However, one firm noted that the models assume that the damage caused by each event is independent. Representatives from three modeling firms told us that the companies will incorporate meteorological and claims data from the 2004 hurricane season into their models and consider other improvements in future upgrades. 
Insurance company and other industry officials we contacted said that using percentage-based deductibles mitigated losses associated with the 2004 hurricanes. However, Florida insurance regulatory officials told us that some consumers complained that they were surprised by the high amount of their deductibles. In addition, with multiple storms sometimes crossing the same paths, paying multiple deductibles became an issue of consumer fairness. According to state regulatory officials, some insurance companies have decided to apply a single deductible to all their policies. Some insurers we interviewed said that they are deciding on a case-by-case basis whether multiple deductibles should apply. For example, one insurer told us that if the claims adjuster could not determine what damage was caused by what storm, generally only one deductible would be applied. According to state regulatory officials, there are approximately 29,000 cases of multiple deductibles. On December 16, 2004, the state legislature passed legislation to reimburse policyholders who had to pay multiple deductibles. According to the new law, up to $150 million will be borrowed from FHCF to provide grants of up to $10,000 to policyholders subject to two deductibles and up to $20,000 for policyholders subject to three or more deductibles. Funds borrowed from FHCF will be repaid by increasing insurers’ FHCF premiums beginning in 2006. For policies issued or renewed on or after May 1, 2005, the new law also permits insurers to apply a single deductible for each hurricane season. When the deductible is exhausted, the deductible for other perils—generally $500—will be applied to claims for damage from subsequent storms. Bermuda reinsurers are expected to pay a significant amount of reinsurance losses compared with other reinsurance companies because of their specialization in catastrophe risk (such as providing reinsurance to take out companies). 
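The deductible mechanics described above can be sketched in a few lines of code. This is an illustrative example only: the insured value, the 2 percent deductible rate, and the storm losses are hypothetical assumptions rather than figures from this report, and the single-season rule is simplified (it ignores partial-exhaustion edge cases within a single storm).

```python
# Hypothetical sketch of percentage-based hurricane deductibles,
# comparing per-storm deductibles with the single-season deductible
# permitted under the 2004 Florida law described above.

def hurricane_deductible(insured_value, rate):
    """Percentage-based deductible: a share of the insured value."""
    return insured_value * rate

def out_of_pocket_per_storm(losses, deductible):
    """Each storm triggers its own deductible (the pre-2005 practice)."""
    return sum(min(loss, deductible) for loss in losses)

def out_of_pocket_single_season(losses, deductible, other_peril_deductible=500):
    """One hurricane deductible per season; once exhausted, claims from
    subsequent storms are subject only to the other-perils deductible
    (generally $500, per the law described above). Simplified sketch."""
    remaining = deductible
    total = 0
    for loss in losses:
        if remaining > 0:
            applied = min(loss, remaining)
            total += applied
            remaining -= applied
        else:
            total += min(loss, other_peril_deductible)
    return total

# Hypothetical: $200,000 home, 2 percent hurricane deductible, three storms.
ded = hurricane_deductible(200_000, 0.02)             # $4,000 per storm
storm_losses = [10_000, 6_000, 3_000]
print(out_of_pocket_per_storm(storm_losses, ded))     # 4000 + 4000 + 3000
print(out_of_pocket_single_season(storm_losses, ded)) # 4000 + 500 + 500
```

The gap between the two totals in this hypothetical (a $6,000 difference across three storms) illustrates why multiple deductibles became a consumer-fairness issue after the 2004 season.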
A Bermuda insurance industry association representative estimated that Bermuda reinsurers will pay about $2.6 billion in losses from the four hurricanes, or about 10 percent of the total losses. These losses could exhaust from 25 to 40 percent of companies’ earnings for 2004. The Bermuda insurance industry association official noted that no Bermuda companies are expected to fail as a result of these losses and that the ratings of Bermuda companies have not been affected by the hurricane losses. The association official also said that these companies are well capitalized and have had several years with low catastrophe losses. While state government and insurer measures initiated since the 1990s likely facilitated insurers’ ability to respond to the 2004 hurricane season, an event with losses representing an expected annual occurrence of no more than 1 percent to 0.4 percent could have major consequences for insurers and insurance availability. Neither the 2004 hurricane season, as discussed previously, nor Hurricane Andrew nor the Northridge earthquake qualified as an event with losses representing a 1 percent expected annual occurrence, yet many insurers experienced significant losses and some restricted coverage as a result of these catastrophes. It follows that a more severe hurricane (or series of hurricanes) or earthquake with estimated losses of $50 billion or more would have even more severe consequences. For example, FHCF’s total available financial resources of $15 billion are intended to cover losses from a hurricane with an estimated occurrence of about 2 percent annually (approximately a 1-in-50-year event). If a more severe hurricane or series of hurricanes struck Florida, FHCF’s financial resources would be exhausted, and it would likely impose assessments on the insurance industry to cover the costs of bonds issued to meet its obligations. Insurers, in turn, might impose higher premiums on policyholders to cover the cost of these assessments. 
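The "expected annual occurrence" figures used throughout this discussion follow the standard return-period relationship: an event with an annual exceedance probability of p is roughly a 1-in-(1/p)-year event. A minimal sketch of that arithmetic, using the probabilities cited in the text:

```python
# Return period = reciprocal of the annual exceedance probability.
# The probabilities below are the ones cited in the surrounding text:
# 2% (roughly the FHCF design event), 1%, and 0.4%.

def return_period_years(annual_probability):
    """Convert an annual exceedance probability to a return period in years."""
    return 1.0 / annual_probability

for p in (0.02, 0.01, 0.004):
    print(f"{p:.1%} annual occurrence ~ 1-in-{return_period_years(p):.0f}-year event")
```

So a 2 percent annual occurrence corresponds to approximately a 1-in-50-year event, 1 percent to a 1-in-100-year event, and 0.4 percent to a 1-in-250-year event.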
Moreover, a severe hurricane would likely impose much higher losses on the reinsurance industry than did the 2004 hurricane season, particularly because primary insurers’ losses may exceed the retention levels specified in their reinsurance contracts. Our previous work, as well as recent discussions with NAIC officials, also indicates that a catastrophe with an expected occurrence of no more than 1 percent annually would likely cause a significant number of insurer insolvencies among companies with high exposures to such events and inadequate risk management practices. Several assessments by state catastrophe authorities, such as FHCF and Citizens, and state guaranty funds (described next) could reduce insurers’ equity capital, which would already be strained by significant losses. Insurers that experience substantial losses and declines in equity capital would likely face rating downgrades from the rating agencies. Consequently, such companies might no longer be able to meet their obligations to their customers and state authorities could intervene to ensure that some claims were paid. All states have established so-called guaranty funds, which are financed by assessments on the insurance industry for this purpose. However, it is not clear that the state guaranty funds would have sufficient resources to withstand the failures of many insurers associated with a major catastrophic event or series of events. Insurers’ reactions to past catastrophic events—for example, restrictions on reinsurance coverage and higher reinsurance premiums—and the potential consequences for insurers from an even more severe catastrophe have generated financial instruments and proposals designed to enhance industry capacity for both natural events and terrorist attacks. Catastrophe bonds serve as a potential means for insurers to tap the large financial resources of the capital markets to cover the large exposures associated with potential catastrophes. 
In fact, several insurance and reinsurance companies currently use catastrophe bonds to enhance their capacity to cover low-probability, high-severity natural events, although catastrophe bonds have not yet been issued to cover terrorism risk in the United States. However, catastrophe bonds are not widely used in the insurance industry due to their relatively high cost compared with reinsurance, among other factors. Some insurance market analysts have also advocated changing U.S. tax laws and accounting standards to permit insurers to set aside reserves on a tax-deductible basis to increase their capacity for both natural catastrophes and terrorist attacks. However, tax-deductible reserves involve tradeoffs, such as lower federal revenues, and some analysts believe that the reserves would not materially enhance capacity because insurers might substitute reserves for existing reinsurance coverage, the cost of which is tax deductible. According to private-sector data, the value of outstanding catastrophe bonds increased substantially from 1997 through 2004 (see fig. 4). The value of outstanding catastrophe bonds worldwide increased about 50 percent from year-end 2002 to year-end 2004, to $4.3 billion. However, at $4.3 billion, the value of outstanding catastrophe bonds was small compared with industry catastrophe exposures. For example, a major hurricane striking densely populated regions of Florida alone could cause more than an estimated $50 billion in insured losses. As discussed in our previous reports, some insurance and reinsurance companies view catastrophe bonds as an important means of diversifying their overall strategy for transferring catastrophe risks, which traditionally involves purchasing reinsurance or retrocessional coverage. By raising funds from investors through the issuance of catastrophe bonds, insurers can expand the pool of capital available to cover the transfer of catastrophic risk. 
In addition, most of the catastrophe bonds issued provide coverage for catastrophic risk with high financial severity and low probability (such as events with an expected occurrence of no more than 1 percent annually). Consequently, none of the bonds issued to date that include coverage of U.S. wind risk were triggered by the 2004 hurricane season. According to various financial market representatives, because of the larger amount of capital that traditional reinsurers need to hold for high-severity, low-probability events, reinsurers limit their coverage and charge increasingly higher premiums for these risks. Representatives from one insurance company said that the company cannot obtain the amount of reinsurance it needs for the highest risks at reasonable prices and has obtained some of its reinsurance coverage in this risk category from catastrophe bonds as a result. This firm and other market participants said that the presence of catastrophe bonds as an alternative means of transferring risk may have moderated reinsurance premium increases over the years. Some insurers also find catastrophe bonds beneficial because they pose little or no credit risk. That is, financial market participants told us that insurers can be exposed to the credit risk of reinsurers not being able to honor their reinsurance contracts if a natural catastrophe were to occur. Catastrophe bonds, on the other hand, create little or no credit risk for insurers because the funds are immediately deposited into a trust account upon bond issuance to investors. Representatives from some insurers we contacted said that while they recognized that some reinsurers’ credit quality had declined in recent years, they guarded against credit risks by establishing credit standards for the companies with whom they do business and continually monitoring their financial condition. Some institutional investors we contacted also expressed positive views about catastrophe bonds. 
Some investors said that the bonds offered an attractive yield compared with traditional investments. These institutional investors also said that they purchased catastrophe bonds because they were uncorrelated with other risks in bond portfolios and helped diversify their portfolios. Although catastrophe bonds benefit some insurers and institutional investors, others we contacted said they do not issue or purchase catastrophe bonds for a number of reasons, which may have limited the expansion of the market. Some state authorities we contacted and many insurers view the total costs of catastrophe bonds—including transaction costs such as legal fees—as significantly exceeding the costs of traditional reinsurance. Insurer and state authority officials also said that they were not attracted to catastrophe bonds because they generally covered events with the lowest frequency and the highest severity. Rather, the officials said that they would prefer to obtain coverage for less severe events expected to take place more frequently. In addition, a recent study concluded that the fact that most catastrophe bonds are issued on a nonindemnity basis has limited the growth of the market because such bonds expose insurers to basis risk (the risk that the provisions that trigger the catastrophe bond will not be highly correlated with the insurer’s loss experience). Representatives from some institutional investors said that the risks associated with catastrophe bonds were too high or not worth the costs associated with assessing the risks. Some institutional investors also said that they decided not to purchase catastrophe bonds because they were considered illiquid. However, capital market participants we contacted said that the liquidity of the catastrophe bond market has improved. 
Moreover, the catastrophe bond market has generally been limited to coverage of natural disasters because the general consensus of insurance and financial market participants we contacted was that developing catastrophe bonds to cover potential targets against terrorism attacks in the United States was not feasible at this time. In contrast to natural catastrophes, where a substantial amount of historical data on the frequency and severity of events exists, terrorism risk poses challenges because it is extremely difficult to reliably model the frequency and severity of terrorist acts. Although several modeling firms are developing terrorism models that are being used by insurance companies to assist in their pricing of terrorism exposure, most experts we contacted said these models were too new and untested to be used in conjunction with a bond covering risks in the United States. Furthermore, potential investor concerns—such as a lack of information about issuer underwriting practices or the fear that terrorists would attack targets covered by catastrophe bonds—could make the costs associated with issuing terrorism-related securities prohibitive. Our previous work also identified certain tax, regulatory, and accounting issues that might have affected the use of catastrophe bonds. We have updated this work and discuss it in detail in appendix III. Tax-deductible reserves could confer several potential benefits, according to advocates of the proposal, but others argue that reserves would not bring about a meaningful increase in industry capacity. First, supporters of tax-deductible reserves argue they would provide insurers with financial incentives to increase their capital and thereby expand their capacity to cover catastrophic risks and avoid insolvency. 
Supporters also argue that they would lower the costs associated with providing catastrophic coverage and encourage insurers to charge lower premiums, which would increase catastrophic coverage among policyholders. Moreover, as mentioned in our discussion of catastrophe bonds, the risk exists that reinsurers might not be able to honor their reinsurance contracts if a natural catastrophe were to occur. Allowing insurers to establish tax-deductible reserves could help ensure that funds are available to pay claims if a catastrophe were to take place. Finally, information from NAIC indicates that under current accounting rules, insurers are not required to fully disclose the financial risks that they face from natural catastrophes and that these risks are not accounted for on insurers’ balance sheets. NAIC officials argue that requiring insurers to establish a mandatory reserve on their balance sheets and disclose it in the footnotes of the financial statements would make insurers’ financial statements more transparent and provide better information about the potential catastrophic risks that they face. An NAIC committee has made a catastrophe reserve proposal—which the NAIC has not officially endorsed—that would require insurers to gradually build up industrywide catastrophe reserves totaling $40 billion over a 20-year period, at not more than $2 billion per year. The NAIC committee’s proposal would make such reserves mandatory to promote the safety and soundness of the insurance industry. The committee’s proposal would also stipulate that specified events—such as an earthquake, wind, hail, or volcanic eruption—could trigger a drawdown from the reserves and that the President of the United States or Property Claim Services would have to declare that a catastrophe had occurred. The proposal would specify that, for insurers to make a drawdown on the reserve, either their own losses must reach a certain level or industry catastrophe losses must exceed $10 billion. 
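The build-up arithmetic in the NAIC committee proposal (an industrywide reserve of $40 billion accumulated at no more than $2 billion per year) implies a minimum 20-year schedule. A rough sketch, assuming an even annual contribution; the proposal's actual contribution rules are not detailed here, so the schedule below is illustrative only:

```python
# Illustrative build-up arithmetic for the NAIC committee proposal
# described above: $40 billion target, capped at $2 billion per year.
# The even annual contribution is an assumption, not a proposal term.

TARGET = 40.0      # industrywide reserve target, $ billions
ANNUAL_CAP = 2.0   # maximum contribution per year, $ billions

def years_to_target(annual_contribution):
    """Years needed to reach the target at a given (capped) annual pace."""
    contribution = min(annual_contribution, ANNUAL_CAP)
    reserve, years = 0.0, 0
    while reserve < TARGET:
        reserve += contribution
        years += 1
    return years

print(years_to_target(2.0))  # at the cap: the proposal's 20-year period
print(years_to_target(1.0))  # slower contributions stretch the schedule
```

At the $2 billion annual cap the reserve reaches $40 billion in exactly 20 years, which is why the cap and the 20-year period in the proposal are two views of the same constraint.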
However, there are potential tradeoffs associated with allowing insurers to establish tax-deductible reserves for potential catastrophes. In particular, permitting tax-deductible reserves would result in lower federal tax receipts according to industry analysts we contacted. Although supporters counter that permitting reserves would enhance industry capacity and thereby reduce the federal government’s catastrophe-related costs over the long term, the size of any such benefit is unknown. In addition, Treasury staff said that there would be no guarantee that insurance companies would actually increase the capital available to cover catastrophic risks. Rather, the officials said that insurers might use the reserves to shield a portion of their existing capital (or retained earnings) from the corporate income tax. Furthermore, reinsurance association officials said that insurance companies could inappropriately use tax-deductible reserves to manage their financial statements. That is, insurers could increase the reserves during good economic times and decrease them in bad economic times. In addition, Treasury staff expressed skepticism about the reliability of models used to predict the frequency and severity of catastrophes. Without reliable models, Treasury staff said that it would be difficult to determine the appropriate size of the catastrophe reserves. We note that insurers have developed sophisticated models to predict the frequency and severity of natural catastrophes such as hurricanes and that these models are currently considered more reliable than terrorism models. Finally, reinsurance association officials and an insurance industry analyst who supports tax-deductible reserves said that some insurers might reduce the amount of reinsurance coverage that they purchased if they were allowed to establish reserves. Because reserving would also convey tax advantages, some insurers might feel that they could limit the expense of purchasing reinsurance. 
To the extent that insurers reduced their reinsurance coverage in favor of tax-deductible reserves, the industry’s overall capacity would not necessarily increase. We also note that reinsurance is a global business and that reinsurers in other countries, particularly European countries and Bermuda, provide a significant amount of reinsurance for U.S. insurers. Since many European insurers in the countries we studied are already permitted to establish tax-deductible reserves (as described in the next section) and Bermuda reinsurers are not subject to an income tax, any potential enhancement of insurer capacity associated with granting U.S. insurers the authority to establish such reserves may be limited. European countries also face significant risks associated with natural catastrophes and terrorist attacks, and have developed a range of approaches to enhance insurers’ capacity to address catastrophic risks. For example, the six European countries we studied—France, Germany, Italy, Spain, Switzerland, and the United Kingdom—have developed a mix of government and private-sector approaches to covering natural catastrophe risk. In three of the countries, standard homeowner policies include mandatory coverage for natural catastrophes, and the government provides an explicit financial guarantee to pay claims in two of these three countries. The other three countries generally rely on insurance markets to provide natural catastrophe coverage. Concerning terrorism coverage, four of the six countries have established national terrorism programs, two of which are mandatory, wherein the national governments provide explicit financial guarantees to address the financial risks associated with terrorist attacks while the two remaining countries generally rely on insurance markets. 
As of the time of our review, all six countries allowed insurers to establish tax-deductible reserves to cover the costs associated with potential catastrophes, but there are significant variations in each country’s approach. Further, a new international accounting standard designed to prohibit the use of such catastrophe reserves may have a limited effect due to the way it is being implemented in Europe. Insurance for natural catastrophes in the six European countries we studied encompasses a range of structures—from mandatory coverage with state-backed guarantees to wholly private-sector coverage. Figure 5 provides an overview of how natural catastrophes are insured in the six selected European countries. In summary, France and Spain have developed national programs with mandatory coverage and unlimited state guarantees. Switzerland mandates natural catastrophe coverage, but the government does not provide an explicit financial commitment. Germany, Italy, and the United Kingdom do not offer national insurance programs for natural catastrophes. In France, the Catastrophes Naturelles (CatNat) program was started in 1982 in response to serious flooding in southern France. French law requires standard property insurance policies to include coverage for natural catastrophes. According to information from the French government, between 95 and 98 percent of the population has taken out this comprehensive insurance and thus benefits from CatNat coverage. To cover natural catastrophe risk, insurers collect a government-determined 12 percent premium surcharge from policyholders. Insurers may then choose to forgo reinsurance for natural catastrophes or purchase reinsurance from the private market or the Caisse Centrale de Réassurance (CCR), a state-backed company authorized by law to reinsure natural catastrophe risk. CCR offers unlimited reinsurance coverage that is guaranteed by the French government in the event that CCR exhausts its resources. 
However, a CCR official noted that insurance companies must transfer half of their natural catastrophe risk to CCR in order to be covered under the state guarantee. According to one insurance broker and a French Treasury official, most insurers in France reinsure their natural catastrophe risk through CCR to obtain the state guarantee coverage. Under the French program, the government must declare that an event qualifies as a natural disaster. According to information from the French government and a CCR official, the program is set up so that insurers manage policyholders’ claims because they have the best claims-paying experience and expertise. Coverage from CCR takes effect after insureds pay a certain deductible. Since the program was started in 1982, France has declared 110,000 natural disasters and paid €6.4 billion (about $8.6 billion) in compensation, over half of which was for floods. In 2001, the government introduced a program to encourage cities to implement loss prevention measures by increasing deductibles in the event of repeated natural disasters, such as floods, for cities without a prevention plan. In Spain, a state-owned entity called the Consorcio de Compensación de Seguros (Consorcio) provides coverage for natural catastrophe risks. Originally established to provide indemnity to victims from the Spanish Civil War, the Consorcio now provides coverage for catastrophic risks not specifically covered under private-sector insurance policies or when an insurance company cannot fulfill its obligations. According to a Consorcio official, natural catastrophe coverage is mandatory and automatically included in standard policies, and although Spanish law does not require the purchase of standard property insurance policies, most people do have insurance because banks require it as a condition of mortgages. As a result, most property is covered for natural catastrophes. 
The Consorcio uses data from private insurers and its own claims data to calculate the standard surcharge rate for different types of properties (such as housing, offices, industrial sites, and public works). As in France, insurers collect this surcharge from all policyholders’ property insurance premiums. Unlike in France, where insurers may use the surcharge collected to purchase reinsurance coverage from CCR or private reinsurers (or to cover the costs associated with retaining natural catastrophe risk), Spanish insurers must transfer the surcharge to the Consorcio on a monthly basis and in return receive a 5 percent collection commission that is tax deductible. The Consorcio’s catastrophe coverage protects the same property or persons to at least the same level as risks covered under the primary insurance policy from the private insurer. The Spanish government provides an unlimited guarantee in the event that the Consorcio’s resources are exhausted; however, the government guarantee has never been triggered. According to Consorcio and Spanish insurance industry officials, the Consorcio provides nearly all the natural catastrophe coverage in Spain. Even though private insurers have been allowed to provide natural catastrophe coverage since 1990, few, if any, do so. Because their risks would not be as geographically diversified as the Consorcio’s (since it provides coverage to policyholders across the country), private insurers would not be able to charge rates competitive with the Consorcio’s. In addition, a Consorcio official said that even if insurers provided policyholders with natural catastrophe coverage, the insurers would still have to pay the Consorcio surcharge. Unlike in France, no official government declaration of a disaster is required for this coverage to take effect. Coverage from the Consorcio is automatic whenever any of the specified catastrophes occurs. 
The Spanish system also differs from the French system in that, according to a Consorcio official, the Consorcio compensates policyholders directly for their losses. In 2003, the Consorcio paid about €143 million (about $192 million) in catastrophe losses. As in France, floods represent the highest percentage of the total natural catastrophe claims. Swiss law requires insurers to include coverage for natural catastrophes as an extension to all fire insurance contracts on buildings and contents. Insurers first integrated natural catastrophe coverage into fire insurance policies on a voluntary basis in 1953 after severe damage caused by avalanches. Since it was too expensive to insure those who lived in areas at high risk for avalanches, the insurance industry packaged all natural catastrophe risks together and attached this package to fire insurance policies. The natural catastrophe coverage became a requirement in law in 1992. In addition, Switzerland now has regulations controlling building in areas such as avalanche zones and flood plains. As in France and Spain, all policyholders pay a uniform premium rate for natural catastrophe coverage, which is part of the fire insurance premium. The standard premium amount, calculated by an actuarially based methodology, is also written into law but has not been revised or adjusted since 1993. Most property owners in Switzerland are required to have building insurance for fire and natural catastrophes. As a result of this mandatory coverage, most buildings in Switzerland are covered for these events. Coverage for building contents is generally optional in Switzerland, but according to Swiss insurance industry and government officials, most people also have this coverage. An insurance association official told us that earthquake risk was not originally included in the natural catastrophe package because at that time, earthquakes were considered uninsurable. 
According to a Swiss Insurance Association official, coverage for earthquakes is available from insurers in Switzerland as an additional optional policy, but not many people buy it. Although, unlike France and Spain, the Swiss government does not provide a state guarantee to cover losses from a major catastrophe, Swiss insurers have developed programs to share catastrophe losses. In some areas of Switzerland, state-run insurers provide building insurance. These state-run insurers have established a specialized reinsurance company to manage their natural catastrophe risk. According to Swiss government officials, the state-run insurers may purchase reinsurance coverage from the private market or this specialized reinsurance company. An insurance industry official said that this company, which provides coverage only to the state-run insurers, retains some of the risk and also purchases retrocessional coverage from the private market. Similarly, private insurers created the Elementarschadenpool, or Swiss Elemental Pool, to spread their natural catastrophe risk. A Swiss insurance association official said that the pool has also obtained reinsurance coverage for losses that exceed specified levels. As in France and Spain, the pool’s flood losses have exceeded the losses for other natural perils, according to an industry report. The governments in Italy, Germany, and the United Kingdom do not mandate, provide, or financially guarantee natural catastrophe insurance. In Italy and Germany, coverage for natural catastrophes, such as floods, is optional and only available from private insurers for additional premiums. According to an Italian insurance supervisory official, the property of private citizens is generally not covered by any kind of natural catastrophe insurance. The official also said that some medium- and large-sized businesses and, to a lesser extent, small businesses are covered against this risk in Italy. 
In Germany, regulatory and insurance officials said that coverage for a wide variety of natural catastrophes is generally available from private insurers in additional policies. However, the officials also said that few policyholders choose to purchase it and it may be difficult to obtain flood insurance, particularly in areas prone to repeated flooding. In the United Kingdom, coverage for a range of natural perils, including flood insurance, is generally included in standard property insurance policies; however, the premiums and terms of the policy reflect the property’s flood risk. According to British insurance association officials, insurance for natural perils is generally available from the private market and 99 percent of homeowners have coverage, including coverage for flood. Although Italy, Germany, and the United Kingdom do not have national catastrophe programs, according to industry and government officials, each country has discussed developing such programs in recent years largely in the context of providing enhanced flood coverage. However, no final decisions had been reached at the time of our review. Four of the six European countries we studied provide terrorism insurance that is backed by government guarantees (see fig. 6). Specifically, France, Spain, Germany, and the United Kingdom have established national programs in conjunction with the insurance industry to provide terrorism coverage. In contrast, Italy and Switzerland do not have national terrorism insurance programs and private companies provide the limited coverage that is available. In France, primary insurers that offer property insurance are required by law to provide terrorism insurance and coverage is generally included in standard insurance policies, which means that all commercial properties are covered. 
However, after the September 11 attacks, reinsurers cancelled terrorism coverage, and many primary insurers that could not obtain reinsurance chose to stop offering commercial property insurance to avoid the mandatory terrorism coverage. According to French insurance industry officials, the French government responded to this situation by temporarily requiring the extension of all contracts, but immediately began negotiations with the insurance industry to develop a more permanent solution. The Gestion de l’Assurance et de la Réassurance des Risques Attentats et Actes de Terrorisme (GAREAT) pool, a nonprofit organization, was created based on the existing administrative structures of the insurance associations and the natural catastrophe program already in place in France. Completed on December 28, 2001, GAREAT was the first national terrorism pool organized with state support after the September 11 attacks. In 2002, GAREAT paid two regional terrorism claims, totaling €7 million (about $9.4 million), that resulted from attacks on buildings intended to influence state policy. Claims in 2003 amounted to €0.25 million (about $336,000). GAREAT reinsures terrorism and business interruption risks for commercial properties that exceed €6 million (about $8 million) in insured value. The two insurance associations in France require their members to participate in GAREAT. Over 100 companies participate in the pool. Members of GAREAT must transfer a certain percentage of their terrorism risk into the pool. Insurers may charge policyholders whatever premium they consider appropriate; the insurers then pay 6, 12, or 18 percent of this premium, depending on the size of the risks insured, to obtain reinsurance coverage from the pool. In 2003, GAREAT earned €210 million (about $282 million) in premiums on 80,000 policies. 
In the event of a terrorist act that meets the definition in the French Criminal Code, the French state has agreed to provide an unlimited state guarantee after a certain industry retention level through the end of 2006 (see fig. 7). The unlimited state guarantee is provided through the same government-backed reinsurer that guarantees natural catastrophe claims, CCR. In Spain, coverage for terrorism risk is handled in the same way as natural catastrophe risk—it is included in standard property insurance policies and all policyholders pay a premium surcharge on their primary insurance contracts to fund coverage for both risks. Spain’s state-owned company, the Consorcio, provides policyholders direct compensation for terrorism losses as well as natural catastrophe losses. The state offers an unlimited guarantee, which has never gone into effect, if claims exceed the Consorcio’s resources. Between 1987 and 2003, terrorism claims represented 9.9 percent of all losses paid by the Consorcio. The Consorcio is in the process of paying claims resulting from the terrorist attack on a Madrid commuter train on March 11, 2004. According to information from the Consorcio, as of January 2005, €35 million (about $47 million) in claims had been paid, including benefits for deaths, permanent disability, and property damage. Germany also has a national terrorism insurance program with a state guarantee, although it differs from the Spanish and French programs in that insureds have the option of purchasing the coverage and the state guarantee is limited. After the September 11 attacks, most insurance companies excluded terrorism coverage from their commercial policies and the German government came under pressure from businesses as well as insurance companies to find a solution to the lack of terrorism insurance, according to insurance officials we contacted. 
One official said that industry representatives feared that German businesses were at a competitive disadvantage because terrorism insurance was available in other European countries. As a result, the German government, insurance industry, and business groups collaborated to form Extremus Versicherungs-AG (Extremus), a specialized insurance company that covers only terrorism risk. Extremus provides voluntary coverage for commercial and industrial properties and business interruption losses in Germany with an insured value above €25 million (about $34 million). The premium rate for coverage from Extremus is a standard rate based on the value of the property insured, with no differentiation according to risk or location of the property. Unlike the French and Spanish programs, the guarantee from the German government is capped at €8 billion (about $10.7 billion) and would take effect after insurers and reinsurers had absorbed €2.0 billion (about $2.7 billion) in losses (see fig. 8). The total capacity of the program therefore is €10 billion (about $13 billion). According to an Extremus official, the state guarantee was limited to 3 years, and the government will have to decide whether to continue the guarantee after 2005. Demand for terrorism coverage from Extremus has been much lower than expected, according to Extremus officials. Extremus had a goal of collecting €300 million (about $403 million) in premiums in its first year of business, a goal that was to rise to €500 million (about $671 million) in following years, but it collected only €105 million (about $141 million). In addition, many of the contracts were from smaller businesses. As a result, Extremus renegotiated its reinsurance contracts and the level of the state guarantee was reduced in March 2004. Extremus originally planned to phase out the state guarantee by building up sufficient reserves to handle potential claims. However, premium income has been too low to build a substantial reserve. 
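The layered capacity described above (insurers and reinsurers absorb the first €2.0 billion of losses, followed by the €8 billion state guarantee, for €10 billion in total) can be sketched as a simple layer allocation; the loss amounts used in the example are hypothetical.

```python
INDUSTRY_RETENTION = 2.0e9  # absorbed first by insurers and reinsurers
STATE_GUARANTEE = 8.0e9     # capped German government layer
TOTAL_CAPACITY = INDUSTRY_RETENTION + STATE_GUARANTEE  # 10 billion euros

def extremus_layers(loss_eur: float) -> dict:
    """Split a terrorism loss across the industry and state layers.

    Losses above total capacity are shown as uncovered; the euro loss
    figures passed in are hypothetical examples, not program data.
    """
    industry = min(loss_eur, INDUSTRY_RETENTION)
    state = min(max(loss_eur - INDUSTRY_RETENTION, 0.0), STATE_GUARANTEE)
    uncovered = max(loss_eur - TOTAL_CAPACITY, 0.0)
    return {"industry": industry, "state": state, "uncovered": uncovered}

# A hypothetical 5 billion euro loss: industry absorbs 2 billion and the
# state guarantee covers the remaining 3 billion.
print(extremus_layers(5.0e9))
```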
Extremus continues to struggle to meet its goals, as five large clients did not renew their policies in 2004. Representatives from an organization representing German businesses told us that several factors may have contributed to low demand, including the perception of many insureds that they were at low risk of a terrorist attack and that Extremus coverage would not be cost-effective; gaps in Extremus coverage (for example, Extremus only covers properties within Germany and excludes liability coverage); and competition from other international insurers and reinsurers that could offer coverage similar to Extremus. An official from Extremus told us that the company is considering making changes to its underwriting based on these concerns, such as covering business interruption risks for subsidiaries of German companies located in other European Union countries if an attack occurred in one of these countries. In the United Kingdom, the Pool Reinsurance Company, Limited (Pool Re) provides terrorism coverage; the program is similar to the French and Spanish programs in that the state provides an unlimited guarantee but also similar to the German system in that participation by insureds is voluntary. Pool Re was established in 1993 by the insurance market with support from the British government in response to restrictions on the availability of reinsurance following several terrorism incidents in London related to the situation in Northern Ireland at that time. Pool Re is a mutual insurance company that provides reinsurance coverage only for commercial property damage and business interruption resulting from a terrorist act. While terrorism coverage is optional in the United Kingdom and membership in Pool Re is voluntary, Pool Re members are required to provide terrorism coverage to policyholders if requested, and members must reinsure all of their terrorism coverage with Pool Re. 
Similarly, insureds cannot select which properties in the United Kingdom are insured for terrorism. If they choose to purchase terrorism insurance, they must insure either all of their properties or none of them. According to one Pool Re official, this policy prevents adverse selection from occurring (that is, the risk that Pool Re’s portfolio would include only the riskiest properties and not be diversified). Pool Re’s rates are determined by geographic zone in the United Kingdom. For example, rates are higher for properties located in London than for properties in other parts of the country. Business interruption coverage is offered at a standard rate throughout the country. Members are free to set their own terrorism premiums for their underlying policies. Prior to the September 11 attacks, Pool Re coverage was limited to acts of terrorism resulting in fire and explosion, according to a Pool Re official. However, after the September 11 attacks, reinsurers began excluding damage caused by perils other than fire and explosion. As a result, Pool Re agreed, in consultation with the U.K. Treasury, members, and insurance industry participants, to expand its coverage to include other conventional perils beyond fire and explosion and also the risk of nuclear, biological, and chemical attacks. In the event of an attack, the British government issues a certificate determining the event to be an act of terrorism. Coverage from Pool Re takes effect after members pay individual retention levels, which are calculated as proportions of an industrywide figure based on the degree of members’ participation in Pool Re. For 2004, the industrywide retention level is £100 million (about $194 million). If the resources of Pool Re are exhausted, the British government provides an unlimited guarantee. Pool Re pays the government a premium for this guarantee and would have to repay the Treasury any amount received from the guarantee. This guarantee has never been triggered. 
Since 1993, Pool Re has paid a total of £612 million (about $1.2 billion) and currently has about £1.5 billion in reserves (about $2.9 billion). The largest event for which Pool Re paid claims occurred in 1993 and resulted in payments totaling £262 million (about $509 million). Italy and Switzerland do not have national terrorism programs, and the availability of terrorism insurance is limited. According to a study commissioned by the Organization for Economic Cooperation and Development (OECD), the majority of insurance policies covering damage to high-value properties in Italy exclude terrorism risk. The OECD report also noted that additional terrorism insurance is fairly restricted and very expensive. According to a Swiss insurance association official, terrorism risk is excluded from standard fire insurance policies above a certain value in Switzerland (set at 10 million Swiss francs or about $8.8 million). Each of these countries has considered the need for a national terrorism insurance program. For example, the Italian National Insurance Companies Association submitted a proposal to the government in 2003 to create an insurance/reinsurance pool, but it was later withdrawn. As of 2004, regulations, tax law, and accounting standards in the six European countries we reviewed allowed insurance companies to establish tax-deductible reserves for potential losses associated with catastrophic events. These tax-deductible reserves are often called catastrophe or equalization reserves. However, each country differs in the way it allows reserves to be set up and used (see fig. 9). Following are brief descriptions of each European country's approach for establishing and maintaining catastrophe and equalization reserves: According to an insurance industry official, French accounting standards and tax law allow insurance companies to establish both catastrophe and equalization reserves. 
A French insurance industry participant told us that these reserves can be used for natural events such as storms and hail, but also for nuclear, pollution, aviation, and terrorism risks. The industry officials also said that under French accounting standards and tax law, the maximum limit on the tax- deductible amount that can be put into these reserves is 75 percent of the income for each year, provided that the total amount of the reserve does not exceed 300 percent of annual income. The funds reserved each year are released after 10 years if not used. However, neither the regulator nor the French accounting standards provides guidance on when money can be withdrawn from the reserves. German commercial law requires insurance companies to establish catastrophe and equalization reserves for catastrophic risk, according to German accounting firm officials. These officials said that catastrophe reserves cover losses from nuclear, pharmaceutical liability, and terrorism risks but cannot be used for natural catastrophes. Instead, insurance companies can use equalization reserves to manage losses from natural catastrophes. To prevent abuse of the reserves, the accounting firm officials said that German accounting standards contain specific guidance for calculating the additions, withdrawals, and limits on both catastrophe and equalization reserves for different lines of businesses. The officials also said that under German tax law, these reserves are tax deductible. According to an Italian government official, the insurance supervisory authority in Italy requires insurance companies to establish catastrophe reserves for nuclear risk and natural catastrophes such as earthquakes and volcanic eruptions, but reserves are not permitted for terrorism risk. The official also said that equalization reserves are required for hail and other climate risks. 
Under Italian accounting standards and tax law, the government official said that catastrophe and equalization reserves are built through tax-deductible contributions. In addition, the official noted that although there are specific limits on the total amount companies can hold in reserve for each type of risk, currently there are no regulations for determining the amounts of additions and withdrawals for these reserves. According to Spanish government and insurance industry officials, Spanish insurance regulators allow the state-owned insurer, the Consorcio, and private insurance companies to establish catastrophe reserves for catastrophic events and equalization reserves for other liability risks such as automobile. However, as previously discussed, the Consorcio effectively handles all natural catastrophe and terrorism risks, and therefore, insurance industry officials told us that private insurers do not need catastrophe reserves. According to Spanish tax law and accounting standards, catastrophe reserves are tax deductible and are accrued in the liability accounts on the balance sheet. Spanish accounting firm officials said that the funds in the Consorcio’s catastrophe reserve are tax deductible to a certain limit. Once the reserved funds exceed this limit, they are taxed. The accounting firm officials also said that there is no regulation controlling the amount of funds the Consorcio has to maintain in its reserve and no formula for contributions to and withdrawals from the reserve. However, a Consorcio official told us that the Consorcio’s general practice is to maintain an amount in reserve equal to three times the highest amount of claims it had ever paid in a year. 
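The French contribution limits described earlier (annual tax-deductible additions of up to 75 percent of income, with the total reserve capped at 300 percent of annual income) illustrate how such rules constrain what an insurer can set aside in a given year. The sketch below applies those two limits; the euro figures used are hypothetical.

```python
def max_reserve_addition(annual_income: float, current_reserve: float) -> float:
    """Maximum tax-deductible addition to a French equalization reserve.

    Per the rules described in the text: up to 75 percent of the year's
    income may be added, provided the total reserve does not exceed
    300 percent of annual income. The figures used below are hypothetical.
    """
    annual_cap = 0.75 * annual_income
    headroom = max(3.00 * annual_income - current_reserve, 0.0)
    return min(annual_cap, headroom)

# Hypothetical insurer: 10 million euros of income, 28 million already
# reserved. The 300 percent ceiling (30 million) leaves only 2 million
# of headroom, so the addition is capped there.
print(max_reserve_addition(10e6, 28e6))  # prints 2000000.0
```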
According to a Swiss accounting firm official, under Swiss tax and accounting standards, insurance companies are allowed to establish tax- deductible catastrophe reserves provided the Federal Office of Private Insurance (the Swiss insurance supervisory body) approves a justification of the reserve. The official said that currently, there are no explicit regulations on how the contributions, withdrawals, or total amount of reserves should be calculated. Instead, the Swiss supervisory body provides guidance on a case-by-case basis on how to increase and withdraw reserves. According to government officials, the insurance supervisory authority is currently developing new solvency standards, which include more explicit rules to ensure consistency and standardization in calculating contributions and balances of the reserves. Although Swiss tax and accounting standards generally allow catastrophe reserves and Swiss insurance companies could establish these reserves on the individual company level, insurance industry officials said that not many companies that are organized into insurance groups have them on a consolidated level (for example, the reserves are not included in the combined financial statements of an insurance group, which may have individual affiliates or subsidiaries in many different countries). According to the accounting firm official, these reserves would be eliminated on the consolidated level if Swiss GAAP FER or another internationally accepted accounting framework that prohibits such reserves is used. In the United Kingdom, the Financial Services Authority (FSA), the regulatory body for the financial services industry, requires insurance companies to establish equalization reserves for property and other types of insurance, according to a British accounting firm official. This official said that under U.K. accounting standards and tax law, these reserves are tax deductible and are accrued in the liability accounts of the balance sheet. 
The Interim Prudential Sourcebook for Insurers, published by FSA, contains detailed accounting rules for the calculation of the reserve, including the contributions, withdrawals, and maximum balances of the equalization reserves. However, the accounting firm official said that U.K. accounting standards do not permit a separate catastrophe reserve. In March 2004, as part of an effort to achieve global convergence of accounting standards, the International Accounting Standards Board (IASB) issued International Financial Reporting Standard 4 Insurance Contracts (IFRS 4), which includes guidance that effectively prohibits the use of catastrophe and equalization reserves. Under the new international accounting standards, loss reserves can only be accrued if the event has occurred and the related losses are estimable. IFRS 4 presents several arguments in favor of prohibiting the use of reserves for future catastrophic events. For example, provisions for such reserves do not necessarily qualify as liabilities because the losses have not occurred yet and treating them as if they had could diminish the relevance and reliability of an insurer’s financial statements. As previously mentioned, some analysts argue that reserves would ensure funds were available to pay claims in the event of a catastrophe. However, IASB argues that the general purpose of financial reporting is not to enhance solvency, but to provide information that is useful to a wide range of users for economic decisions. In November 2004, the European Union (EU) endorsed IFRS 4, and specified that only companies listed on their respective national stock exchanges, as well as companies with listed debt, be required to prepare their consolidated financial statements (for example, the combined financial statements of an insurance group, which may have individual affiliates or subsidiaries in many different countries) in accordance with IFRS 4. 
However, the EU gives member states the option of permitting or requiring these individual affiliates or subsidiaries to follow IFRS 4 requirements in preparing their individual financial statements. EU countries also have the option of allowing unlisted companies to follow these standards. For example, according to government and Consorcio officials, Spanish insurance regulators have decided to exercise this option and prohibit the Consorcio—an unlisted company—from following IFRS 4. According to the EU regulation, the designated insurance companies are required to follow IFRS 4, starting with financial statements prepared on or after January 1, 2005. European officials we contacted in some cases expressed differing views on the elimination of catastrophe and equalization reserves under IFRS 4. A European Commission official indicated that European insurance companies should be able to cope with the elimination of catastrophe and equalization reserves because individual companies could still establish and maintain the reserves for tax purposes, but the reserves would be eliminated in the financial statements on a consolidated level. In the consolidation for financial reporting, the reserves would be moved from liabilities to equity. Representatives from a large German accounting firm said that German insurance companies would most likely prepare two sets of financial statements. One would exclude reserves and comply with the international accounting standards, and the other would include the reserves and be submitted to the taxation authorities, similar to U.S. practices. However, insurance industry participants in some of the European countries that we reviewed expressed the following concerns about the provision eliminating reserves: Insurance industry officials in France stated that reserving is essential as a precaution for coverage of natural catastrophe risks. 
In addition, representatives from a large German accounting firm said that reserves provide transparency in financial reporting and help users of financial statements to better understand insurers' risk management practices. One insurance industry representative expressed concern that having two sets of financial statements would result in complexities and ambiguities in financial reporting and national tax regulations and policies. Other officials said they are concerned that the local taxation authorities might follow IFRS 4 and change their policies to discontinue the use of tax-deductible reserves. Insurers might have to respond by purchasing reinsurance in order to obtain the coverage for catastrophic risks that the reserves would have provided. As of the time of this review, we were not aware of any changes in these countries' regulations or tax laws regarding the use of catastrophe reserves for tax purposes. The insurance industry may not be able to withstand major catastrophic events without federal government intervention. Although the industry has improved its ability to respond to the losses associated with natural catastrophes (at least those on the scale of the 2004 hurricane season) without widespread market disruptions, industry capacity has not yet been tested by a major catastrophe (such as an event with an annual probability of occurrence of 0.4 to 1 percent, that is, a 250-year to 100-year event). Such a catastrophe or series of catastrophes could result in significant disruptions to insurance markets. In addition, it is not clear how state governments and insurers would react to such a scenario, restore stability to insurance markets, and ensure the continued availability of critical insurance coverage, or whether they would have the capacity to do so. 
Moreover, because of its size and financial resources, the federal government could be called upon to provide financial assistance to insurers and policyholders in addition to meeting traditional obligations, such as repairing public facilities and providing temporary assistance to affected individuals. It is also not yet clear to what extent the catastrophe bond market or authorizing insurers to establish tax-deductible reserves could materially enhance industry capacity and thereby mitigate financial risks to the federal government and others. Although several insurers use catastrophe bonds to address the most severe types of catastrophic risk, the bonds are not yet widely accepted in the insurance industry due to cost and other factors. In addition, some industry participants question the viability of the catastrophe bond market because no catastrophe bond has ever been triggered, even by the 2004 hurricane season. Further, industry participants do not consider catastrophe bonds feasible for terrorism risks at this time. Although supporters believe that authorizing tax-deductible reserves could enhance industry capacity, such a policy change would also reduce federal tax revenue and may not materially enhance capacity since the reserves may substitute for reinsurance. In response to the financial and market risks associated with natural catastrophes and terrorism attacks, major European countries have, with important exceptions, generally adopted policies that rely on national government intervention to enhance industry capacity to a greater extent than is the case in the United States. France, Spain, and to some extent Switzerland (but not Germany, the United Kingdom, and Italy) have adopted national programs to address a range of natural catastrophe risk, whereas the United States government does not have a comparable program (although it does have a flood insurance program as discussed in app. II). 
Further, all six countries we studied use their tax codes to encourage insurers to establish reserves for potential catastrophic events. A key similarity between Europe and the United States is that four of the six countries we reviewed have adopted national programs to address terrorism risk similar in many respects to TRIA. One important difference is that TRIA was designed as a temporary program that was expected to be discontinued when a private market for terrorism insurance could be established, whereas the European programs are generally not expected to be discontinued. European approaches to addressing natural catastrophe and terrorism risks illustrate benefits and drawbacks that may be useful for consideration by policymakers. The mandatory national programs for natural catastrophe risk in Spain and France, for example, help ensure that coverage is widely available for such risks, particularly in the wake of catastrophic events. However, such programs also involve significant government intervention in insurance markets, such as setting premium rates, which may not be actuarially based. Consequently, the capability of governments and insurers to control risk-taking by policyholders and minimize potential government liabilities may be limited, although some governments have tried to minimize this liability by implementing loss prevention programs. Concerning terrorism insurance, the mandatory national programs in France and Spain ensure that most policyholders have such coverage, although these programs also involve government intervention in setting premium rates and in monitoring risk-taking as is the case for natural catastrophe risk. In contrast, the purely voluntary national terrorism program in Germany and the private sector approaches in Switzerland and Italy have not yet been successful in ensuring that policyholders have terrorism coverage. 
Many policyholders choose not to purchase terrorism coverage because they view their risks as acceptably low or the premiums for terrorism coverage as too high (see app. II for a similar discussion regarding TRIA). We provided a draft of this report to the Department of the Treasury and the National Association of Insurance Commissioners. Treasury provided technical comments on the report that were incorporated as appropriate. NAIC’s Chief Financial Officer commented that the report was informative and accurate. In addition, we provided the relevant sections of a draft of this report to government and industry contacts in each of the European countries we studied and incorporated their comments where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after the report date. At that time, we will provide copies of this report to the Department of the Treasury, the National Association of Insurance Commissioners, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov or Wesley M. Phillips, Assistant Director, at phillipsw@gao.gov. GAO staff who made major contributions to this report are listed in appendix IV. This report provides information on a range of issues to assist the committee in its oversight of the insurance industry, particularly in light of the Terrorism Risk Insurance Act’s (TRIA) pending expiration. 
Our objectives were to (1) provide an overview of the property-casualty insurance industry’s current capacity to cover natural catastrophic risk and discuss the impacts that four hurricanes in 2004 had on the industry; (2) analyze the potential of catastrophe bonds and permitting insurance companies to establish tax-deductible reserves to cover catastrophic risk to enhance private-sector capacity; and (3) describe the approaches six selected European countries—France, Germany, Italy, Spain, Switzerland, and the United Kingdom—have taken to address natural and terrorist catastrophe risk, including whether these countries permit insurers to use tax-deductible reserves for such events. We also provide information on insurers’ financial exposure to terrorist attacks under TRIA and the extent to which catastrophe risks are not covered in the United States. These issues are discussed in appendix II. Our general methodology involved meeting with a range of private-sector and regulatory officials to obtain diverse viewpoints on the capacity of the insurance industry, status of efforts to securitize catastrophe risks, and the approaches taken in European countries to address catastrophe risk. We met with or received written responses from representatives of (1) the U.S. 
Department of the Treasury; (2) the National Association of Insurance Commissioners (NAIC); (3) a state insurance regulator; (4) state catastrophe insurance and fund authorities, including the California Earthquake Authority, Florida Hurricane Catastrophe Fund, and the Texas Windstorm Insurance Association; (5) national finance or economic ministries in Europe; (6) national insurance regulators in Europe; (7) the European Commission; (8) the Bermuda Monetary Authority; (9) the International Accounting Standards Board; (10) large insurers and reinsurers based in the United States, Europe, and Bermuda; (11) Citizens Property Insurance Corporation; (12) ratings agencies; (13) modeling firms; (14) law firms; (15) academics; (16) the American Academy of Actuaries; (17) the Insurance Services Office; (18) U.S. insurance and reinsurance trade associations; (19) global accounting firms; (20) European insurance associations and a Bermuda insurance association; (21) European business or property associations; (22) European catastrophe insurance programs; (23) the Organization for Economic Cooperation and Development; (24) the International Chamber of Commerce; (25) Lloyd's; and (26) a consumer group. We also reviewed our previous work on insurance and catastrophe bonds and data and reports provided by private-sector and European government sources. Even though we did not have audit or access-to-records authority for the private-sector entities or foreign organizations and governments, we obtained extensive testimonial and documentary evidence. We also obtained estimates of the insured losses and claims resulting from the 2004 hurricanes from the Florida Office of Insurance Regulation. We obtained data on the issuance and outstanding value of the catastrophe bond market from Swiss Re Capital Markets. We did not verify the accuracy of data obtained from these organizations, but corroborated the information where possible with other sources. 
The information on foreign law in this report does not reflect our independent legal analysis, but is based on interviews and secondary sources. To respond to the first objective, we obtained data on insurance industry capacity from the Insurance Services Office and A.M. Best, the leading sources for data on the insurance industry. We asked these organizations and U.S. insurance companies, reinsurance companies, domestic and foreign insurance trade associations, rating agencies, state catastrophe authorities, and academic experts their views on insurance industry capacity, the difficulties of measuring insurance industry capacity, the implications and limitations of industry surplus data, the role of the Bermuda insurance market and state insurance funds and authorities in providing catastrophic insurance coverage, and the impact the 2004 hurricanes had on the insurance industry in Florida, and other issues. We also reviewed our previous report on insurance industry capacity. To respond to the second objective, we asked a reinsurance company and an insurance broker for the latest numbers on the kinds and amounts of catastrophe bonds issued and outstanding. We also talked to various organizations about the extent to which they use or do not use catastrophe bonds and why, the portion of the market for catastrophe risk that is covered by catastrophe bonds, and other methods of transferring catastrophe risk. Further, we obtained information about developing catastrophe bonds to cover terrorism risk; regulatory, tax, and accounting influences on catastrophe bonds; and views on the advantages and disadvantages of tax-deductible catastrophe reserves. We also reviewed our previous reports on catastrophe bonds. To respond to the third objective, we interviewed representatives of various national, regional, international, private, and public-sector organizations in the six countries we studied. 
We gathered documentary and testimonial evidence on laws, regulations, and practices related to catastrophe insurance and catastrophe reserving in each country and compared and contrasted information obtained from each country. We also interviewed international and regional organizations and asked representatives to assess the impact of International Accounting Standards on European countries’ reserving policies. We did not determine the effect of tax-deductibility on the overall tax burden imposed on insurance companies in these countries, or whether the deductibility provided incentives to create reserves. We conducted our work between February 2004 and January 2005 in Florida, New York, Washington, D.C., Belgium, France, Germany, Spain, Switzerland, and the United Kingdom. Our work was done in accordance with generally accepted government auditing standards. This appendix provides information from our previous reports and other sources on (1) insurers’ financial exposures to terrorist attacks under the Terrorism Risk Insurance Act (TRIA) and (2) the extent to which natural catastrophe and terrorism risks may be uncovered in the United States. Congress enacted TRIA in 2002 to ensure the continued availability of terrorism insurance in the United States after the September 11 attacks. Under TRIA, the Department of the Treasury (Treasury) would reimburse insurers for a large share of the losses associated with certain acts of foreign terrorism that occur during the term of the act. TRIA caps the federal government’s and the industry’s exposure to terrorist attacks at $100 billion annually. 
TRIA also requires that all insurers selling commercial lines of property-casualty insurance make available coverage for certain terrorist events and defines "make available" to mean that the coverage must be offered for insured losses arising from certified terrorist events and not differ materially from the terms, amounts, and limitations applicable to coverage for other insured losses. The act's provisions are set to expire on December 31, 2005, but Congress is currently considering proposals to extend that date. Under TRIA, primary insurers have assumed responsibility for the financial consequences of terrorist attacks up to the levels specified in the act, while the federal government is responsible for 90 percent of losses above those levels up to $100 billion annually. In 2005, primary insurers' financial exposure is limited to 15 percent of their direct earned premiums (DEP), and they are responsible for 10 percent of losses above that amount while the federal government is responsible for the remaining 90 percent. Determining an individual insurer's financial exposure depends on varying scenarios of the potential costs associated with terrorist attacks (for example, to what extent the cost of the attack would exceed 15 percent of an insurer's DEP and the insurer's 10 percent share of any losses beyond that amount). Since TRIA's "make available" provisions do not apply to reinsurers, these companies have discretion in deciding how much terrorism coverage to offer to primary companies. As we have previously reported, available evidence indicates that reinsurers have cautiously reentered the market for terrorism insurance and are offering coverage up to the deductible (percentage of DEP) limits and 10 percent share specified in TRIA. 
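The loss-sharing formula just described can be sketched as a short calculation. This is a simplified illustration of the 2005 parameters only; the function name and dollar figures are hypothetical, and details such as the $100 billion annual cap are ignored:

```python
def tria_split(direct_earned_premium, insured_loss,
               deductible_rate=0.15, insurer_share=0.10):
    """Illustrative split of an insured terrorism loss between a primary
    insurer and the federal government under TRIA's 2005 parameters."""
    # The insurer's deductible is 15 percent of its direct earned premiums (DEP).
    deductible = deductible_rate * direct_earned_premium
    # Losses up to the deductible fall entirely on the insurer; above it,
    # the insurer pays 10 percent and the federal government pays 90 percent.
    excess = max(insured_loss - deductible, 0.0)
    insurer_pays = min(insured_loss, deductible) + insurer_share * excess
    federal_pays = (1.0 - insurer_share) * excess
    return insurer_pays, federal_pays

# Hypothetical example, figures in millions of dollars: an insurer with
# $1 billion in DEP facing a $400 million insured loss.
insurer, federal = tria_split(1_000, 400)
# deductible = 150; the insurer pays 150 + 10% of 250 = 175,
# and the federal government pays 90% of 250 = 225.
```

A loss at or below the deductible (for example, $100 million against the same DEP) would be borne entirely by the insurer in this sketch.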
However, we have previously reported that available evidence also suggests that few primary companies are buying this reinsurance to cover deductibles and co-pays because—as discussed next—many of their customers choose not to buy terrorism insurance or the primary companies consider reinsurance premiums to be too high. In the absence of TRIA, we have reported that reinsurers may not return to the terrorism insurance market, thereby further limiting their liability. Insurers we contacted stated that they cannot estimate potential losses from terrorism without a pricing model that can estimate both the frequency and severity of terrorist attacks. Reinsurance officials said that current models of risks for terrorist events do not have enough historical data to dependably forecast timing and severity, and therefore, are not reliable. A significant percentage of individuals and businesses lack coverage for some catastrophic events, even though protection is available from a variety of sources. For example, the California Earthquake Authority (CEA) estimates that about 15 percent of California residents purchase earthquake insurance. As shown in figure 10, an Insurance Services Office (ISO) study found that consumers have expressed a number of reasons for deciding not to purchase earthquake insurance in California, including the beliefs that they are not at risk, premiums and deductibles are too high, and the federal government would provide financial assistance in the event of a disaster. Insurers with whom we spoke expressed similar views on why their customers do not purchase certain types of catastrophic coverage. We note that earthquake insurance is voluntary in California, whereas participation in the Florida Hurricane Catastrophe Fund (FHCF) is mandatory for Florida insurers and mortgage lenders require that homeowners and businesses purchase wind protection. Consequently, most homeowners and businesses in Florida have wind coverage. 
Further, a significant percentage of flood risk in the United States remains uncovered, although the National Flood Insurance Program was enacted to increase the availability of insurance for homeowners in areas at high risk for floods. The Federal Emergency Management Agency (FEMA), which administers the program, estimates that one-half to two-thirds of structures in special flood hazard areas do not have flood insurance coverage because the uninsured owners either are unaware that homeowners insurance does not cover flood damage or do not perceive the flood risk to which they are exposed as serious. Flood insurance is required for some of these properties, but the level of noncompliance with this requirement is unknown. However, as we have previously reported, there are indications that some level of noncompliance exists. For example, an August 2000 study by FEMA's Office of Inspector General examined noncompliance for 4,195 residences in coastal areas of 10 states and found that 416—10 percent—were required to have flood insurance but did not. Finally, despite the availability of terrorism coverage under TRIA, limited industry data suggest that a significant percentage of commercial policyholders are not buying terrorism insurance, perhaps because they perceive their risk of losses from a terrorist act as relatively low. Limited but consistent results from industry surveys suggest that from 10 to 30 percent of commercial policyholders are purchasing terrorism insurance. However, a more recent study estimates that nearly 50 percent of commercial property owners had purchased terrorism insurance by mid-2004. According to industry experts, many policyholders with businesses or properties not located near major urban centers or in possible high-risk locations are not buying terrorism insurance because they perceive themselves at low risk for terrorism and thus view any price for terrorism insurance as high relative to their risk exposure. 
Some industry experts are concerned that adverse selection—where those most at risk from terrorism are generally the only ones buying terrorism insurance—may be occurring. The potential negative effects of low purchase rates would become evident only in the aftermath of a terrorist attack and could include more difficult economic recovery for affected businesses without terrorism coverage. This appendix describes the structure of catastrophe bonds and certain tax, regulatory, and accounting issues that might have affected the use of catastrophe bonds as described in our previous reports. We have also updated some of the information from those reports. As discussed in our previous reports, a catastrophe bond offering is typically made through a special purpose reinsurance vehicle (SPRV) that may be sponsored by an insurance or reinsurance company (see fig. 11). The SPRV issues bonds or debt securities for purchase by investors. The catastrophe bond offering defines a catastrophe that would trigger a loss of investor principal and, if triggered, a formula to specify the compensation level from the investor to the SPRV. The SPRV holds the funds from the catastrophe bond offering in a trust in the form of Treasury securities and other highly rated assets. The SPRV then deposits the payments from the investors as well as the premium income from the company into a trust account. The premium paid by the insurance or reinsurance company and the investment income on the trust account provide the funding for the interest payments to investors and the costs of running the SPRV. If no event occurs that triggers the bond’s provisions and it matures, the SPRV pays investors the principal and interest that they are owed. 
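The maturity payout described above can be sketched as follows. This is a hypothetical simplification: the trigger condition and the loss formula that determines how much principal is forfeited are defined by each offering's terms, and the treatment of interest after a triggering event varies by offering:

```python
def cat_bond_maturity_payout(principal, interest, triggered, loss_fraction=1.0):
    """Illustrative investor payout when a catastrophe bond matures.

    If the defined catastrophe occurred, the offering's formula (here a
    flat loss_fraction) determines how much principal passes to the SPRV
    to compensate the sponsoring insurer or reinsurer; otherwise investors
    receive their full principal plus interest.
    """
    if triggered:
        principal = principal * (1.0 - loss_fraction)
    return principal + interest

# No triggering event: investors are repaid in full.
payout_no_event = cat_bond_maturity_payout(100.0, 8.0, triggered=False)  # 108.0
# Triggering event with a 60 percent principal reduction under the formula.
payout_event = cat_bond_maturity_payout(100.0, 8.0, triggered=True,
                                        loss_fraction=0.6)               # 48.0
```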
Catastrophe bonds typically are offered only to qualified institutional investors under Securities and Exchange Commission (SEC) Rule 144A; produce relatively high returns, either equaling or exceeding the returns on some comparable fixed-rate investments such as high-yield corporate debt; typically do not receive investment-grade ratings because bondholders face potentially large losses on the securities; and typically cover event risks that are considered the lowest probability and highest severity. Most catastrophe bonds are issued through SPRVs located offshore—in jurisdictions such as Bermuda—rather than in the United States. Unlike the United States, several of these jurisdictions exempt SPRVs from income or other taxes, which provides financial incentives for insurers to issue catastrophe bonds offshore. The National Association of Insurance Commissioners (NAIC) and some insurance industry groups have argued that insurers should be encouraged to issue catastrophe bonds onshore to lessen transaction costs and afford regulators greater scrutiny of SPRV activities. Some insurance industry groups have advocated that Congress change U.S. tax laws so that SPRVs would not be subject to income tax but instead receive “pass-through” treatment similar to that afforded mortgage-backed securities. In other words, the SPRV would not be taxed on the investment income from the trust account, and the tax would be passed on to the investor. Eliminating taxation at the SPRV level with pass-through treatment might facilitate expanded use of catastrophe bonds, but such legislative actions might also create pressure from other industries for similar tax treatment. In addition, to the extent that domestic SPRVs gained business at the expense of taxable entities, the federal government could lose tax revenue. Our previous reports also stated that NAIC’s current statutory accounting requirements might affect insurers’ use of nonindemnity-based financial instruments such as many catastrophe bonds. 
Under statutory accounting, an insurance company that buys traditional indemnity-based reinsurance or issues an indemnity-based catastrophe bond can reflect the transfer of risk (effected by the purchase of reinsurance) on the financial statements that it files with state regulators. As a result of the risk transfer, the insurance company can improve its stated financial condition and may be willing to write additional insurance policies. However, statutory accounting rules currently do not allow insurance companies to obtain a similar credit for using nonindemnity-based financial instruments that hedge insurance risk—which can include nonindemnity-based catastrophe bonds—and may therefore limit the appeal of these types of catastrophe bonds to potential issuers. Statutory accounting standards treat indemnity- and nonindemnity-based products differently because instruments that are nonindemnity-based have not been viewed as providing a true risk transfer. Although NAIC’s Securitization Working Group has approved a proposal that would allow reinsurance-like accounting treatment for such instruments, NAIC’s Statutory Accounting Committee must give final approval. The committee met in June 2004, but has not yet made a decision on this issue. Finally, we reported in 2003 that the Financial Accounting Standards Board (FASB) had issued guidance under GAAP that had the potential to limit the appeal of catastrophe bonds. Specifically, under the provisions of FASB Interpretation No. 46, Consolidation of Variable Interest Entities (FIN 46), variable interest entities, which include most catastrophe bond structures, were subject to consolidation on issuers’ financial statements. This provision had the potential to raise the costs associated with issuing catastrophe bonds and make them less attractive to issuers. 
Our September 2003 report stated that the impact of FIN 46 on the use of catastrophe bonds was unclear because insurers and financial market participants were not certain whether it would require insurers or investors to consolidate catastrophe bond assets and liabilities on their financial statements. In December 2003, FASB issued FIN 46R, revised guidance that eliminated some of the requirements for consolidation. One large issuer of catastrophe bonds we contacted consolidated some of its SPRVs in its financial statements under the criteria set in FIN 46R. However, another large issuer decided not to consolidate any of its SPRVs after evaluating the criteria set in FIN 46R. In addition to those named above, Patrick S. Dynes, Jill M. Johnson, Matthew Keeler, Wing Lam, Marc Molino, and Barbara Roesmann made key contributions to this report. 

100-year event: A catastrophic event with a 1 percent chance of occurring annually. 

Adverse selection: The tendency of those exposed to a higher risk to seek more insurance coverage than those at a lower risk. 

Balance sheet: Provides a snapshot of a company’s financial condition at one point in time. It shows assets, including investments and reinsurance, and liabilities, such as loss reserves to pay claims in the future, as of a certain date. It also states a company’s equity, which for insurance companies is known as policyholder surplus. Changes in that surplus are one indicator of an insurer’s financial standing. 

Basis risk: The risk that the proceeds from a financial instrument—such as a nonindemnity-based catastrophe bond—will not be related to the insurer’s loss experience. 

Capacity: The ability of property-casualty insurers to pay customer claims in the event of a catastrophic event and their willingness to make catastrophic coverage available to their customers, particularly subsequent to catastrophes. 
Catastrophe: Term used for statistical recording purposes to refer to a single incident or a series of closely related incidents causing severe insured property losses totaling more than a given amount. 

Catastrophe bonds: Risk-based securities that pay relatively high interest rates and provide insurance companies with a form of reinsurance to pay losses from a catastrophe such as those caused by a major hurricane. They allow insurance risk to be sold to institutional investors in the form of bonds, thus spreading the risk. 

Catastrophe modeling: Using computers, a method to mesh long-term disaster information with current demographic, building, and other data to determine the potential cost of natural disasters and other catastrophic losses for a given geographic area. 

Deductible: The amount of loss paid by the policyholder. Either a specified dollar amount, a percentage of the claim amount, or a specified amount of time that must elapse before benefits are paid. The bigger the deductible, the lower the premium charged for the same coverage. 

Equity capital, or insurers' surplus, is defined as net worth under the Statutory Accounting Principles (SAP) promulgated by the National Association of Insurance Commissioners. As such, surplus is the difference between assets valued according to SAP and liabilities valued according to SAP. 

Generally accepted accounting principles (GAAP) refers to the conventions, rules, and procedures that define acceptable accounting practices at a particular time. These practices form the framework for financial statement preparation. 

Guaranty fund: The mechanism by which solvent insurers ensure that some of the policyholder and third-party claims against insurance companies that fail are paid. Such funds are required in all 50 states, the District of Columbia, and Puerto Rico, but the type and amount of claims covered by the fund varies from state to state. Some states pay policyholders’ unearned premiums—the portion of the premium for which no coverage was provided because the company was insolvent. 
Some have deductibles. Most states have no limits on workers compensation payments. Guaranty funds are supported by assessments on insurers doing business in the state. 

Homeowners insurance: The typical homeowners insurance policy covers the house, the garage, and other structures on the property, as well as personal possessions inside the house such as furniture, appliances, and clothing, against a wide variety of perils including windstorms, fire, and theft. The extent of the perils covered depends on the type of policy. An all-risk policy offers the broadest coverage. This covers all perils except those specifically excluded in the policy. Homeowners insurance also covers additional living expenses. Known as “loss of use,” this provision in the policy reimburses the policyholder for the extra cost of living elsewhere while the house is being restored after a disaster. Coverage for flood and earthquake damage is excluded and must be purchased separately. 

Indemnity-based coverage: Coverage with a simple relationship that is based on the insurer’s actual incurred claims. For example, an insurer could contract with a reinsurer to cover half of all claims—up to $100 million in claims—from a hurricane over a specified time period in a specified geographic area. If a hurricane occurs where the insurer incurs $100 million or more in claims, the reinsurer would pay the insurer $50 million. 

Insolvency: Insurer’s inability to pay debts. Insurance insolvency standards and the regulatory actions taken vary from state to state. When regulators deem that an insurance company is in danger of becoming insolvent, they can take one of three actions: place a company in conservatorship or rehabilitation if the company can be saved, or in liquidation if salvage is deemed impossible. The difference between the first two options is one of degree—regulators guide companies in conservatorship but direct those in rehabilitation. Typically the first sign of problems is an inability to pass the financial tests regulators administer as a routine procedure. 
Institutional investor: An organization such as a bank or insurance company that buys and sells large quantities of securities. 

Insurance pool: Insurers that join together to provide coverage for a particular type of risk or size of exposure, when there are difficulties in obtaining coverage in the regular market, and share in the profits and losses associated with the program. 

Moral hazard: The incentive created by insurance that induces those insured to undertake greater risk than if they were uninsured, because the negative consequences are passed to the insurer. 

Nonindemnity-based coverage: Coverage that specifies a specific event that triggers payment and payment formulas that are not directly related to the insurer’s actual incurred losses. Payment could be tied to industry loss indexes, parametric measures such as wind speed during a hurricane or ground movement during an earthquake, or models of claims payments rather than actual claims. 

Peril: A specific risk or cause of loss covered by an insurance policy, such as a fire, windstorm, flood, or theft. A named-peril policy covers the policyholder only for the risks named in the policy in contrast to an all-risk policy, which covers all causes of loss except those specifically excluded. 

Premium: The price of an insurance policy typically charged annually or semiannually. 

Property-casualty insurance: Covers damage to or loss of policyholders’ property and legal liability for damages caused to other people or their property. Property-casualty insurance, which includes auto, homeowners, and commercial insurance, is one segment of the insurance industry. The other sector is life/health. Outside the United States, property-casualty insurance is referred to as nonlife or general insurance. 

Rating agencies: Six major credit agencies determine insurers’ financial strength and viability to meet claims obligations. They are A.M. Best Co.; Duff & Phelps Inc.; Fitch, Inc.; Moody’s Investors Services; Standard & Poor’s Corp.; and Weiss Ratings, Inc. 
Rating agencies consider factors such as company earnings, capital adequacy, operating leverage, liquidity, investment performance, reinsurance programs, and management ability, integrity, and experience. 

Reinsurance: Insurance for insurers. A reinsurer assumes part of the risk and part of the premium originally taken by the primary insurer. Reinsurers reimburse insurers for claims paid. The business is global and some of the largest reinsurers are based abroad. 

Reserves: A company’s best estimate of what it will pay for claims. 

Retention: The amount of risk retained by an insurance company that is not reinsured. 

Retrocession: The reinsurance bought by reinsurers to protect their financial stability. 

Risk: The chance of loss of the person or entity that is insured. 

Risk management: Management of the varied risks to which a business firm or association might be subject. It includes analyzing all exposures to gauge the likelihood of loss and choosing options to better manage or minimize loss. These options typically include reducing and eliminating the risk with safety measures, buying insurance, and self-insurance. 

Securitization of insurance risk: Using the capital markets to expand and diversify the assumption of insurance risk. The issuance of bonds or notes to third-party investors directly or indirectly by an insurance or reinsurance company as a means of raising money to cover risks. 

Solvency: Insurance companies’ ability to pay the claims of policyholders. Regulations to promote solvency include minimum capital and surplus requirements, statutory accounting conventions, limits to insurance company investment and corporate activities, financial ratio tests, and financial data disclosure. 

Statutory accounting principles (SAP): Accounting principles that are required by law. In the insurance industry, these standards are more conservative than GAAP and are intended to emphasize the present solvency of insurance companies. 
SAP is directed toward measuring whether the company will have sufficient funds readily available to meet anticipated insurance obligations by recognizing liabilities earlier or at a higher value than GAAP and assets later or at a lower value. For example, SAP requires that selling expenses be recorded immediately rather than amortized over the life of the policy.
Natural catastrophes and terrorist attacks can place enormous financial demands on the insurance industry and result in sharply higher premiums and substantially reduced coverage. As a result, interest has grown in mechanisms to increase the capacity of the insurance industry to manage these types of events. In this report, GAO (1) provides an overview of the insurance industry's current capacity to cover natural catastrophic risk and discusses the impacts of the 2004 hurricanes; (2) analyzes the potential of catastrophe bonds--a type of security issued by insurers and reinsurers (companies that offer insurance to insurance companies) and sold to institutional investors--and tax-deductible reserves to enhance private-sector capacity; and (3) describes the approaches that six European countries have taken to address natural and terrorist catastrophe risk, including whether these countries permit insurers to use tax-deductible reserves for such events. We provided a draft of this report to the Department of the Treasury and the National Association of Insurance Commissioners. Treasury provided technical comments that were incorporated as appropriate. Despite steps that governments and insurers have taken in recent years to strengthen insurer capacity for catastrophic risk, the industry has not been tested by a major catastrophic event or series of events (at least $50 billion or more in insured losses). While insurers suffered losses of over $20 billion in Florida from the 2004 hurricanes, steps such as implementing stronger building codes and stricter underwriting standards may have limited market disruptions as compared with the aftermath of Hurricane Andrew in 1992. For example, in 2004, only 1 Florida insurance company failed, in contrast to the 11 that failed after Hurricane Andrew in 1992. 
However, a more severe catastrophic event or series of events could severely disrupt insurance markets and impose recovery costs on governments, businesses, and individuals. Some insurers and reinsurers benefit from catastrophe bonds because the bonds diversify their funding base for catastrophic risk. However, these bonds currently occupy a small niche in the global catastrophe reinsurance market, and many insurers view the costs associated with issuing them as significantly exceeding those of traditional reinsurance. In addition, industry participants do not consider catastrophe bonds for terrorism risk feasible at this time. Authorizing insurers to establish tax-deductible reserves for potential catastrophic events has been advanced as a means to enhance industry capacity, but, according to some industry analysts, such reserves would lower federal tax receipts and not necessarily bring about a meaningful increase in capacity because insurers may substitute the reserves for other types of capacity. The six European countries GAO studied use a variety of approaches to address catastrophe risk. Some governments require insurers to provide natural catastrophe insurance and provide financial assistance to insurers in the wake of catastrophic events, while others generally rely on the private market. However, the majority of these governments have established national terrorism insurance programs. Although their approaches vary, insurers in all six countries were allowed to establish tax-deductible reserves for potential catastrophic events as of 2004.
Carbon dioxide and certain other gases trap some of the sun’s heat in the earth’s atmosphere and prevent it from returning to space. The trapped energy warms the earth’s climate, much as glass does in a greenhouse. Hence, the gases that cause this effect are often referred to as greenhouse gases. In the United States, the most prevalent greenhouse gas is carbon dioxide, which results from the combustion of coal and other fossil fuels in power plants, the burning of gasoline in vehicles, and other sources. The other gases are methane, nitrous oxide, and three synthetic gases. In recent decades, concentrations of these gases have built up in the atmosphere, raising concerns that continuing increases might interfere with the earth’s climate, for example, by increasing temperatures or changing precipitation patterns. In 1997, the United States participated in drafting the Kyoto Protocol, an international agreement to limit greenhouse gas emissions, and in 1998 it signed the Protocol. However, the previous administration did not submit it to the Senate for advice and consent, which are required for ratification. In March 2001, President Bush announced that he opposed the Protocol. In addition to the emissions intensity goal and domestic elements intended to help achieve it, the President’s February 2002 climate initiative includes (1) new and expanded international policies, such as increasing funding for tropical forests, which sequester carbon dioxide, (2) enhanced science and technology, such as developing and deploying advanced energy and sequestration technologies, and (3) an improved registry of reductions in greenhouse gas emissions. According to testimony by the Chairman of the White House Council on Environmental Quality, the President’s climate change strategy was produced by a combined working group of the Domestic Policy Council, National Economic Council, and National Security Council. While U.S. 
greenhouse gas emissions have increased significantly, the Energy Information Administration reports that U.S. emissions intensity has generally been falling steadily for 50 years. This decline occurred, in part, because the U.S. energy supply became less carbon-intensive in the last half-century, as nuclear, hydropower, and natural gas were increasingly substituted for more carbon-intensive coal and oil to generate electricity. The Administration explained that the Initiative’s general goal is to slow the growth of U.S. greenhouse gas emissions, but it did not explain the basis for its specific goal of reducing emissions intensity 18 percent by 2012 or what a 4-percentage-point reduction is specifically designed to accomplish. Reducing emissions growth by 4 percentage points more than is currently expected would achieve the general goal, but, on the basis of our review of the fact sheets and other documents, we found no specific basis for establishing a 4-percentage-point change, as opposed to a 2- or 6-percentage-point change, for example, relative to the already anticipated reductions. According to the Administration’s analysis, emissions under its Initiative will increase between 2002 and 2012, but at a slower rate than otherwise expected. Specifically, according to Energy Information Administration (EIA) projections cited by the Administration, without the Initiative emissions will increase from 1,917 million metric tons in 2002 to 2,279 million metric tons in 2012. Under the Initiative, emissions will increase to 2,173 million metric tons in 2012, which is 106 million metric tons less than otherwise expected. We calculated that under the Initiative, emissions would be reduced from 23,162 million metric tons to 22,662 million metric tons cumulatively for the period 2002-12. This difference of 500 million metric tons represents a 2-percent decrease for the 11-year period. 
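The arithmetic behind these figures can be checked directly. The sketch below (variable names ours) simply restates the EIA projections cited above, in million metric tons:

```python
# EIA projections cited by the Administration (million metric tons).
emissions_2012_baseline = 2_279    # 2012 emissions without the Initiative
emissions_2012_initiative = 2_173  # 2012 emissions under the Initiative

# Cumulative emissions for 2002-12, as calculated above.
cumulative_baseline = 23_162
cumulative_initiative = 22_662

savings_2012 = emissions_2012_baseline - emissions_2012_initiative  # 106
cumulative_savings = cumulative_baseline - cumulative_initiative    # 500
# 500 of 23,162 is roughly a 2-percent decrease for the 11-year period.
pct_decrease = 100 * cumulative_savings / cumulative_baseline       # about 2.16
```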
Because economic output will increase faster than emissions between 2002 and 2012, according to EIA’s projections, emissions intensity is estimated to decline from 183 tons per million dollars of output in 2002 to 158 tons per million dollars in 2012 (a 14-percent decline) without the Initiative, and to 150 tons per million dollars under the Initiative (an 18- percent decline). The Administration identified 30 elements (26 in February 2002 and another 4 later) that it expected would help reduce U.S. emissions by 2012 and, thus, contribute to meeting its 18-percent goal. These 30 elements include regulations, research and development, tax incentives, and other activities. (The elements are listed in Appendix I.) The Administration groups them into four broad categories, as described below. Providing incentives and programs for renewable energy and certain industrial power systems. Six tax credits and seven other elements are expected to increase the use of wind and other renewable resources, combined heat-and-power systems, and other activities. The tax credits cover electricity from wind and new hybrid or fuel-cell vehicles, among other things. Other elements would provide funding for geothermal energy, primarily in the western United States, and advancing the use of hydropower, wind, and other resources on public lands. Still other elements involve research and development on fusion energy and other sources. Improving fuel economy. Three efforts relating to automotive technology and two other elements are expected to improve fuel economy. The technology efforts include advances in hydrogen-based fuel cells and low- cost fuel cells. Two of the five elements are mandatory. First, a regulation requiring the installation of tire pressure monitoring systems in cars and certain other vehicles was finalized in June 2002 and will be phased in between 2003 and 2006. Properly inflated tires improve fuel efficiency. 
Second, a regulation requiring an increase in the fuel economy of light trucks, from the current 20.7 miles per gallon to 22.2 miles per gallon in 2007, was finalized in April 2003. Promoting domestic carbon sequestration. Four U.S. Department of Agriculture programs were identified as promoting carbon sequestration on farms, forests, and wetlands. Among other things, these programs are intended to accelerate tree planting and converting cropland to grassland or forests. Challenging business to reduce emissions. Voluntary initiatives to reduce greenhouse gases were proposed for U.S. businesses. For major companies that agreed to establish individual goals for reducing their emissions, the Environmental Protection Agency (EPA) launched a new Climate Leaders Program. In addition, certain companies in the aluminum, natural gas, semiconductor, and underground coal mining sectors have joined voluntary partnerships with EPA to reduce their emissions. Finally, certain agricultural companies have joined two voluntary partnerships with EPA and the Department of Agriculture to reduce their emissions. The Administration provided some information for all 30 of the Initiative’s elements, including, in some cases, estimates of previous or anticipated emission reductions. However, inconsistencies in the nature of this information make it difficult to determine how contributions from the individual elements would achieve the total reduction of about 100 million metric tons in 2012. First, estimates were not provided for 19 of the Initiative’s elements. Second, for the 11 elements for which estimates were provided, we found that 8 were not clearly attributable to the Initiative because the reductions (1) were related to an activity already included in ongoing programs or (2) were not above previous or current levels. We did find, however, that the estimated reductions for the remaining 3 elements appear attributable to the Initiative. 
We have concerns about some of the 19 emission reduction elements for which the Administration did not provide savings estimates. At least two of these elements seem unlikely to yield emissions savings by 2012. For example, the April 2003 fact sheet listed hydrogen energy as an additional measure, even though it also stated a goal of commercializing hydrogen vehicles by 2020, beyond the scope of the Initiative. Similarly, the same fact sheet listed a coal-fired, zero-emissions power plant as an additional measure, but described the project as a 10-year demonstration; this means that the power plant would not finish its demonstration phase until the last year of the Initiative, much less be commercialized by then. Of the 11 elements for which estimates were provided, we found that the estimated reductions for 8 were not clearly attributable to the Initiative. In five cases, an estimate is provided for a current or recent savings level, but no information is provided about the expected additional savings to be achieved by 2012. For example, the Administration states that aluminum producers reduced their emissions by 1.8 million metric tons to meet a goal in 2000, but it does not identify future savings, if any. Similarly, it states that Agriculture’s Environmental Quality Incentives Program, which provides assistance to farmers for planning and implementing soil and water conservation practices, reduced emissions by 12 million metric tons in 2002. However, while the Administration sought more funding for the program in fiscal year 2003, it did not project any additional emissions reductions from the program. In two cases, it is not clear how much of the claimed savings will occur by the end of the Initiative in 2012. The requirement that cars and certain other vehicles have tire pressure monitoring systems is expected to yield savings of between 0.3 and 1.3 million metric tons a year when applied to the entire vehicle fleet. 
However, it will take years for such systems to be incorporated in the entire fleet and it is not clear how much of these savings will be achieved by 2012. Similarly, the required increase in light truck fuel economy is expected to result in savings of 9.4 million metric tons over the lifetime of the vehicles covered. Again, because these vehicles have an estimated lifetime of 25 years, it is not clear how much savings will be achieved by 2012. In one case, savings are counted for an activity that does not appear to be directly attributable to the Initiative. Specifically, in March 2001 (nearly a year before the Initiative was announced), EPA and the Semiconductor Industry Association signed a voluntary agreement to reduce emissions by an estimated 13.7 million metric tons by 2010. Because this agreement was signed before the Initiative was announced, it is not clear that the estimated reductions should be considered as additions to the already anticipated amount. Estimates for the remaining 3 of the 11 elements appear to be attributable to the Initiative in that they represent reductions beyond previous or current levels and are associated with expanded program activities. These are: Agriculture’s Conservation Reserve Program was credited with additional savings of 4 million metric tons a year. This program assists farm owners and operators to conserve and improve soil, water, air, and wildlife resources and results in carbon sequestration. Agriculture’s Wetland Reserve Program was credited with additional savings of 2 million metric tons a year. This program helps convert cropland on wetland soils to grassland or forest and also sequesters carbon emissions. The Environmental Protection Agency’s Natural Gas STAR Program was credited with additional savings of 2 million metric tons a year. This program works with companies in the natural gas industry to reduce losses of methane during production, transmission, distribution, and processing. 
More current information about certain of these elements and their expected contributions has been made public, but has not been consolidated with earlier information about the Initiative. For example, the Department of Agriculture’s web site includes a June 2003 fact sheet on that agency’s programs that contribute to carbon sequestration. Among other things, the fact sheet estimated that the Environmental Quality Incentives Program, cited above, will reduce emissions 7.1 million metric tons in 2012. However, we did not find that such information had been consolidated with the earlier information, and there appears to be no comprehensive source for information about all of the elements intended to help achieve the Initiative’s goal and their expected contributions. The lack of consistent and comprehensive information makes it difficult for relevant stakeholders and members of the general public to assess the merits of the Initiative. According to the February 2002 fact sheet, progress in meeting the 18-percent goal will be assessed in 2012, the final year of the Initiative. The fact sheet states that if progress at that point is not sufficient and if science justifies additional action, the United States will respond with further policies; these policies may include additional incentives and voluntary programs. The fact sheets did not indicate whether the Administration plans to check its progress before 2012. Such an interim assessment, for example, after 5 years, would help the Administration determine whether it is on course to meet the goal in 2012 and, if not, whether it should consider additional elements to help meet the goal. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Committee may have. Contacts and Acknowledgments For further information about this testimony, please contact me at (202) 512-3841. John Delicath, Anne K. 
Johnson, Karen Keegan, David Marwick, and Kevin Tarmann made key contributions to this statement. The Appendix I table lists the elements in the business-challenge category: the EPA Climate Leaders Program; the semiconductor industry partnership; the aluminum producers partnership; the EPA Natural Gas STAR Program; the EPA Coal Bed Methane Outreach Program; the AgSTAR Program; the Ruminant Livestock Efficiency Program; and the Climate VISION Partnership. Data from Global Climate Change Policy Book, Feb. 2002; White House Fact Sheets, July 2002 and April 2003; analysis by GAO. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2002, the Administration announced its Global Climate Change Initiative. It included, among other things, a goal concerning U.S. carbon dioxide and other greenhouse gas emissions, which are widely believed to affect the earth's climate. The Administration's general goal was to reduce the growth rate of emissions, but not total emissions, between 2002 and 2012. Its specific goal was to reduce emissions intensity 18 percent, 4 percentage points more than the 14 percent decline already expected. Emissions intensity measures the amount of greenhouse gases emitted per unit of economic output. In the United States, this ratio has generally decreased for 50 years or more. Under the Initiative, emissions would increase, but less than otherwise expected. GAO was asked to testify on whether the Administration's publicly available documents (1) explain the basis for the Initiative's general and specific goals, (2) identify elements to help reduce emissions and contribute to the 18 percent reduction goal, as well as their specific contributions, and (3) discuss plans to track progress in meeting the goal. This testimony is based on ongoing work, and GAO expects to issue a final report on this work later this year. Because of time constraints, GAO's testimony is based on its analysis of publicly available Administration documents. The Administration stated that the Initiative's general goal is to slow the growth of U.S. greenhouse gas emissions, but it did not provide a basis for its specific goal of reducing emissions intensity 18 percent by 2012. Any reduction in emissions above the 14-percent reduction already anticipated would contribute to this general goal. However, GAO did not find a specific basis or rationale for the Administration's decision to establish a 4-percentage-point reduction goal beyond the already expected reductions. The Administration identified 30 elements that it expected would reduce U.S. 
emissions and contribute to meeting its 18 percent reduction goal by 2012. The 30 elements include a range of policy tools (such as regulations, research and development, tax incentives, and other activities) that cover four broad areas: (1) improving renewable energy and certain industrial power systems, (2) improving fuel economy, (3) promoting domestic carbon sequestration (for example, the absorption of carbon dioxide by trees to offset emissions), and (4) challenging business to reduce emissions. GAO found that the Administration provided estimates of the reductions associated with 11 of the 30 elements, but not with the remaining 19 elements. Of these 11 estimates, GAO found that 3 estimates represented future emissions reductions related to activities that occurred after the Initiative was announced. However, the other 8 estimates represented past or current emissions reductions or related to activities that were already underway before the Initiative was announced. Specifically, in five cases, an estimate is provided for current or recent reductions, but no information is provided about the expected additional savings to be achieved by 2012, the end of the Initiative. In two cases, the elements are expected to yield savings over many years, but it is not clear what emissions reductions will be achieved by 2012. In one case, savings are counted for an activity that began prior to the announcement of the Initiative. It is, therefore, unclear to what extent the 30 elements will contribute to the goal of reducing emissions and, thus, lowering emissions intensity by 2012. The Administration plans to determine, in 2012, whether the 18-percent reduction goal was met. Unless the Administration conducts one or more interim assessments, it will not be in a position to determine, until a decade after announcing the Initiative, whether its efforts are having the intended effect or whether additional efforts may be warranted.
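The emissions-intensity arithmetic cited above can be checked directly. The sketch below is illustrative only, not part of the GAO analysis; the three intensity values come from the testimony and are expressed in tons of greenhouse gases per million dollars of economic output.

```python
# Baseline 2002 intensity and the two projected 2012 values, taken from the
# testimony (tons of greenhouse gases per million dollars of output).
INTENSITY_2002 = 183.0
INTENSITY_2012_BASELINE = 158.0    # projected 2012 intensity without the Initiative
INTENSITY_2012_INITIATIVE = 150.0  # projected 2012 intensity under the Initiative

def percent_decline(start: float, end: float) -> float:
    """Percentage decline from the start value to the end value."""
    return (start - end) / start * 100.0

baseline = percent_decline(INTENSITY_2002, INTENSITY_2012_BASELINE)
initiative = percent_decline(INTENSITY_2002, INTENSITY_2012_INITIATIVE)

print(f"Decline without the Initiative: {baseline:.1f}%")    # ~13.7%, rounded to 14% in the testimony
print(f"Decline under the Initiative:   {initiative:.1f}%")  # ~18.0%
print(f"Additional percentage points:   {initiative - baseline:.1f}")
```

The difference of roughly 4 percentage points matches the Initiative's stated goal of reducing intensity 18 percent rather than the 14 percent already expected.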
The ability to accurately and reliably measure pollutant concentrations is vital to successfully implementing GLI water quality criteria. Without this ability, it is difficult for states to determine if a facility’s discharge is exceeding GLI water quality criteria and if discharge limits are required. For example, because chlordane has a water quality criterion of 0.25 nanograms per liter but can only be measured down to a level of 14 nanograms per liter, it cannot always be determined if the pollutant is exceeding the criterion. As we reported in 2005, developing the analytical methods needed to measure pollutants at the GLI water quality criteria level is a significant challenge to fully achieving GLI goals. Although methods have been developed for the nine BCCs for which GLI water quality criteria have been established, EPA has only approved the methods to measure mercury and lindane below GLI’s stringent criteria levels. Analytical methods for the other BCCs either have not received EPA approval or cannot be used to reliably measure to GLI criteria levels. Once EPA approves an analytical method, Great Lakes states are able to issue point source permits that require facilities to use that method unless the EPA region has approved an alternative procedure. According to EPA officials, specific time frames for developing and approving methods that measure to GLI criteria have not yet been established. EPA officials explained that developing EPA-approved methods can be a time-consuming and costly process. Table 1 shows the status of the methods for the nine BCCs. As we reported in 2005, if pollutant concentrations can be measured at or below the level established by GLI water quality criteria, enforceable permit limits can be established on the basis of these criteria. The Great Lakes states’ experience with mercury illustrates the impact of sufficiently sensitive measurement methods on identifying pollutant discharges from point sources. 
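The detection-limit problem described above reduces to a simple comparison: a method can confirm an exceedance only if it measures down to the criterion level. This is an illustrative sketch, not part of any EPA tool; the chlordane numbers come from the testimony, while the mercury detection limit shown is an assumed value (the testimony states only that EPA's 1999 method measures below the 1.3 nanograms-per-liter criterion).

```python
def can_verify_compliance(criterion_ng_per_l: float, detection_limit_ng_per_l: float) -> bool:
    """A method can reliably test a discharge against a water quality
    criterion only if its detection limit is at or below that criterion."""
    return detection_limit_ng_per_l <= criterion_ng_per_l

# Chlordane: criterion of 0.25 ng/L, but methods only measure down to 14 ng/L,
# so an exceedance below 14 ng/L cannot be detected.
print(can_verify_compliance(criterion_ng_per_l=0.25, detection_limit_ng_per_l=14.0))  # False

# Mercury: GLI criterion of 1.3 ng/L; EPA's 1999 method measures below that level
# (1.0 ng/L here is an assumed, illustrative detection limit).
print(can_verify_compliance(criterion_ng_per_l=1.3, detection_limit_ng_per_l=1.0))    # True
```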
Methods for measuring mercury at low levels were generally not available until EPA issued a new analytical method in 1999 to measure mercury concentrations below the GLI water quality criterion of 1.3 nanograms per liter of water. This more sensitive method disclosed a more pervasive problem of high mercury levels in the Great Lakes Basin than previously recognized and showed, for the first time, that many facilities had mercury levels in their discharges that were exceeding water quality criteria. Since this method was approved, the number of permits with discharge limits for mercury rose from 185 in May 2005 to 292 in November 2007. Moreover, EPA and state officials are expecting this trend to continue. As EPA officials explained, it may take up to two permit cycles—permits are generally issued for 5-year periods—to collect the monitoring data needed to support the inclusion of discharge limits in permits. EPA officials are expecting a similar rise in permits with discharge limits for polychlorinated biphenyls (PCBs) when detection methods are approved. Permit flexibilities often allow facilities’ discharges to exceed GLI water quality criteria. These flexibilities can take several forms, including the following: Variance. Allows dischargers to exceed the GLI discharge limit for a particular pollutant specified in their permit. Compliance schedule. Allows dischargers a grace period of up to 5 years in complying with a permitted discharge limit. Pollutant Minimization Program (PMP). Sets forth a series of actions by the discharger to improve water quality when the pollutant concentration cannot be measured down to the water quality criterion. A PMP is often used in conjunction with a variance. Mixing Zone. Allows dischargers to use the areas around a facility’s discharge pipe where pollutants are mixed with cleaner receiving waters to dilute pollutant concentrations. 
Within the mixing zone, concentrations of pollutants are generally allowed to exceed water quality criteria as long as standards are met at the boundary of the mixing zone. This flexibility expires in November 2010 with some limited exceptions. These flexibilities are generally only available to permit holders that operated before March 23, 1997, and are in effect for 5 years or the length of the permit. GLI allows states to grant such permit flexibilities under certain circumstances, such as when the imposition of water quality standards would result in substantial and widespread economic and social impacts. Table 2 shows the number and type of BCC permit flexibilities being used as of November 2007 in the Great Lakes Basin for mercury, PCBs, and dioxin, as well as BCC discharge limits contained in permits. According to EPA and state officials, in many cases, facilities cannot meet GLI water quality criteria for a number of reasons, such as technology limitations, and the flexibilities are intended to give the facility time to make progress toward meeting the GLI criteria. With the exception of compliance schedules, the GLI allows for the repeated use of these permit flexibilities. As a result, EPA and state officials could not tell us when the GLI criteria will be met. In our 2005 report, we described several factors that were undermining EPA’s ability to ensure progress toward achieving consistent implementation of GLI water quality standards. To help ensure full and consistent implementation of the GLI and to improve measures for monitoring progress toward achieving GLI’s goals, we made a number of recommendations to the EPA Administrator. EPA has taken some actions to implement the recommendations contained in our 2005 report, as the following indicates: Ensure the GLI Clearinghouse is fully developed. 
We noted that EPA’s delayed development of the GLI Clearinghouse—a database intended to assist the states in developing consistent water quality criteria for toxic pollutants—was preventing the states from using this resource. To assist Great Lakes states in developing water quality criteria for GLI pollutants, we recommended that EPA ensure that the GLI Clearinghouse was fully developed, maintained, and made available to Great Lakes states. EPA launched the GLI Clearinghouse on its Web site in May 2006, and in February 2007, EPA Region 5 provided clearinghouse training to states. The clearinghouse currently contains criteria or toxicity information for 395 chemicals. EPA officials told us that the clearinghouse is now available to the states so they can independently calculate water quality criteria for GLI pollutants. EPA officials told us that some states, including Ohio, Wisconsin, and Illinois, plan on updating their water quality standards in the near future and believe that the clearinghouse will benefit them as well as other states as they update their standards. Gather and track information to assess the progress of GLI implementation. In 2005, we reported that EPA’s efforts to assess progress in implementing the GLI and its impact on reducing point source discharges have been hampered by lack of information on these discharges. To improve EPA’s ability to measure progress, we recommended that EPA gather and track information on dischargers’ efforts to reduce pollutant loadings in the basin. EPA has begun to review the efforts and progress made by one category of facilities—municipal wastewater treatment facilities—to reduce their mercury discharges into the basin. However, until EPA develops additional sources of information, it will not have the information needed to adequately assess progress toward meeting GLI goals. Increase efforts to resolve disagreements with Wisconsin. 
Although we found that the states had largely completed adoption of GLI standards, EPA had not resolved long-standing issues with Wisconsin regarding adoption and implementation of GLI provisions. To ensure the equitable and timely implementation of GLI by all the Great Lakes states, we recommended that the EPA Administrator direct EPA Region 5, which is responsible for Wisconsin, to increase efforts to resolve disagreements with the state over inconsistencies between the state’s and the GLI’s provisions. Wisconsin officials believe the GLI provisions are not explicitly supported by Wisconsin law. Subsequently, EPA and Wisconsin officials have held discussions on this matter, and neither Wisconsin nor EPA officials believe that these disagreements are significantly affecting GLI implementation. However, they have been unable to completely resolve these issues. We found that similar issues have also surfaced with New York. Issue a permitting strategy for mercury. Because we found that Great Lakes’ states had developed inconsistent approaches for meeting the GLI mercury criterion, including differences in the use of variances, we recommended that EPA issue a permitting strategy to ensure a more consistent approach. EPA disagreed with this recommendation, asserting that a permitting strategy would not improve consistency. Instead, the agency continued to support state implementation efforts by developing guidance for PMPs, evaluating and determining compliance, and assessing what approaches are most effective in reducing mercury discharges by point sources. One such effort is EPA Region 5’s review of mercury PMP language in state-issued permits for wastewater treatment facilities. This review resulted in recommendations to the states in May 2007 to improve the enforceability and effectiveness of PMP provisions. However, additional efforts will be needed to ensure consistency at other types of facilities, such as industrial sites, across the Great Lakes states. 
In closing, Madam Chairwoman and Members of the Subcommittee, although progress has been made with mercury detection and increased knowledge of wastewater treatment facilities’ pollutant discharges to the Great Lakes, information is still lacking on the full extent of the problem that BCCs pose in the Great Lakes. As methods are developed to determine whether facilities’ discharges for other BCCs meet GLI criteria and EPA approves them, and as more permits include discharge limits, more information will be available on pollutant discharges in the basin. Even with these advances, however, extensive use of permit flexibilities could continue to undercut reductions in pollution levels and the ultimate achievement of GLI’s goals. This concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact David Maurer at (202) 512-3841 or maurerd@gao.gov. Key contributors to this testimony were Greg Carroll, Katheryn Summers Hubbell, Sherry L. McDonald, and Carol Herrnstadt Shulman. Other contributors included Jeanette Soares and Michele Fejfar.
Millions of people in the United States and Canada depend on the Great Lakes for drinking water, recreation, and economic livelihood. During the 1970s, it became apparent that pollutants discharged into the Great Lakes Basin from point sources, such as industrial and municipal facilities, or from nonpoint sources, such as air emissions from power plants, were harming the Great Lakes. Some of these pollutants, known as bioaccumulative chemicals of concern (BCC), pose risks to fish and other species as well as to the humans and wildlife that consume them. In 1995, the Environmental Protection Agency (EPA) issued the Great Lakes Initiative (GLI). The GLI established water quality criteria to be used by states to establish pollutant discharge limits for some BCCs and other pollutants that are discharged by point sources. The GLI also allows states to include flexible permit implementation procedures (flexibilities) that allow facilities' discharges to exceed GLI criteria. This testimony is based on GAO's July 2005 report, Great Lakes Initiative: EPA Needs to Better Ensure the Complete and Consistent Implementation of Water Quality Standards (GAO-05-829) and updated information from EPA and the Great Lakes states. This statement addresses (1) the status of EPA's efforts to develop and approve methods to measure pollutants at the GLI water quality criteria levels, (2) the use of permit flexibilities, and (3) EPA's actions to implement GAO's 2005 recommendations. As GAO reported in 2005, developing the sensitive analytical methods needed to measure pollutants at the GLI water quality criteria level is a significant challenge to achieving GLI's goals. Of the nine BCCs for which criteria have been established, only two--mercury and lindane--have EPA-approved methods that will measure below those criteria levels. Measurement methods for the other BCCs are either not yet approved or cannot reliably measure to GLI criteria. 
Without such measurement, it is difficult for states to determine whether a facility is exceeding the criteria and if discharge limits are required in the facility's permit. As methods become available, states are able to include enforceable discharge limits in facilities' permits. For example, since EPA approved a more sensitive method for mercury in 1999, the number of permits with mercury limits has increased from 185 in May 2005 to 292 in November 2007. EPA and state officials expect this trend to continue. Similar increases may occur as more sensitive analytical methods are developed and approved for other BCCs. Flexibilities included in permits allow facilities' discharges to exceed GLI water quality criteria. For example, one type of flexibility--variances--will allow facilities to exceed the GLI criteria for a pollutant specified in their permits. Moreover, the GLI allows the repeated use of some of these permit flexibilities, and does not set a time frame for facilities to meet the GLI water quality criteria. As a result, EPA and state officials do not know when the GLI criteria will be met. In the 2005 report, GAO made a number of recommendations to EPA to help ensure full and consistent implementation of the GLI and to improve measures for monitoring progress toward achieving GLI's goals. EPA has taken some actions to implement the recommendations. For example, EPA has begun to review the efforts and progress made by one category of facilities--municipal wastewater treatment plants--to reduce their mercury discharges into the basin. However, until EPA gathers more information on the implementation of GLI and the impact it has had on reducing pollutant discharges from point sources, as we recommended, it will not be able to fully assess progress toward GLI goals.
With the escalation of the IED threat in Iraq dating back to 2003, DOD began identifying several counter-IED capability gaps, including shortcomings in the areas of counter-IED technologies, qualified personnel with expertise in counter-IED tactics, training, dedicated funding, and the lack of an expedited acquisition process for developing new solutions to address emerging IED threats. Prior DOD efforts to defeat IEDs included various process teams and task forces. For example, DOD established the Joint IED Defeat Task Force in June 2005, which replaced three temporary organizations—the Army IED Task Force; the Joint IED Task Force; and the Under Secretary of Defense, Force Protection Working Group. To further focus DOD’s efforts and minimize duplication, DOD published a new directive in February 2006, which changed the name of the Joint IED Defeat Task Force to JIEDDO. This directive established JIEDDO as a joint entity and jointly manned organization within DOD, directly under the authority, direction, and control of the Deputy Secretary of Defense, rather than subjecting JIEDDO to more traditional review under an Under Secretary of Defense within the Office of the Secretary of Defense. DOD’s directive further states that JIEDDO shall focus all DOD actions in support of the combatant commanders’ and their respective Joint Task Forces’ efforts to defeat IEDs as weapons of strategic influence. Specifically, JIEDDO is directed to identify, assess, and fund initiatives that provide specific counter-IED solutions, and is granted the authority to approve joint IED defeat initiatives valued up to $25 million and make recommendations to the Deputy Secretary of Defense for initiatives valued over that amount. Beginning in fiscal year 2007, Congress has provided JIEDDO with its own separate appropriation, averaging $4 billion a year. JIEDDO may then transfer funds to the military service that is designated to sponsor a specific initiative. 
After JIEDDO provides funding authority to a military service, the designated service program manager, not JIEDDO, is responsible for managing the initiatives for which JIEDDO has provided funds. Since 2004, Office of Management and Budget (OMB) Circular A-123 has specified that federal agencies have a fundamental responsibility to develop and maintain effective internal controls that ensure the prevention or detection of significant weaknesses—that is, weaknesses that could adversely affect the agency’s ability to meet its objectives. According to OMB, the importance of internal controls is addressed in many statutes and executive documents. OMB requires agencies and individual federal managers to take systematic and proactive measures to develop and implement appropriate, cost-effective internal controls for results-oriented management. In addition, the Federal Managers’ Financial Integrity Act of 1982 establishes the overall requirements with regard to internal controls. Accordingly, an agency head must establish controls that reasonably ensure that (1) obligations and costs are in compliance with applicable law; (2) all assets are safeguarded against waste, loss, unauthorized use, or misappropriation; and (3) revenues and expenditures applicable to agency operations are properly recorded and accounted for to permit the preparation of accounts and reliable financial and statistical reports and to maintain accountability over the assets. Specific internal control standards underlying the internal controls concept in the federal government are promulgated by GAO and are referred to as the Green Book. The DOD Comptroller is responsible for the implementation and oversight of DOD’s internal control program. Since its creation, JIEDDO has taken several steps to improve its management and operation of counter-IED efforts in response to our past work as well as to address congressional concerns. 
For example, in our ongoing work, we have noted that JIEDDO has been improving its strategic planning. In March 2007, observing that JIEDDO did not have a formal written strategic plan, we recommended that it develop such a plan, based on the Government Performance and Results Act requirement, implemented through OMB Circular A-11, that government entities develop and implement a strategic plan for managing their efforts. Further, in 2007, Congress initially appropriated only a portion of JIEDDO’s requested fiscal year 2008 funding, and a Senate Appropriations Committee report directed JIEDDO to provide a comprehensive and detailed strategic plan so that additional funding could be considered. In response, JIEDDO, in November 2007, issued a strategic plan that provided an overarching framework for departmentwide counter-IED efforts. Additionally, JIEDDO continues to invest considerable effort to develop and manage JIEDDO-specific plans for countering IEDs. For example, during the second half of 2008, the JIEDDO director undertook a detailed analysis of three issues. The director looked at JIEDDO’s mission as defined in DOD guidance, the implicit and explicit functions associated with its mission, and the organizational structure needed to support and accomplish its mission. The effort resulted in JIEDDO publishing its JIEDDO Organization and Functions Guide in December 2008, within which JIEDDO formally established strategic planning as one of four mission areas. Actions taken in 2009 included developing and publishing a JIEDDO-specific strategic plan for fiscal years 2009 and 2010, reviewing JIEDDO’s existing performance measures to determine whether additional or alternative metrics might be needed, and engaging other government agencies and services involved in addressing the IED threat at a JIEDDO semiannual conference. As a result of these actions, JIEDDO is steadily improving its understanding of counter-IED challenges. 
Additionally, as we note in our report being issued today, JIEDDO and the services have taken some steps to improve visibility over their counter-IED efforts. For example, JIEDDO, the services, and several other DOD organizations compile some information on the wide range of IED defeat initiatives existing throughout the department. JIEDDO also promotes visibility by giving representatives from the Army’s and Marine Corps’ counter-IED coordination offices the opportunity to assist in the evaluation of IED defeat proposals. Additionally, JIEDDO maintains a network of liaison officers to facilitate counter-IED information sharing throughout the department. It also hosts a semiannual conference covering counter-IED topics such as agency roles and responsibilities, key issues, and current challenges. JIEDDO also hosts a technology outreach conference with industry, academia, and other DOD components to discuss the latest requirements and trends in the counter-IED effort. Lastly, the services provide some visibility over their own counter-IED initiatives by submitting information to JIEDDO for the quarterly reports that it submits to Congress. While JIEDDO has taken some steps toward improving its management of counter-IED efforts, several significant challenges remain that affect DOD’s ability to oversee JIEDDO. Some of these challenges are identified in the report we are issuing today and include a lack of full visibility by JIEDDO and the services over counter-IED initiatives throughout DOD, difficulties coordinating the transition of funding responsibility for joint IED defeat initiatives to the military services once counter-IED solutions have been developed, and a lack of clear criteria for defining what counter-IED training initiatives it will fund. 
Additionally, our ongoing work has identified other challenges including a lack of a means to gauge the effectiveness of its counter-IED efforts, a lack of consistent application of its counter-IED initiative acquisition process, and a lack of adequate internal controls required to provide DOD assurance that it is achieving its objectives. I will discuss each of these challenges in more detail. DOD’s ability to manage JIEDDO is hindered by its lack of full visibility over counter-IED initiatives throughout DOD. Although JIEDDO and various service organizations are developing and maintaining their own counter-IED initiative databases, JIEDDO and the services lack a comprehensive database of all existing counter-IED initiatives, which limits their visibility over counter-IED efforts across the department. JIEDDO is required to lead, advocate, and coordinate all DOD actions to defeat IEDs. Also, JIEDDO is required to maintain the current status of program execution, operational fielding, and performance of approved Joint IED Defeat initiatives. Despite the creation of JIEDDO, most of the organizations engaged in the IED defeat effort in existence prior to JIEDDO have continued to develop, maintain, and in many cases, expand their own IED defeat capabilities. For example, the Army continues to address the IED threat through such organizations as the Army’s Training and Doctrine Command, which provides training support and doctrinal formation for counter-IED activities, and the Research, Development & Engineering Command, which conducts counter-IED technology assessments and studies for Army leadership. Furthermore, an Army official stated that the Center for Army Lessons Learned continues to maintain an IED cell to collect and analyze counter-IED information. The Marine Corps’ Training and Education Command and the Marine Corps Center for Lessons Learned have also continued counter-IED efforts beyond the creation of JIEDDO. 
At the interagency level, the Technical Support Working Group continues its research and development of counter-IED technologies. Despite these ongoing efforts and JIEDDO’s mission to coordinate all DOD actions to defeat improvised explosive devices, JIEDDO does not maintain a comprehensive database of all IED defeat initiatives across the department. JIEDDO is currently focusing on developing a management system that will track its initiatives as they move through its own acquisition process. Although this system will help JIEDDO manage its counter-IED initiatives, it will track only JIEDDO-funded initiatives, not those being independently developed and procured by the services and other DOD components. Without incorporating service and other DOD components’ counter-IED initiatives, JIEDDO’s efforts to develop a counter-IED initiative database will not capture all efforts to defeat IEDs throughout DOD. In addition, the services do not have a central source of information for their own counter-IED efforts because there is currently no requirement that each service develop its own comprehensive database of all of its counter-IED initiatives. Without centralized counter-IED initiative databases, the services are limited in their ability to provide JIEDDO with a timely and comprehensive summary of all their existing initiatives. For example, the U.S. Army Research, Development and Engineering Command’s Counter-IED Task Force and the service counter-IED focal points—the Army Asymmetric Warfare Office’s Adaptive Networks, Threats and Solutions Division; and the Marine Corps Warfighting Lab—maintain databases of counter-IED initiatives. However, according to Army and Marine Corps officials, these databases are not comprehensive in covering all efforts within their respective service. Additionally, of these three databases, only the U.S. Army Research, Development and Engineering Command’s database is available for external use. 
Since the services are able to act independently to develop and procure their own counter-IED solutions, several service and Joint officials told us that a centralized counter-IED database would be of great benefit in coordinating and managing the department’s counter-IED programs. Furthermore, although JIEDDO involves the services in its process to select initiatives, the services lack full visibility over those JIEDDO-funded initiatives that bypass JIEDDO’s acquisition process, called the JIEDDO Capability Approval and Acquisition Management Process (JCAAMP). In this process, JIEDDO brings in service representatives to participate on several boards—such as a requirements, resources, and acquisition board—and on various integrated process teams to evaluate counter-IED initiatives. However, in its process to select counter-IED initiatives, JIEDDO has approved some counter-IED initiatives without vetting them through the appropriate service counter-IED focal points, because the process allows JIEDDO to make exceptions if deemed necessary and appropriate. For example, at least three counter-IED training initiatives sponsored by JIEDDO’s counter-IED joint training center were not vetted through the Army Asymmetric Warfare Office’s Adaptive Networks, Threats, and Solutions Branch—the Army’s focal point for its counter-IED effort—before being approved for JIEDDO funding. Service officials have said that not incorporating their views on initiatives limits their visibility of JIEDDO actions and could result in approved initiatives that are inconsistent with service needs. JIEDDO officials acknowledged that while it may be beneficial for some JIEDDO-funded initiatives to bypass its acquisition process in cases where an urgent requirement with limited time to field is identified, these cases do limit service visibility over all JIEDDO-funded initiatives. 
In response to these issues, we recommended in our report that is being issued today that the military services create their own comprehensive IED defeat initiative databases and work with JIEDDO to develop a DOD-wide database for all counter-IED initiatives. In response to this recommendation, DOD concurred and noted steps currently being taken to develop a DOD-wide database of counter-IED initiatives. While we recognize that this ongoing effort is a step in the right direction, these steps did not address the need for the services to develop databases of their initiatives as we also recommended. Until all of the services and other DOD components gain full awareness of their own individual counter-IED efforts and provide this input into a central database, any effort to establish a DOD-wide database of all counter-IED initiatives will be incomplete. We are also recommending that, in cases where initiatives bypass JIEDDO’s rapid acquisition process, JIEDDO develop a mechanism to notify the appropriate service counter-IED focal points of each initiative prior to its funding. In regard to this recommendation, DOD also concurred and noted steps it plans to take such as notifying stakeholders of all JIEDDO efforts or initiatives, whether or not JCAAMP processing is required. We agree that, if implemented, these actions would satisfy our recommendation. Although JIEDDO has recently taken several steps to improve its process to transition IED defeat initiatives to the military services following the development of new capabilities, JIEDDO still faces difficulties in this area. JIEDDO’s transitions of initiatives to the services are hindered by funding gaps between JIEDDO’s transition timeline and DOD’s budget cycle as well as by instances when service requirements are not fully considered during JIEDDO’s acquisition process. JIEDDO obtains funding for its acquisition and development programs through congressional appropriations for overseas contingency operations. 
JIEDDO typically remains responsible for funding counter-IED initiatives until they have been developed, fielded, and tested as proven capabilities. According to DOD’s directive, JIEDDO is then required to develop plans for transitioning proven joint IED defeat initiatives into DOD base budget programs of record for sustainment and further integration into existing service programs once those initiatives have been developed. As described in its instruction, JIEDDO plans to fund initiatives for 2 fiscal years of sustainment. However, service officials have stated that JIEDDO’s 2-year transition timeline may not allow the services enough time to request and receive funding through DOD’s base budgeting process, causing DOD to rely on service overseas contingency operations funding to sustain joint-funded counter-IED initiatives following JIEDDO’s 2-year transition timeline. According to JIEDDO’s latest transition brief for fiscal year 2010, the organization recommended the transfer of 19 initiatives totaling $233 million to the services for funding through overseas contingency operations appropriations and the transition of only 3, totaling $4.5 million, into service base budget programs. The potential need for increased transition funds will continue given the large number of current initiatives funded by JIEDDO. For example, as of March 30, 2009, JIEDDO’s initiative management system listed 497 ongoing initiatives. In addition to the small number of transitions and transfers that have occurred within DOD to date, the services often decide to indefinitely defer assuming funding responsibility for JIEDDO initiatives following JIEDDO’s intended 2-year transition or transfer point. According to JIEDDO’s fiscal year 2011 transition list, the Army and Navy have deferred or rejected the acceptance of 16 initiatives that JIEDDO had recommended for transition or transfer, totaling at least $16 million. 
Deferred or rejected initiatives are either sustained by JIEDDO indefinitely, transitioned or transferred during a future year, or terminated. When the services defer or reject the transition of initiatives, JIEDDO remains responsible for them beyond the intended 2-year transition or transfer point, a delay that could diminish its ability to fund new initiatives and lead to uncertainty about when or if the services will assume funding responsibility in the future. Furthermore, JIEDDO’s initiative transitions are hindered when service requirements are not fully considered during the development and integration of joint-funded counter-IED initiatives, as evidenced by two counter-IED radio jamming systems. In the first example, CENTCOM, whose area of responsibility includes both Iraq and Afghanistan, responded to an urgent operational need by publishing a requirement in 2006 for a man-portable IED jamming system for use in theater. In 2007, JIEDDO funded and delivered to theater a near-term solution to meet this capability gap. However, Army officials stated that the fielded system was underutilized by troops in Iraq, who thought the system was too heavy to carry, especially given the weight of their body armor. Since then, the joint counter-IED radio jamming program board has devised a plan to field a newer man-portable jamming system called CREW 3.1. According to JIEDDO, CREW 3.1 systems were developed by a joint technical requirements board that aimed to balance specific service requirements for man-portable systems. While CENTCOM maintains that CREW 3.1 is a requirement in-theater, and revalidated the need in September 2009, officials from the Army and Marine Corps have both stated that they do not have a formal requirement for the system. Nevertheless, DOD plans to field the equipment to each of the services in response to CENTCOM’s stated operational need. 
It remains unclear, however, which DOD organizations will be required to pay for procurement and sustainment costs for the CREW 3.1, since DOD has yet to identify the source of funding to procure additional quantities. In the second example, Army officials stated that they were not involved to the fullest extent possible in the evaluation and improvement process for a JIEDDO-funded vehicle-mounted jamming system, even though the Army was DOD’s primary user in terms of total number of systems fielded. The system, called the CREW Vehicle Receiver/Jammer (CVRJ), was initiated in response to an urgent warfighter need in November 2006 for a high-powered system to jam radio frequencies used to detonate IEDs. The development of this technology ultimately required at least 20 proposals for configuration changes to correct flaws found in its design after contract award. Two of the changes involved modifying the jammer so it could function properly at high temperatures. Another change was needed to prevent the jammer from interfering with vehicle global positioning systems. Army officials stated that had they had a more direct role on the Navy-led control board that managed configuration changes to the CVRJ, the system may have been more quickly integrated into the Army’s operations. As this transpired, the Army continued to use another jamming system, DUKE, as its principal counter-IED electronic warfare system. Not ensuring that service requirements are fully taken into account when evaluating counter-IED initiatives creates the potential for fielding equipment that is inconsistent with service requirements. This could later delay the transition of JIEDDO-funded initiatives to the services following JIEDDO’s 2-year transition timeline. 
To facilitate the transition of JIEDDO-funded initiatives, our report issued today recommended that the military services work with JIEDDO to develop a comprehensive plan to guide the transition of each JIEDDO-funded initiative, including expected costs, identified funding sources, and a timeline including milestones for inclusion into the DOD base budget cycle. We also recommended that JIEDDO coordinate with the services prior to funding an initiative to ensure that service requirements are fully taken into account when making counter-IED investment decisions. In response to these recommendations, DOD concurred with our recommendation to develop a comprehensive plan and noted steps to be taken to address this issue. DOD partially concurred with our recommendation that JIEDDO coordinate with the services prior to funding an initiative, noting the department’s concern over the need for a rapid response to urgent warfighter needs. While we recognize the need to respond quickly to support warfighter needs, we continue to support our recommendation and reiterate the need for the integration of service requirements and full coordination prior to funding an initiative to ensure that these efforts are fully vetted throughout DOD before significant resources are committed. JIEDDO’s lack of clear criteria for the counter-IED training initiatives it will fund affects its counter-IED training investment decisions. JIEDDO devoted $454 million in fiscal year 2008 to support service counter-IED training requirements through such activities as constructing a network of realistic counter-IED training courses at 57 locations throughout the United States, Europe, and Korea. DOD’s directive defines a counter-IED initiative as a materiel or nonmateriel solution that addresses Joint IED Defeat capability gaps. 
Since our last report on this issue in March 2007, JIEDDO has attempted to clarify what types of counter-IED training it will fund in support of urgent theater counter-IED requirements. In its comments to our previous report, JIEDDO stated that it would fund an urgent theater counter-IED requirement if it “enables training support, including training aids and exercises.” JIEDDO also stated in its comments that it would fund an urgent theater counter-IED requirement only if it has a primary counter-IED application. Although JIEDDO has published criteria for determining what joint counter-IED urgent training requirements to fund and has supported service counter-IED training, it has not developed similar criteria for the funding of joint training initiatives not based on urgent requirements. For example, since fiscal year 2007, JIEDDO has spent $70.7 million on role players in an effort to simulate Iraqi social, political, and religious groups at DOD’s training centers. JIEDDO also spent $24.1 million on simulated villages at DOD’s training centers in an effort to make steel shipping containers resemble Iraqi buildings. According to Army officials, these role players and simulated villages funded by JIEDDO to support counter-IED training are also utilized in training not related to countering IEDs. As a result, JIEDDO has funded training initiatives that may have primary uses other than defeating IEDs, such as role players and simulated villages to replicate Iraqi conditions at various service combat training centers. Without criteria specifying which counter-IED training initiatives it will fund, JIEDDO may diminish its ability to fund future initiatives more directly related to the counter-IED mission. The lack of criteria could also hinder DOD’s coordination in managing its resources, as decision makers at both the joint and service levels operate under unclear guidelines for which types of training initiatives should be funded and by whom. 
We have therefore recommended in the report being issued today that JIEDDO evaluate counter-IED training initiatives using the same criteria it uses to evaluate theater-based joint counter-IED urgent requirements, and incorporate this new guidance into an instruction. In commenting on our recommendation, DOD partially concurred, noting that JIEDDO’s JCAAMP and the development of new DOD-wide guidance would address the issues we note in our report. While we recognize the steps taken by DOD to identify counter-IED training gaps and guide counter-IED training, these actions do not establish criteria by which JIEDDO will fund counter-IED training. JIEDDO has not yet developed a means for reliably measuring the effectiveness of its efforts and investments in combating IEDs. OMB Circular A-11 notes that performance goals and measures are important components of a strategic plan and that it is essential to assess actual performance based on these goals and measures. JIEDDO officials attribute the difficulty of determining the effectiveness of its initiatives to the challenge of isolating their effect on key IED threat indicators from the effects of other activities occurring in-theater at the same time, such as a surge in troops, changes in the equipment used by coalition forces, local observation of holidays, or changes in weather such as intense dust storms, any of which may cause a decrease in the number of IED incidents. JIEDDO has pursued performance measures since its inception to gauge whether its initiatives and internal operations and activities are operating effectively and efficiently and achieving desired results. In December 2008, JIEDDO published a set of 78 specific performance measures for its organization. 
The list included, for example, metrics to evaluate JIEDDO’s response time in satisfying urgent theater requirements, the quality and relevance of counter-IED proposals JIEDDO solicits and receives in response to its solicitations, and the ratio of initiatives for which JIEDDO completes operational assessments. However, JIEDDO has not yet established baselines for these measures or specific goals and time frames for collecting, measuring, and analyzing the relevant data. Further, we have found several limitations with the data JIEDDO collects and relies upon to evaluate its performance. Our ongoing work has identified three areas in which the data JIEDDO uses to measure effectiveness and progress is unreliable or is inconsistently collected. First, data on effectiveness of initiatives based on feedback from warfighters in-theater is not consistently collected because JIEDDO does not routinely establish data-collection mechanisms or processes to obtain useful, relevant information needed to adequately assess the effectiveness of its initiatives. JIEDDO officials also said that data collection from soldiers operating in-theater is limited because the process of providing feedback may detract from higher priorities for warfighters. In response to this data shortfall, JIEDDO managers began an initiative in fiscal year 2009 to embed JIEDDO-funded teams within each brigade combat team to provide JIEDDO with an in-theater ability to collect needed data for evaluating initiatives. However, because this effort is just beginning, JIEDDO officials stated that they have not yet been able to assess its effectiveness. Second, data on the management of individual initiatives, such as data recording activities that take place throughout the development of an initiative, are not consistently recorded and maintained at JIEDDO. Officials attribute the poor data quality to the limited amount of time that JIEDDO staff are able to spend on this activity. 
JIEDDO staff are aware that documentation of management actions is needed to conduct counter-IED initiative evaluations and told us that they plan to make improvements. However, needed changes—such as routinely recording discussions, analysis, determinations, and findings occurring in key meetings involving JIEDDO and external parties and coding their activities in more detail to allow differentiation and deeper analysis of activities and initiatives—are yet to be developed and implemented. Third, JIEDDO does not collect or fully analyze data on unexpected outcomes, such as initiatives that may result in an increase in the occurrence or lethality of IEDs. However, we believe that such data can provide useful information that can be used to improve initiatives. For example, in response to a general officer request in Iraq, the Institute for Defense Analysis collected and analyzed IED incident data before and after a certain initiative to determine its effect on the rate of IED incidents. JIEDDO officials intended the initiative in question to result in the reduction in IED attacks. However, the data collected contradicted the intended result because the number of IED incidents increased in areas where the initiative was implemented. These data could provide lessons learned to fix the initiative or take another approach. We expect to provide further information and recommendations, if appropriate, on JIEDDO’s efforts to gauge the effectiveness of its counter-IED efforts—including issues involving data collection and reliability—in the report we will be issuing in early 2010. Although JIEDDO has established JCAAMP as its process to review and approve proposals for counter-IED initiatives, JIEDDO excludes some initiatives from that process. JCAAMP was established in response to DOD’s directive, which stated that all of JIEDDO’s initiatives are to go through a review and approval process. 
This requirement is consistent with government internal control standards, which identify properly segregating key duties and responsibilities—including responsibility for authorizing and processing transactions—as a fundamental control activity. In reviewing 56 initiatives for case studies, we found that JIEDDO excluded 26 of the 56 counter-IED initiatives from JCAAMP. For example, JIEDDO excluded one initiative to enhance the counter-IED training experience by funding role players who are to help create a realistic war environment. However, another initiative with a similar purpose and objective was included in the JCAAMP process. As a result, when initiatives are excluded from JCAAMP, internal and external stakeholders do not have the opportunity to review, comment on, and potentially change the course of the initiative in coordination with competing or complementary efforts. Additionally, although JIEDDO officials stated that the remaining 30 of the 56 initiatives we reviewed went through JCAAMP, we found that 22 of those 30 initiatives did not comply with some of the steps required by applicable DOD guidance. Applicable guidance includes JIEDDO’s directive, instruction, and standard operating procedures, which together identify a set of decision points and actions collectively intended to control JIEDDO’s use of resources. For example, we found that, for 16 of the 22 initiatives, JIEDDO released funding to the services without obtaining required funding approval from either the Deputy Secretary of Defense—as is required for initiatives over $25 million—or the JIEDDO Director, for initiatives up to $25 million. The exclusion of initiatives from JCAAMP, coupled with noncompliance with steps of the process required by applicable guidance, reduces transparency and accountability of JIEDDO’s actions within JIEDDO, as well as to the Deputy Secretary of Defense, the services, and other DOD components. 
Without management oversight at important milestones in the approval and acquisition process, some funds appropriated for JIEDDO may be used to support efforts that do not clearly advance the goal of countering IEDs. According to JIEDDO officials, systematic compliance with its process and documentation has been a weakness that JIEDDO has attempted to correct, and it continues to pursue improvements in this regard. During the course of our work, officials from different JIEDDO divisions— including its accounting and budgeting, acquisition oversight, and internal review divisions—said they saw significant improvement in discipline and compliance with JIEDDO’s process for managing counter-IED initiatives beginning in the last quarter of fiscal year 2009. As JIEDDO officials point out, the improvements they cite have occurred relatively recently and have not had time to demonstrate their full effect. Nonetheless, the findings in our ongoing review, and in prior GAO reports, confirm that JIEDDO has not had a systematic process in place to manage or document its activities and operations for the majority of its operating life. In the report we plan to issue in early 2010, we will present a more detailed assessment of JIEDDO’s review and approval process and will make recommendations as appropriate. While JIEDDO has affirmed the importance of addressing shortcomings in its internal control system and is taking action to this end, it still lacks adequate internal controls to ensure that it is achieving its objectives. An adequate system of internal controls supports performance-based management with the procedures, plans, and methods to meet the agency’s missions, goals, and objectives. Internal controls serve as the first line of defense in safeguarding assets and preventing and detecting errors and fraud, and they help program managers achieve desired results through effective stewardship of public resources. 
However, in July 2009, JIEDDO reported to the OSD Comptroller that a material weakness exists in JIEDDO’s internal control system and has existed since the organization was established in January 2006. OMB defines a material weakness as a deficiency or combination of deficiencies that could adversely affect the organization’s ability to meet its objectives and that the agency head determines to be significant enough to be reported outside the agency. For example, in our ongoing work we have identified, and JIEDDO officials have confirmed, that JIEDDO’s internal control system has not: (1) provided for the identification and analysis of the risks JIEDDO faces in achieving its objectives from both external and internal sources; and (2) assessed its performance over time and ensured that the findings of audits and other reviews have been promptly resolved. Consequently, JIEDDO has not developed a set of control activities that ensure its directives—and ultimately its objectives—are carried out effectively. Without assurance from JIEDDO that it has identified and addressed its control weaknesses, OSD cannot monitor JIEDDO’s progress and effectiveness and therefore cannot detect the extent of JIEDDO’s weaknesses. Given the longstanding weaknesses in JIEDDO’s system of internal controls, it is unable to assure the DOD Comptroller that the program is achieving its objectives. The DOD Comptroller is responsible for the development and oversight of DOD’s internal control program. DOD Comptroller officials told us that, in carrying out these responsibilities, they relied solely on JIEDDO to internally develop and implement effective internal control systems that address key program performance risks and monitor effectiveness and compliance, and to report deficiencies or weaknesses in its internal control system through a report called the annual assurance statement, which is provided each year to the OSD Office of the Director of Administration and Management. 
DOD uses additional techniques in its general oversight of JIEDDO, such as the Deputy Secretary of Defense’s review and approval of certain high-dollar counter-IED initiatives. However, JIEDDO’s annual assurance statement is the key mechanism DOD relies upon to comprehensively and uniformly summarize and monitor internal control system status within its organizations—including JIEDDO—and, more importantly, to report and elevate unresolved deficiencies to higher levels within and outside of DOD for awareness and action. Even so, DOD’s limited oversight system for JIEDDO has not fully addressed control weaknesses present at JIEDDO since its first year of operation. Further, JIEDDO did not detail these control weaknesses in either of its first two annual statements of assurance in 2007 and 2008 or in its third and most recent statement of assurance completed in July 2009. The 2009 assurance statement established a 3-year timeline with incremental milestones to develop and implement a complete internal management control program by the end of fiscal year 2012. In the report we plan to issue in early 2010, we will present a fuller assessment of JIEDDO’s management control processes and will make recommendations as appropriate. In conclusion, Mr. Chairman, while JIEDDO has taken important steps to improve its management of DOD’s counter-IED efforts, DOD continues to face a number of challenges in its effort to gain full visibility over all counter-IED activities, coordinate the transition of JIEDDO initiatives, and clearly define the types of training initiatives it will fund. Additionally, JIEDDO’s approval process for counter-IED initiatives poses significant challenges to its ability to provide full transparency and accountability over its operations. 
All of these challenges highlight the need for DOD to evaluate the effectiveness of its current oversight of all counter-IED efforts across the department, yet the consistent collection of reliable performance data remains one of JIEDDO’s greatest challenges. With improved internal controls, JIEDDO will be in a better position to ensure that it is in compliance with applicable law and that its resources are safeguarded against waste. If these issues are not resolved, DOD’s various efforts to counter IEDs, including JIEDDO, face the potential for duplication of effort, unaddressed capability gaps, integration issues, and inefficient use of resources in an already fiscally challenged environment, and the department will lack a basis for confidence that it has retained the necessary capabilities to address the IED threat for the long term. Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions you or members of the subcommittee may have at this time. For future questions about this statement, please contact me on (202) 512-8365 or SolisW@GAO.gov. Individuals making key contributions to this statement include Cary Russell, Grace Coleman, Kevin Craw, Susan Ditto, William Horton, Richard Powelson, Tristan To, Yong Song, and John Strong. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Improvised explosive devices (IED) are the number-one threat to troops in Iraq and Afghanistan, accounting for almost 40 percent of the attacks on coalition forces in Iraq. Although insurgents' use of IEDs in Iraq has begun to decline, in Afghanistan the number of IED incidents has significantly increased. The Joint IED Defeat Organization (JIEDDO) was created to lead, advocate, and coordinate all DOD efforts to defeat IEDs. Its primary role is to provide funding to the military services and DOD agencies to rapidly develop and field counter-IED solutions. Through fiscal year 2009, Congress has appropriated over $16 billion to JIEDDO. In addition, other DOD components, including the military services, have devoted at least $1.5 billion to the counter-IED effort--which does not include $22.7 billion for Mine Resistant Ambush Protected vehicles. This testimony is based on a report that GAO is issuing today as well as preliminary observations from ongoing work that GAO plans to report in early 2010. In the report being issued today, GAO is recommending that JIEDDO (1) improve its visibility of counter-IED efforts across DOD, (2) develop a complete plan to guide the transition of initiatives, and (3) define criteria for its training initiatives to help guide its funding decisions. DOD generally concurred with GAO's recommendations and noted actions to be taken. Since its creation, JIEDDO has taken several steps to improve its management of counter-IED efforts. For instance, GAO's ongoing work has found that JIEDDO has been improving the management of its efforts to defeat IEDs, including developing and implementing a strategic plan that provides an overarching framework for departmentwide efforts to defeat IEDs, as well as a JIEDDO-specific strategic plan. 
Also, as noted in the report GAO is issuing today, JIEDDO and the services have taken steps to improve visibility over their counter-IED efforts, and JIEDDO has taken several steps to support the ability of the services and defense agencies to program and fund counter-IED initiatives. However, several significant challenges remain that affect DOD's ability to oversee JIEDDO. Some of these challenges are identified in GAO's report being released today along with recommendations to address them. For example, one challenge is a lack of full visibility by JIEDDO and the services over counter-IED initiatives throughout DOD. Although JIEDDO and various service organizations are developing and maintaining their own counter-IED initiative databases, JIEDDO and the services lack a comprehensive database of all existing counter-IED initiatives, which limits their visibility over counter-IED efforts across the department. In addition, JIEDDO faces difficulties coordinating the transition of funding responsibility for joint counter-IED initiatives to the services, due to gaps between JIEDDO's transition timeline and DOD's base budget cycle. JIEDDO's initiative transitions also are hindered when service requirements are not fully considered during JIEDDO's acquisition process. JIEDDO also lacks clear criteria for defining what counter-IED training initiatives it will fund and, as a result, has funded training activities that may have primary uses other than defeating IEDs. Additionally, GAO's ongoing work has identified other oversight challenges. For example, JIEDDO lacks a means as well as reliable data to gauge the effectiveness of its counter-IED efforts. GAO's work has identified several areas in which data on the effectiveness and progress of IED-defeat initiatives are unreliable or inconsistently collected. In some cases, data are not collected in-theater because the initiatives may not be designed with adequate data-collection procedures. 
Another challenge facing JIEDDO is its inconsistent application of its counter-IED initiative acquisition process, which allows initiatives to bypass some or all of the process's key review and approval steps. Further, JIEDDO lacks adequate internal controls to provide DOD with assurance that it is achieving its objectives. For example, in July 2009, JIEDDO reported that its internal controls system had a combination of deficiencies that constituted a material weakness. Such a weakness could adversely affect JIEDDO's ability to meet its objectives. Finally, JIEDDO has not developed a process for identifying and analyzing the risks it faces in achieving its objectives from both external and internal sources, and it has not assessed its performance over time or ensured that the findings of audits and other reviews have been promptly resolved. As GAO completes its ongoing work, it expects to issue a report with recommendations to address these issues.
The District of Columbia government, acting through the Mayor, the District’s Redevelopment Land Agency (RLA), and the District of Columbia Arena, L.P. (DCALP)—a limited partnership formed by the owner of the Washington Wizards and the Washington Capitals—agreed that DCALP would build a sports arena (estimated to cost about $175 million) and that the District would be responsible for financing certain predevelopment costs. The District agreed to be responsible for the predevelopment costs of: acquiring land, including the purchase of property not then owned by the District; connecting the Gallery Place Metrorail station to the sports arena; relocating District employees from two buildings on the site to other locations; and demolishing buildings, remediating soil, relocating utilities, and securing all regulatory approvals necessary for construction of the sports arena. The Omnibus Budget Support Act of 1994 (Arena Tax Act), as amended, provides for a Public Safety Fee (Arena Tax) to be levied on businesses located in the District based upon the annual gross receipts of such businesses. The Arena Tax is due on or before June 15 of each year. The Arena Tax Act requires the Mayor to raise the Arena Tax rates to provide for annual revenues of $9 million if the Arena Tax revenues are estimated to be less than $9 million. The Arena Tax Act also authorized RLA to pledge the Arena Tax as security to repay loans to finance predevelopment activities. The Arena Tax was first levied in fiscal year 1995 and was used mostly to fund predevelopment activities. In subsequent years, the Arena Tax was used to pay principal and interest (debt service) on the bonds as required by the bond resolution. To initially finance the predevelopment costs of the sports arena, $2.5 million was advanced by the District’s Sports Commission. The funds were provided with the understanding that they would be repaid from the proceeds of a loan the District would secure. 
In August 1995, the District received a $53 million loan commitment (line of credit) from a consortium of banks. In January 1996, RLA issued about $60 million in revenue bonds backed by the Arena Tax and paid off the $36.6 million portion of the line of credit used. The funds originally available to pay the arena’s net predevelopment costs and to establish a debt service reserve totaled $66.6 million. These funds consisted of (1) $57.4 million in net bond proceeds from the sale of RLA Revenue Bonds in January 1996 and (2) about $9.2 million in 1995 net tax collections from the dedicated Arena Tax. Of the $66.6 million then available, $11 million was placed in two reserves. A mandatory $5 million capital reserve, which was required by the bond resolution, was established to pay for any insufficiency in the project fund. A reserve of about $6 million was established for debt service. Our objectives were to determine the status of the sports arena project’s (1) predevelopment costs, (2) revenue collections, and (3) bond redemption status. To determine the status of expenditures for predevelopment activities for the sports arena, we interviewed District officials on the Arena Task Force, the District’s Sports Commission and Corporation Counsel, and the D.C. Office of Treasury. We also held discussions with trustees for the bonds. We discussed the construction costs of the arena and Metrorail connection with officials from DCALP and WMATA. We reviewed all expenditures made since the period covered by our last report, from October 8, 1997, to April 30, 1998. Payments were made from the funds obtained from the net proceeds of the bond sale. The universe of payments included 10 expenditure items, which, at the time of our audit, represented 100 percent of the total funds spent in the review period. 
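As a cross-check, the funding and reserve figures cited above can be verified with a few lines of arithmetic. This is an illustrative sketch using the rounded dollar amounts, in millions, stated in this letter:

```python
# Cross-check of the predevelopment funding figures (in millions of dollars),
# using the rounded amounts reported in this letter.
net_bond_proceeds = 57.4    # net proceeds of the January 1996 RLA bond sale
net_arena_tax_1995 = 9.2    # 1995 net collections from the dedicated Arena Tax
total_available = net_bond_proceeds + net_arena_tax_1995   # $66.6 million

capital_reserve = 5.0       # mandatory reserve required by the bond resolution
debt_service_reserve = 6.0  # reserve established for debt service
total_reserves = capital_reserve + debt_service_reserve    # $11 million in reserves

print(f"Available for predevelopment costs: ${total_available:.1f} million")
print(f"Placed in reserves: ${total_reserves:.1f} million")
```

The two sources sum to the $66.6 million reported as available, of which the two reserves account for $11 million.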
We reviewed each expenditure item to determine whether it was made within the terms of the contract or invoice amounts, whether it had been approved for payment by a District official, and whether the funds had actually been disbursed. We did not audit the reported taxes collected and deposited for the sports arena project. Therefore, we did not determine if the District government accurately identified the universe of taxpayers or reported all dedicated taxes for this project. However, we reviewed monthly statements provided by the lockbox trustee to determine the amount of taxes collected and placed in escrow. In addition, we confirmed that all payments due to the District from the ground lease had been made. This review provides an update on the previous work we performed. Our procedures were performed between March 1998 and June 1998 in accordance with generally accepted government auditing standards. Since our last report, predevelopment costs have increased from $58.6 million to about $61.5 million, a net increase of about $2.9 million (5 percent). The District’s predevelopment activities consisted of acquiring land, constructing the Metrorail connection, relocating District employees, demolishing two buildings, remediating soil, relocating utilities, and using consultants to secure regulatory approvals. The District has completed almost all of its predevelopment activities and has spent $60 million, about 98 percent of the estimated total expenditures. Table 1 shows the District’s total predevelopment activities financed for the sports arena project. As shown in table 1, land acquisition represents the largest increase in predevelopment expenditures. In order to assemble the arena site, the District acquired two pieces of property. At the time of our last report, November 1997, the price for one of the pieces of property had not been determined. On April 29, 1998, the District reached an out-of-court settlement with the owners to pay $8.2 million for the land. 
The price of the land was $2.9 million more than the $5.25 million the District had originally deposited with the D.C. Superior Court in invoking its power of eminent domain. To pay the additional $2.9 million, the District used funds from a grant made by the Department of Housing and Urban Development (HUD). Under HUD’s Community Development Block Grant (CDBG) program, the acquisition of property is a permitted use of grant funds. On March 6, 1998, the District received permission from HUD to use CDBG funds to acquire the property. The Metrorail connection to the sports arena has been completed. As of April 21, 1998, of the $19 million budget, $16.7 million had been approved for expenditure and $18 million had been obligated. WMATA officials informed us that they expect the project to be closed out (all bills reviewed and approved for payment) by September 1998. It is their expectation that after the project is closed out, there will be a residual of about $285,000 from the funds associated with the Department of Transportation (DOT) Capital Assistance Grant. According to WMATA officials, any residual balance must be used on a transportation-related project. One District official told us that he expects the District to use these funds to defray the cost of design work on a Metrorail connection to the proposed new convention center. As shown in table 1, expenditures for the relocation of utilities have increased from the projected $3.4 million reported in our November 1997 report to about $3.5 million. The increase is attributable to a negotiated settlement between the District and the developer—DCALP—over the cost of infrastructure improvements to the site. In a letter dated October 6, 1997, DCALP cited seven infrastructure improvements it had made to the site, at a cost of $403,000, for which it claimed the District was responsible under the terms of the Exclusive Development Rights Agreement (EDRA). 
As part of its March 10, 1998, settlement, the District obtained a legal agreement intended to preclude the developer from prevailing in any future claims regarding the seven infrastructure improvements. Not all activities associated with soil remediation efforts have been completed. The District’s project manager for the sports arena is still including in the expenditures an estimated $700,000 for the removal of concrete structures below the surface and contaminated soil on a parcel of land transferred to WMATA. District officials told us that they have budgeted sufficient funds to remove the concrete structures and cover the cost of remediation. They stated that, based on tests done at the site, only limited amounts of the soil are contaminated. The project manager of the arena task force stated that the District has not contracted for the removal of the concrete because WMATA has not made a decision regarding the land’s use. The District’s Office of Corporation Counsel is actively pursuing its legal options for recouping the District’s cost for soil remediation and other related costs for the arena site. This office has obtained the assistance of a private law firm and an environmental study firm—both on a pro bono basis—to assist the city in its efforts to recover the District’s costs. The two firms have identified approximately 50 potential sources of the contaminants. The Corporation Counsel is currently assessing the potential liability of each of these sources and their ability to make restitution. Table 2 shows total receipts of about $65 million available as of April 30, 1998, to fund predevelopment costs. Revenues have increased from our November 1997 report mostly as a result of allocating a portion ($2.9 million) of the District’s CDBG grant funds received from HUD to pay for the increased price of the land the District acquired. 
Through April 30, 1998, the District had earned about $1.5 million in interest from the funds available to pay predevelopment costs. In our last report, we stated that all of the leasehold improvement costs associated with the relocated employees should have been paid from the District’s sports arena project fund rather than from the District’s appropriated funds because this activity was an allowable cost for the sports arena project. The project manager of the arena task force contends that $371,530 should be borne by the District since it was not factored into the original predevelopment activities budget. However, we have excluded the $371,530 reimbursement to the District because these expenses were precipitated by the relocation undertaken to allow arena construction and therefore should be borne by the sports arena project. We had informed the District’s former Chief Financial Officer of this matter, and he had agreed to recoup the money from the sports arena project fund. As of June 30, 1998, the funds had not yet been returned to the District’s General Fund. As of April 30, 1998, collections for the 1997 Arena Tax had totaled about $9.6 million, about the same as the 1995 and 1996 collections of $9.3 million and $9.6 million, respectively. These funds were sufficient to meet 1997 principal and interest payments (about $5.9 million annually) on the bonds issued to finance the predevelopment expenses. The District forecasts Arena Tax collections of $9 million for each year that the bonds are outstanding. Since 1995, the trustees for the lockbox have reported that a total of $28.6 million has been collected, exceeding the forecast of $27 million by $1.6 million. As was done in previous years, taxpayers were instructed to send their payments to a lockbox under the control of bank trustees. We verified that these funds were transferred to the trustee for the bonds and placed in accounts for principal and interest payments. 
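The Arena Tax figures above can be tallied the same way. Note that the rounded yearly amounts sum to slightly less than the $28.6 million total reported by the lockbox trustees, because the yearly figures in this letter are rounded to the nearest $0.1 million:

```python
# Dedicated Arena Tax collections (in millions), as reported above.
collections = {1995: 9.3, 1996: 9.6, 1997: 9.6}  # rounded yearly figures
reported_total = 28.6    # total reported by the lockbox trustees since 1995
annual_forecast = 9.0    # the District's forecast for each year

total_forecast = annual_forecast * len(collections)     # $27.0 million
excess_over_forecast = reported_total - total_forecast  # $1.6 million

print(f"Sum of rounded yearly figures: ${sum(collections.values()):.1f} million")
print(f"Collections exceeded forecast by: ${excess_over_forecast:.1f} million")
```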
Our analysis shows that if the present level of Arena Tax collections continues into the future, and if revenues from the ground lease of the arena and the $6 million in the debt service reserve, including interest earnings, are used, the sports arena bonds could be paid off in 2002, well before the 2010 maturity date of the longest-term bonds. The combined total of $19.3 million in dedicated tax revenues collected for 1996 and 1997 is being used to pay principal and interest on the bonds. The District’s Sports Arena Special Tax Revenue Bonds include about $15.4 million in serial bonds, which have maturity dates from 1996 to 2000, and $44.5 million of term bonds with a stated maturity date of 2010 and mandatory sinking fund redemptions in the years 2001 through 2009. As of April 30, 1998, the bond trustees had paid out $14.2 million in principal and interest payments. Approximately $5.7 million had been paid in interest, and $6 million of the serial bonds and $2.5 million of the term bonds had been redeemed. The remaining $5.1 million of Arena Tax collections was held in the debt service reserve funds (see next section). The bond resolution requires that any additional tax collected over the amount needed to pay debt service on bonds be placed in a super sinker fund and be used to redeem term bonds earlier than their due dates. The serial bonds cannot be redeemed earlier than their stated maturity dates. Table 3 shows our analysis of when the Arena Tax bonds can be fully paid off. Our analysis, which assumes similar future collections of dedicated tax revenues, annual ground lease revenue from DCALP, use of outstanding debt service reserve funds (plus interest), and no recession or cyclical downturn in the local economy, shows that the bonds could be paid off in the year 2002, or 8 years before the last scheduled maturity date. Upon redemption of all bonds in 2002, excess funds of $7.7 million would be transferred to the District General Fund. 
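The mechanics of this early-payoff analysis can be illustrated with a deliberately simplified cash-flow model. The 5.5 percent average coupon below is an assumed figure, not one drawn from this letter, and the sketch ignores the semiannual payment schedule and the interest earned on reserve balances that the fuller table 3 analysis credits; as a result it retires the bonds within a year or two of the 2002 date shown in table 3, depending on the assumed coupon:

```python
# Simplified sketch of the early bond payoff: each year, Arena Tax and
# ground lease revenues first cover interest due, the excess retires term
# bonds, and the debt service reserve covers the final balance.
def payoff_year(principal, coupon, annual_revenue, reserve, start_year=1998):
    """Return the approximate year in which the bonds are fully retired."""
    year = start_year
    while principal > reserve:
        interest = principal * coupon            # interest due on outstanding bonds
        principal -= annual_revenue - interest   # excess revenue redeems principal
        year += 1
    return year  # the reserve covers the balance remaining in this year

# Approx. $51.4 million outstanding as of April 30, 1998 ($59.9 million
# issued less $8.5 million redeemed); 0.055 is an ASSUMED average coupon;
# revenue is ~$9.6 million in Arena Tax plus ~$0.3 million in ground lease
# payments; $6 million debt service reserve applied at the end.
print(payoff_year(principal=51.4, coupon=0.055, annual_revenue=9.9, reserve=6.0))
```

Under these assumptions the model confirms the basic point of table 3: level collections of roughly $9.9 million a year against $5.9 million in scheduled debt service retire the bonds many years ahead of the 2010 maturity date.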
This scenario would save about $16.4 million in interest payments (see table 4). Once the arena tax bonds are repaid, the authorizing legislation calls for the dedicated taxes to be eliminated. The bond resolution requires that early redemption of term bonds occur on an interest payment date—either May 1 or November 1 of each year—from excess revenues on deposit in the Redemption Account of the Debt Service Fund. On the interest payment date of November 1, 1997, no term bonds were redeemed even though $2.8 million was available in the bond redemption account. We questioned the bond trustee as to why additional term bonds were not redeemed on November 1, 1997. Her response was that, because of a change in personnel, the early redemption of term bonds had been overlooked. This missed opportunity to redeem term bonds prior to their maturity date could have caused the District to incur additional interest expense for that period. However, the excess funds were deposited in an interest-bearing account, and the interest earned on the funds available for bond redemption was substantially the same as the average interest rate on the bonds that would have been redeemed; accordingly, the District did not incur any losses. On April 3, 1998, the District received its first quarterly payment of $76,644 under the yearly ground lease for the arena site. These funds, as stipulated in the bond resolution, were placed in an account established to redeem bonds prior to their maturity date. The ground lease payments, along with any excess funds in the project, capital reserve, and debt reserve accounts, are to be used for early bond redemption. We requested comments on a draft of this letter from the Mayor of the District of Columbia. The Mayor concurred with the information presented (see appendix I) and also provided, under separate cover, some technical suggestions that we have incorporated as appropriate to clarify the report. 
We are sending copies of this report to the Ranking Minority Member of your Subcommittee and to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations and their Subcommittees on the District of Columbia and the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs. Major contributors to this report are listed in appendix II. If you or your staff need further information, please contact me at (202) 512-4476. Richard T. Cambosos, Senior Attorney
Pursuant to a congressional request, GAO reviewed the progress of the sports arena project in the District of Columbia, focusing on the project's predevelopment costs, revenue collections, and bond redemption status. GAO noted that: (1) the District has spent $60 million, about 98 percent of the estimated total cost of predevelopment activities, for the sports arena; (2) as of April 30, 1998, the District estimated total predevelopment costs to be about $61.5 million, a net increase of about $2.9 million over its October 7, 1997, estimate, as reported in GAO's November 1997 report; (3) the increase is largely due to the final agreed-upon price the District paid for a parcel of land included in the arena site; (4) the only known expense not under contract or agreement is the District cost for soil remediation and the removal of concrete structures below the surface for a parcel of land transferred to the Washington Metropolitan Area Transit Authority; (5) the District's project manager for the sports arena has budgeted $700,000 for this activity, which is included in the total estimated cost; (6) the District's $5 million in remaining available funds for predevelopment costs for the sports arena appears to be sufficient to meet all estimated remaining expenditures; (7) as of April 30, 1998, the District had spent about $60 million and an additional $1.5 million was budgeted for the remaining predevelopment activities that will soon be completed, leaving approximately $3.5 million to pay unanticipated expenses or to redeem term bonds prior to their redemption dates; (8) collections from the dedicated arena tax have been more than sufficient to pay principal and interest of about $5.9 million annually on the bonds issued to finance the predevelopment expenses; (9) for each of the past 3 years, collections have exceeded the $9 million originally forecast by the District, totaling about $1.6 million more than the District's forecast for the 3-year period; (10) as of 
April 30, 1998, the District had redeemed $6 million of the serial bonds and $2.5 million of the term bonds issued to finance the predevelopment expenses prior to their maturity date; (11) GAO's analysis shows that if the present level of collections are sustained, and revenues from the ground lease of the sports arena and the existing debt service reserve funds are used, all of the arena bonds would be paid by 2002, about 8 years before the 2010 maturity date; and (12) this redemption schedule would save the District about $16.4 million in interest costs, and allow about $7.7 million to be transferred to the District's General Fund.
No country has yet developed a geologic repository for the permanent disposal of highly radioactive waste. Because this type of nuclear waste produces relatively intense levels of radiation for thousands of years, developing an acceptably safe repository is a complex task involving diverse scientific and technical challenges. For example, DOE must design a repository that is compatible with the site and will be safe to operate for several decades. In addition, the Department must demonstrate how the combination of geologic (natural) and engineered (man-made) barriers to the migration of waste from the repository will operate effectively. Inherent in this demonstration are numerous uncertainties related to understanding and predicting how a repository will perform over a very long period of time. Finally, safety standards for evaluating a proposed repository that recognize the inherent uncertainty in the repository’s performance must be established. In the Nuclear Waste Policy Act of 1982, the Congress found that federal efforts during the previous 30 years to devise a permanent solution to the problems of disposing of radioactive waste had not been adequate. The act established, among other things, federal policy and responsibility for the safe management and disposal of highly radioactive waste from civilian nuclear power plants. The act charged DOE with selecting and investigating candidate sites for two repositories, recommending the selection of two sites for development, and constructing and operating one repository. DOE was required to establish guidelines for selecting and recommending repository sites that made specified geologic considerations the primary criteria. 
To ensure the safe management and disposal of waste for current and future generations, the act also required EPA to set environmental standards for the disposal of waste in repositories and NRC to establish regulations containing technical requirements and criteria for approving or disapproving DOE’s applications to construct and operate repositories. Amendments to the act in 1987 directed DOE to investigate only the Yucca Mountain site. In addition, the Energy Policy Act of 1992 required EPA to adopt specific public health and safety standards for that site on the basis of, and consistent with, a study of the scientific basis for such standards to be issued by the National Academy of Sciences, at EPA’s request, by the end of 1993. The 1992 act also required that, within 1 year after EPA adopted its standards for Yucca Mountain, NRC had to make its licensing regulations consistent with the standards. When the Congress passed the nuclear waste act, it expected that a repository could be operable by 1998. Subsequently, however, DOE extended the estimated date for a repository to 2003 and then to 2010. In the meantime, nuclear waste is accumulating and being stored at civilian nuclear power plants. The growing concern about the delay in beginning to remove nuclear waste from nuclear plant sites is reflected by a recent lawsuit and congressional consideration of legislation. In July 1996, the U.S. Court of Appeals for the District of Columbia Circuit ruled that the nuclear waste act creates an obligation for DOE to start disposing of utilities’ waste no later than January 31, 1998, and remanded the case for further proceedings. That same month, the Senate passed a bill (S. 1936) that, among other things, would have directed DOE to develop a facility for the interim storage of utilities’ waste on DOE’s Nevada Test Site. (A portion of Yucca Mountain lies within the western boundary of the Nevada Test Site.) 
Similar legislation was under consideration in the House of Representatives when the 104th Congress adjourned. Following the appropriation for fiscal year 1996, DOE (1) curtailed most investigative activities at Yucca Mountain, (2) decided to revise its guidelines for determining if the site is suitable for a repository, and (3) announced that it would assess, in 1998, the “viability” of a repository at Yucca Mountain. DOE anticipates that these changes could enable it to submit a license application to NRC in March 2002 at an affordable cost. (See fig. 1.) During fiscal year 1996, DOE curtailed, for the second consecutive year, the scope of its investigation of Yucca Mountain. In January 1992, DOE had estimated that it would cost $6.3 billion through 2001 to investigate the site and prepare a license application. As we reported in 1993, however, the budget requests and allotments of appropriations for the repository project from fiscal years 1991 through 1993 were less than the estimated funding requirements. (See table 1.) Therefore, in December 1994, DOE announced a plan to reorganize the investigation around tests to determine if the site is suitable for a repository, tests to support a license application, and tests that could be deferred until after the application had been submitted to NRC. According to DOE, the plan identified an aggressive field program, including drilling about 25 to 30 deep boreholes, and tests to be conducted from the surface of the site, in laboratories, and in the underground exploratory studies facility. This facility, which DOE expects to complete in 1997, is a U-shaped, 5-mile underground tunnel through Yucca Mountain. (See fig. 2.) DOE estimated that the reorganized investigation would cost about $2.9 billion for the 6 fiscal years from 1995 through 2000. 
Shortly after DOE received its appropriations for fiscal year 1996, it further reduced the scope of the investigation and eliminated about 875 positions for contract employees in the repository project. In addition, DOE reduced funding for waste storage, transportation, and program management activities by $82 million and eliminated more than 200 related positions for contract employees. The revised investigation is now focused on completing the viability assessment and, according to DOE, was developed using the following priorities, in descending order of importance: (1) synthesis and modeling of available information to focus testing programs on key uncertainties, (2) testing in the exploratory studies facility, and (3) surface testing, such as using existing and new wells drilled into the groundwater beneath the site to test the groundwater’s characteristics. DOE estimated the cost of the revised investigation at about $2.1 billion for the 7 fiscal years from 1996 through 2002. Fundamental to the success of DOE’s revised approach for the repository project is its decision to revise its guidelines for determining if Yucca Mountain is a suitable site for a repository. DOE’s existing siting guidelines address the operation of a repository before it is permanently closed (preclosure guidelines) and the long-term behavior of the repository after it is closed (postclosure guidelines). For both areas, the guidelines are divided into “system” and “technical” guidelines. For example, the postclosure system guideline requires a demonstration that a proposed repository site and design would likely comply with EPA’s disposal standards and NRC’s licensing regulations. The technical guidelines establish specific conditions that are important to meeting the system guidelines. 
For example, the postclosure technical guidelines contain nine conditions that must be present at a site (qualifying conditions) and six conditions that must be absent from a site (disqualifying conditions) for DOE to find that the site is suitable for permanent waste disposal. Instead of comparing the Yucca Mountain site’s features to the technical conditions in DOE’s existing guidelines, the Department now plans to compare how a repository at Yucca Mountain would be expected to perform to EPA’s disposal standards and NRC’s licensing regulations. This approach, DOE says, will lead to a more efficient process for determining the site’s suitability by enabling the Department to focus investigative activities on issues that are most important to the performance of a repository at the site. The Department’s proposed changes to the guidelines were published for public comment on December 16, 1996 (61 Federal Register 66157). The principal objective of DOE’s new approach for the repository project is to issue a “viability assessment” in September 1998. The assessment will be a statement of the (1) tentative design and expected performance of the repository system, (2) necessary investigation activities and associated costs to submit a license application, and (3) estimated cost to construct and operate the repository. The assessment, in DOE’s view, will represent an improved appraisal of the prospects for disposing of nuclear waste at Yucca Mountain. According to the director of the disposal program, the assessment is intended to guide the completion of the work required for a site recommendation and to provide policymakers with a better estimate of the “viability” of a repository in the time frame required for decision-making. 
If the repository appears to be “viable,” then DOE intends to complete the work necessary to determine the suitability of the site, recommend that the site be selected for a repository, and, if the site is formally selected, apply for a construction license. The Department has not defined what constitutes a “viable” repository project; however, the assessment is not intended to demonstrate either that the Yucca Mountain site is suitable for a repository or that it can be licensed as one. DOE also intends that the assessment be used to “inform” a possible decision in 1999 by the administration and the Congress to develop a facility near Yucca Mountain for storing nuclear waste until a repository is operational. An affirmative decision would trigger construction and operation of the storage facility and the transport of waste from nuclear power plants to the facility. As shown in figure 1, however, DOE does not expect to make a determination of the site's suitability until July 1999 or to recommend a site until July 2001. Therefore, an earlier decision to develop a storage facility near Yucca Mountain could be viewed as a firm commitment to disposing of waste at Yucca Mountain. For example, the administration opposed S. 1936 because the bill would have designated a location on DOE's Nevada Test Site as a site for a storage facility before DOE had completed the viability assessment. Such a designation, in the administration's view, would have destroyed the credibility of the disposal program by prejudicing a future decision on a permanent repository at Yucca Mountain. According to the disposal program's director, making a decision to develop a storage facility near Yucca Mountain after the viability assessment, although before DOE determines if the Yucca Mountain site is suitable for a repository, would provide for a more informed decision. 
Several uncertainties must be resolved in DOE's favor if the Department is to achieve the project's revised objectives and schedule. First, it is uncertain when EPA and NRC will issue the health standards and licensing regulations, respectively, that DOE will use to determine if Yucca Mountain is a suitable repository site. Also, the lack of applicable standards and regulations creates uncertainty about whether the scope of the Department's site investigation has been adequate. Finally, limitations on the information that DOE is collecting in key areas and on NRC's preparations to review a license application add more uncertainty to the repository project.

The time it will take for EPA to issue its new disposal standards for Yucca Mountain and for NRC to conform its licensing regulations to those standards could affect DOE's ability to make a suitability determination and a site recommendation on its current schedule. When EPA and NRC will issue their respective standards and licensing regulations is uncertain, but it could take 2 years or longer. Because NRC is required to conform its licensing regulations to EPA's disposal standards, the standards must be issued first. In February 1993, EPA contracted with the National Academy of Sciences for the study, mandated by the Energy Policy Act of 1992, of the scientific basis for standards applicable to the Yucca Mountain site. In August 1995, the Academy issued its report. As of January 15, 1997, however, EPA had not issued proposed standards for public comment. EPA anticipates that it may be able to issue final standards within 1 year of proposing the standards for comment. According to officials of NRC's waste management division, NRC expects to begin the process of revising its licensing regulations after EPA proposes its standards. 
For example, when NRC's staff provides the Commission with comments on EPA's proposed standards for the Commission's consideration, the staff also plans to provide the Commission with a strategy for revising NRC's licensing regulations. The two regulatory agencies' previous experiences with earlier standards and licensing regulations have shown that it could take 2 years or longer to issue the new standards and revised licensing regulations. For example, EPA took almost 3 years from December 1982, when it proposed its original standards for nuclear waste repositories, to issue the final standards. After the standards were successfully challenged in court in 1987, EPA issued revised standards for public comment in February 1993 and final standards in December 1993. In June 1981, NRC proposed technical regulations for repositories. Subsequently, it took about 2 years for NRC to adopt the final technical regulations. Thus, it is unlikely that EPA's standards and NRC's revised licensing regulations will be in place by September 1998, when DOE expects to issue its viability assessment of the Yucca Mountain project. According to DOE, however, it is not important to have the standards and licensing regulations in place for the viability assessment because the Department does not intend to compare, in the assessment, the expected performance of the repository to the standards and regulations.

The timing of EPA's standards and NRC's revised licensing regulations could, however, affect DOE's schedule for completing a report that would provide the technical basis for determining if the Yucca Mountain site complies with the Department's siting guidelines and for making the site selection recommendation. As discussed earlier, the Department intends to make compliance with the standards and licensing regulations its criteria for determining if Yucca Mountain is a suitable site for a repository. 
According to DOE, it needs to have the standards and licensing regulations in place at least 1 year before it makes the determination. In July 1999, DOE plans to complete an “interim evaluation” of the site’s suitability (see fig. 1) by issuing a technical report addressing the site’s compliance with the siting guidelines. To adhere to the Department’s schedule for completing the report, DOE needs to have the standards and licensing regulations in place by July 1998. Moreover, a recommendation by the Secretary to the President that the Yucca Mountain site be selected for a repository must, according to the nuclear waste act, be based on a comprehensive statement of the basis for the recommendation. Among other things, the comprehensive statement must contain NRC’s preliminary comments on the sufficiency of DOE’s investigation of Yucca Mountain for inclusion in a license application and the proposed form of the waste. In April 1999, DOE plans to issue a report to NRC documenting the investigation’s results. This report, DOE says, will provide information describing and modeling the site’s characteristics, the designs of the repository and waste packages, and the expected performance of the overall repository system. DOE intends to use this integrated discussion of its case for a safe repository at Yucca Mountain as the basis for NRC to provide its preliminary comments to DOE by January 2000. The Department will not be able to issue a meaningful report and NRC will not be in a position to provide the Department with its formal comments on the sufficiency of DOE’s site investigation until the standards and regulations have been issued. 
Until the substantive requirements of EPA’s disposal standards and NRC’s revised licensing regulations are known, DOE will not know if its scientific investigation of Yucca Mountain has adequately addressed all of the technical issues that are important to a credible determination of the site’s suitability and an acceptable license application. Although EPA has not yet proposed its standards, the National Academy of Sciences’ report to EPA on the technical basis for Yucca Mountain standards and DOE’s comments on that report show that there are significant differences of opinion about what the substantive requirements of the standards should be. The Academy recommended, among other things, standards that (1) limit the health risk, rather than the radiation dose, to individuals from the radioactive materials released from the repository; (2) require the measurement of compliance out to the time of peak risk, which is expected to occur tens or hundreds of thousands of years after the repository has been closed; and (3) define a critical group that would be at risk. Subsequently, DOE expressed three key concerns with the Academy’s recommendations and made recommendations to EPA on how the standards should be written. First, DOE recommended limiting compliance calculations to a 10,000-year period on the basis that the uncertainties in calculations over a longer period of time would limit the usefulness and validity of the calculations in a licensing proceeding. Second, DOE recommended using a less conservative level of risk than the Academy had recommended as a starting point for rulemaking. Third, DOE recommended a less complex and conservative approach to calculating risk to the critical group than the two options discussed in the Academy’s report. (See app. I for a more detailed discussion of the Academy’s report, DOE’s comments, and other issues related to the development of EPA’s standards and NRC’s licensing regulations.) 
The extent to which NRC revises its licensing regulations in making the regulations consistent with EPA’s standards could also affect the adequacy of DOE’s scientific investigation. Currently, NRC’s regulations require DOE to demonstrate compliance with EPA’s “generally applicable environmental standards.” However, these standards have been revised to pertain to repositories at sites other than Yucca Mountain. In addition, to provide sufficient confidence that a repository would perform as predicted, NRC’s regulations require DOE to demonstrate that the repository would comply with three more specific requirements. These requirements, called subsystem performance requirements, establish a minimum lifetime for packages containing waste, limits on the rate at which radioactive materials can be released from engineered (man-made) barriers within the repository, and the minimum time that water might take to travel from the repository to the accessible environment. NRC included these additional requirements because of the inherent uncertainty in an assessment of the performance of a repository over a long period of time. In its August 1995 report, the Academy concluded that NRC’s subsystem performance requirements could adversely affect the performance of a repository at Yucca Mountain by limiting DOE’s design flexibility. DOE agreed and recommended that NRC reconsider the use of these requirements. There are, however, arguments favoring these requirements. As recognized by NRC in the early 1980s, the subsystem performance requirements provide “defense in depth” by increasing confidence in assessments of compliance with EPA’s standards. Also, NRC pointed out that its regulations provide considerable design flexibility by permitting NRC to change the subsystem performance requirements, if warranted, during a licensing proceeding. 
According to the deputy director of NRC's Division of Waste Management, NRC's staff is considering whether the agency should retain these subsystem performance requirements in its licensing regulations and will address this issue when it provides the Commission with a proposed strategy for revising the agency's licensing regulations. The outcome of this issue, as well as other issues that may arise as NRC revises its licensing regulations, could affect the scope and depth of the scientific investigation that DOE must perform to determine if Yucca Mountain is a suitable site for a repository.

Regardless of the timing and substance of the final repository regulations, limitations on the information that DOE is collecting and on NRC's preparations to review a license application increase uncertainty about the sufficiency of the Department's investigation of Yucca Mountain. According to DOE, among the most important attributes of a repository at Yucca Mountain are the rate at which water seeps into the repository, the period of time that the packages containing waste will prevent the release of radioactive materials from them, and the manner in which radioactive materials that eventually reach the water table beneath the repository will be diluted by groundwater. Also, heat generated by the waste in the repository will affect the movement of water through the repository and the durability of the waste packages. There are indications of shortcomings in DOE's investigation of all of these areas. For example, DOE may not have done enough to investigate the groundwater beneath and beyond Yucca Mountain, including where and how fast water moves and the rate at which water contaminated with waste materials would be diluted and dispersed as it enters the groundwater. According to the U.S. 
Geological Survey, which performs groundwater research for DOE, new questions about the importance of groundwater to the scientific investigation are beginning to arise; in the last decade, however, no new boreholes to address these uncertainties have been drilled, and only limited testing of the groundwater has occurred. One such issue is the unexplained cause of the large drop in the elevation of the water table at the northern end of Yucca Mountain. Geological Survey scientists say that this feature, which was discovered in 1981, is the most striking hydrologic feature in the area. According to the scientists, until they can explain the cause of the drop in the water table, they would find it difficult to claim that they understand the hydrology of the site. DOE agrees that the drop in the water table has not been fully evaluated. According to the Department, however, preliminary observations of a recently completed pumping test in an existing well indicate that this feature of the site has no effect on the flow of groundwater in the aquifer beneath Yucca Mountain. According to a 1996 report by DOE on the quality of the Geological Survey’s hydrologic investigations, major uncertainties, such as the unexplained drop in the groundwater level, at this stage of the scientific investigation limit understanding of how radioactive materials would move in groundwater. In the opinion of the report’s authors, the Geological Survey’s research has been severely handicapped by, among other things, the elimination of most borehole drilling from the investigation. (App. II discusses this limitation and others on DOE’s investigations of the hydrology at Yucca Mountain, as well as limitations on its investigations of the effects of heat on the repository’s performance and the testing of candidate materials for waste packages.) Under the existing legislative framework, NRC, not DOE, will ultimately decide if the Department’s investigation of Yucca Mountain has been adequate. 
Over the years, NRC has reviewed DOE’s repository project to identify and resolve technical issues, to prepare to review a license application, and to develop criteria for an acceptable license application. The criteria would provide guidance to DOE on NRC’s expectations for a license application that would adequately address the requirements of NRC’s licensing regulations. In 1995, NRC modified its approach to reviewing DOE’s repository project. Instead of trying to review all aspects of the project, NRC decided to identify and emphasize the 10 most important technical issues. According to NRC, however, in fiscal years 1996 and 1997 it eliminated its contractor support for independently evaluating 3 of these 10 issues because its nuclear waste appropriations for each year were only half of its $22 million appropriation for 1995. In the absence of funding, NRC will not conduct any more independent studies of the three issues. Instead, NRC’s staff will monitor DOE’s related activities and will bound related regulatory issues using conservative assumptions. Moreover, NRC said, if the recent budget trend continues, the agency would have to discontinue its contractor’s work on two more key technical issues and would not be able to complete its review of a DOE license application in the 3-year period required by the nuclear waste act. Thus, an additional uncertainty confronting DOE’s repository project is NRC’s position on the contents of an acceptable license application. NRC’s review of and comments on DOE’s 1998 viability assessment will provide the first insights into NRC’s formal position. (DOE does not intend to request comments on its viability assessment; however, NRC believes that its evaluation of the assessment would provide vital input to future decisions on the repository project.) 
For those key technical issues that NRC has reviewed, it intends to identify potential licensing weaknesses and major concerns with DOE's designs or testing plans that could affect DOE's estimate of the cost of the repository. For technical issues for which NRC has eliminated technical work by its contractor, the agency's reviews of DOE's designs and technical basis for performance assessments and cost estimates in the viability assessment will be limited and based on conservative assumptions and available knowledge. The viability assessment, however, is not a step required by the nuclear waste act. The first formal opportunity that the act provides NRC to comment on the sufficiency for a license application of DOE's investigation of Yucca Mountain will occur when DOE seeks NRC's preliminary comments on the sufficiency of the investigation. DOE expects to seek NRC's comments in April 1999 and to receive the comments in January 2000. NRC's next formal opportunity will be its initial review of DOE's license application in 2002 to determine if the application is acceptable to begin the licensing proceeding. To the extent that NRC is unable to review important issues to gain confidence in DOE's investigation and develop acceptance criteria, the agency intends to adopt conservative regulatory positions. Conservative positions could require DOE either to obtain and provide NRC with more information or to modify the design of the repository system in ways that could increase the system's cost.

DOE's viability assessment may provide important insights into the expected design, performance, and cost of a repository at the Yucca Mountain site. However, the assessment's utility as the basis for a decision in 1999 to develop a waste storage facility near the site is limited because the assessment will not demonstrate compliance with applicable siting guidelines, standards, and licensing regulations. 
Therefore, making such a decision on the basis of the viability assessment could be perceived as a firm commitment to eventually disposing of nuclear waste at the site. For essentially this reason, the administration opposed the provisions of S. 1936 that would have designated a site near Yucca Mountain for a storage facility before DOE had completed its viability assessment. The administration argued that such a designation would have destroyed the credibility of the disposal program by prejudicing a future decision on a permanent repository at Yucca Mountain. In our view, the logic of the administration’s position would also apply to such a designation made after the assessment has been completed but in advance of the decision on the site’s suitability, a recommendation that the site be selected for a repository, and the decision on licensing that must be made on the basis of compliance with the guidelines, standards, and regulations. We provided a draft of our report to DOE, EPA, and NRC for their review and comment. DOE and NRC provided written comments on this report, which appear in appendixes III and IV, respectively. EPA declined to comment on the report. DOE said that our report recommends that decisions involving the construction of a repository be suspended until the Secretary has recommended to the President that Yucca Mountain be selected for a repository. We did not propose such a recommendation. We merely observed that deciding to develop a waste storage facility near Yucca Mountain before the Department has determined that the proposed repository site complies with applicable siting guidelines, standards, and licensing regulations could be perceived as a firm commitment to eventually disposing of nuclear waste at the site. We revised our observation section to make clear that we were not proposing any recommendation. 
DOE also said that we appear to be misinformed about its plans to continue addressing the uncertainties related to hydrology, the effects of heat from waste on the performance of the repository, and waste package materials. The Department intends to make every reasonable effort to reduce uncertainties and, in a license application, will identify and discuss any remaining major uncertainties and the steps planned to reduce them. It is important to note, DOE added, that it is not required to demonstrate the performance of the repository system or components of this system until it submits a license application. We disagree that we are misinformed about DOE’s plans for addressing key technical issues. Our report states that, on the basis of the limited information that DOE has collected and concerns raised by technical experts, resolving existing uncertainties about these issues could affect the Department’s ability to achieve its objectives and schedule for the repository project. Whether or not DOE’s current plans to address key uncertainties are adequate can be definitively answered only after the Department has submitted an application to construct a repository. DOE provided other specific clarifying comments that we incorporated as appropriate. NRC pointed out that its previous and ongoing reviews of DOE’s site investigation project and interactions with the Department have documented feedback to DOE on what is needed for licensing. Therefore, NRC’s comments on the viability assessment’s discussion of DOE’s plans for the license application will reflect whatever significant differences remain between NRC’s staff and the Department. Moreover, NRC said, interactions between the two agencies focused on resolving licensing issues will continue, and should differences of opinion persist, they will be documented in the Commission’s preliminary sufficiency comments to be included in DOE’s site recommendation report. 
NRC provided other specific clarifying comments that we incorporated as appropriate. We performed our review at DOE's headquarters in Washington, D.C., and at DOE's Yucca Mountain Site Characterization Project Office in Las Vegas, Nevada. We also performed our review at the headquarters of NRC in Rockville, Maryland, and EPA in Washington, D.C. We visited the Yucca Mountain site in southern Nevada and met with representatives of the state of Nevada and Clark County, Nevada. We conducted our review from February 1996 through January 1997 in accordance with generally accepted government auditing standards. (See app. V for details of our scope and methodology.) A list of related GAO products appears at the end of this report.

We are sending copies of this report to the Secretary of Energy; the Chairman, NRC; the Administrator of EPA; and the Director, Office of Management and Budget. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix VI.

The Department of Energy's (DOE) efforts to determine if a safe repository can be developed at Yucca Mountain are made more difficult because the site investigation is proceeding in parallel with fundamental changes to the regulations governing the project. If the site is eventually selected for a repository, DOE must demonstrate, in a licensing proceeding conducted by the Nuclear Regulatory Commission (NRC), that the proposed repository would comply with health standards issued by the Environmental Protection Agency (EPA) and NRC's licensing regulations. However, EPA is just beginning the process of issuing health standards for the Yucca Mountain site that must be consistent with the findings and recommendations of a study by the National Academy of Sciences. 
And after EPA issues its standards, NRC must, if necessary, revise its licensing regulations to make the regulations consistent with the standards. The Academy has recommended that EPA take a different approach to setting standards for Yucca Mountain than the agency took a decade ago in setting its original standards for all nuclear waste repositories, including Yucca Mountain. DOE, however, has disagreed with several of the Academy's findings and recommendations, in part because of the perceived difficulty of implementing the recommended standards in a licensing proceeding on a repository at Yucca Mountain. Moreover, whether DOE's scientific investigation of Yucca Mountain will be adequate to support a determination of the site's suitability, a recommendation to select the site, and an acceptable license application on the Department's current schedule will, to some extent, depend on when the final standards and licensing regulations are issued and their substantive requirements. Finally, DOE is in the process of basing its guidelines for determining the suitability of Yucca Mountain for a repository on EPA's standards and NRC's licensing regulations. Thus, the timing and content of the standards, licensing regulations, and siting guidelines that will be used for determining if the site is suitable for a repository, recommending that the site be selected for a repository, and applying for a license are currently unknown.

The Nuclear Waste Policy Act of 1982 charged EPA with setting generally applicable environmental standards for the disposal of nuclear waste in repositories and NRC with setting criteria and technical requirements for licensing and regulating repositories. In December 1982, EPA proposed, and in September 1985 issued, its original disposal standards (40 C.F.R. part 191). The primary standard was based on containing waste materials within a repository. 
Specifically, the standard limited the cumulative releases of radioactive materials from the boundary of the repository to the accessible environment (the biosphere) for 10,000 years after closing a repository. In issuing this standard, EPA expected that the assessments of the long-term performance of a repository would be based on mathematical predictions of the anticipated behavior of both the natural and engineered (man-made) barriers making up the repository system and the likelihood of unanticipated events and processes, such as earthquakes and human intrusion, that could disrupt the repository. Subsequently, the Energy Policy Act of 1992 directed EPA to set specific disposal standards for the Yucca Mountain site that would prescribe the maximum annual effective dose to individual members of the public from the release of radioactive materials (disposed of in the repository) to the accessible environment. The act also required EPA to (1) arrange for an analysis by the National Academy of Sciences of the scientific basis for a standard to be applied at Yucca Mountain and (2) adopt health and safety standards on the basis of, and consistent with, the Academy’s findings and recommendations. Finally, the act required NRC to make its licensing regulations for a repository at Yucca Mountain consistent with EPA’s standards. In February 1993, EPA contracted with the Academy to study the technical basis for disposal standards for a repository at Yucca Mountain. The Academy issued its report to EPA in August 1995. 
Among other things, the Academy recommended that EPA (1) limit the risk to individuals of adverse health effects from releases of radioactive materials from the repository, rather than limiting the radiation dose to individuals or the cumulative releases of radioactive materials from the repository; (2) measure compliance with the standard out to the point of peak risk to individuals, which is expected to occur tens or hundreds of thousands of years in the future, rather than the 10,000-year period in EPA's original standards; (3) define the “critical group” that would be at risk, rather than basing compliance on exposures of collective or worldwide populations to radiation; and (4) separately evaluate the risk and consequences of intrusion into the repository by future humans and focus this evaluation on the repository's capability to withstand such intrusion.

As of January 15, 1997, EPA had not proposed standards for a repository at Yucca Mountain; however, according to the agency's director of the Radiation Protection Division, Office of Radiation and Indoor Air, the agency may be able to issue final standards within 1 year of proposing the standards. EPA's disposal standards for Yucca Mountain must be based on and consistent with the Academy's findings and recommendations. DOE, however, expressed several concerns about the Academy's recommendations. Depending on the substantive requirements of the standards that EPA eventually adopts, DOE may have to modify its scientific investigation of Yucca Mountain.

In a November 2, 1995, letter to EPA, DOE expressed three key concerns about the Academy's recommendations for a Yucca Mountain standard. First, the Department is concerned that uncertainties in the results of quantitative calculations made for a period that is greater than 10,000 years would limit the usefulness and validity of the calculations in a licensing proceeding. Therefore, DOE recommended that compliance calculations be limited to a period of 10,000 years. 
In DOE's view, reasonably reliable calculations of a repository's expected performance can be made for the shorter period of time.

Second, DOE is concerned that the level of permissible risk to the designated critical population group that the Academy recommended as a starting point for developing the standards is unnecessarily conservative. The recommended level of risk would limit the increase in annual fatal cancers from the operation and closure of the repository to between 1 in 1 million and 1 in 100,000 in the affected population. According to DOE, none of the other federal and international regulations the Academy examined in its study require such a stringent limit over a period of hundreds of thousands of years. Moreover, DOE said, because of the overwhelming conservatism in the Academy's study related to the calculations of risk levels, EPA should relax the starting point by a factor of 10; that is, the permissible level of risk should be a range of 1 in 100,000 to 1 in 10,000 increased fatal cancers per year.

Third, DOE is concerned that the two options the Academy presented for calculating risk to the critical group and establishing a future reference biosphere are either too complex or too conservative. The majority of the Academy's panel had recommended that EPA use theoretical statistical and analytical techniques to identify the observed characteristics of people currently living in the vicinity of the repository and to calculate the risk to this group. One panel member had recommended that EPA derive the average risk calculation from the radiation dose likely to be received by a “subsistence farmer.” This farmer was defined as the person likely to become the most contaminated because of his use of water extracted from a well near the repository to drink and to grow all of his food. DOE commented that the first of these two options is unprecedented and, among other things, would be very complicated to implement. 
The second option, according to DOE, appears to be simpler and easier to implement but would result in a very conservative level of risk. DOE suggested that a better option for calculating risk would be to base the calculations on the characteristics of a current population group perceived to be most at risk of radiation exposure from drinking contaminated groundwater and using it to irrigate the crops they would consume. In this option, DOE said, specific factors, such as the diversity of occupations and lifestyles and the relative consumption of local and imported foods, would be considered.

The eventual content of EPA's standard for Yucca Mountain is likely to influence the extent of the work that DOE must complete to determine if Yucca Mountain is a suitable site for a repository, recommend that the site be selected for that purpose, and submit an acceptable license application to NRC. For example, the substantive requirements of the standards could affect the scope of the investigation of groundwater around Yucca Mountain that is necessary to demonstrate compliance with the standards. According to DOE's current strategy for containing and isolating waste in a repository at Yucca Mountain, the use of a dose-based or risk-based standard—instead of the release-based standard in EPA's original standards for all repositories—would place additional emphasis on how radioactive materials would move through the rock layers in the saturated zone (the area containing groundwater) beneath and around Yucca Mountain. A goal of DOE's containment strategy is to prove that the radioactive materials escaping into the saturated zone will be dispersed and diluted before they reach the accessible environment and therefore will result in acceptably low doses to humans over thousands of years. Hydrologists at the U.S. 
Geological Survey, who are investigating the hydrology of Yucca Mountain for DOE, told us that they are measuring the movement of injected tracer materials among three wells developed in close proximity to one another to model the flow of groundwater. In conjunction with these tests, DOE’s Los Alamos National Laboratory is modeling how the groundwater would transport radioactive materials. A limitation of the tests, however, is that they measure the transport of radioactive materials at only one point in time and space. For this reason, the tests are not likely to answer questions about the total flow of the groundwater system. A Geological Survey official stated that the project’s study plans provide for drilling another series of holes at a different location in 1998 and 1999, but the specifics of the study plans are uncertain. Thus, DOE may need to undertake additional work to help explain groundwater flow and transport characteristics in the saturated zone to verify theories that dispersion and dilution of radioactive materials will keep radiation doses low for thousands of years. Similarly, the period of regulatory compliance that EPA adopts in the final standard could affect the relative importance of hydrologic studies to a compliance determination. Geological Survey scientists told us that a period of compliance that is much longer than 10,000 years would place more emphasis on the behavior of the saturated zone. Currently, these scientists believe that very little water would move from Yucca Mountain down into the saturated zone in 10,000 years. Over a much longer time period, however, more water may reach the saturated zone. This possibility raises questions about how radioactive materials escaping the repository over the longer time period would be diluted in the groundwater to limit potential human exposure. As discussed above and in appendix II, however, groundwater flow and transport properties in the saturated zone are not well understood. 
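The kind of estimate the cross-well tracer tests support can be sketched as follows. Every numeric value here (well spacing, breakthrough time, boundary distance, retardation factor) is a hypothetical placeholder, not a measurement from the Yucca Mountain program; the sketch only illustrates how a tracer breakthrough time translates into a groundwater velocity and, in turn, into radionuclide travel times.

```python
# Hypothetical cross-well tracer interpretation: all numbers are assumed
# placeholders for illustration, not data from the Yucca Mountain tests.

well_spacing_m = 30.0      # distance between injection and observation wells (assumed)
breakthrough_days = 15.0   # time for the tracer peak to arrive (assumed)

# Average groundwater velocity implied by the breakthrough.
velocity_m_per_day = well_spacing_m / breakthrough_days

# Travel time from the repository to a hypothetical compliance boundary.
boundary_km = 5.0
nonsorbing_years = (boundary_km * 1000 / velocity_m_per_day) / 365.25

# Radionuclides that sorb onto the rock move more slowly by a retardation
# factor R, so travel time scales linearly with R (assumed value).
retardation = 100.0
sorbing_years = nonsorbing_years * retardation

print(f"velocity: {velocity_m_per_day:.1f} m/day")
print(f"nonsorbing travel time: {nonsorbing_years:.1f} years")
print(f"sorbing travel time: {sorbing_years:.0f} years")
```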
The revisions that NRC may make to its licensing regulations could affect DOE’s ability to meet the objectives and schedule for its repository project. As required by the 1992 energy act, NRC is to make its licensing regulations consistent with EPA’s disposal standards for Yucca Mountain. Because EPA has not yet issued its standards, there are outstanding questions about how DOE will implement both the standards and NRC’s revised licensing regulations. One key unanswered question, for example, is whether NRC will retain certain requirements for repository performance that are contained in its existing licensing regulations. NRC’s existing licensing regulations require DOE to demonstrate compliance with EPA’s disposal standards. However, when NRC developed its regulations in the early 1980s, it recognized that the assessment of the performance of a repository over a long period of time entails considerable uncertainty. Therefore, to provide sufficient confidence that a proposed repository would perform as predicted, NRC’s regulations also require DOE to demonstrate that a repository would comply with three more specific requirements. These requirements, called subsystem performance requirements, establish (1) a minimum lifetime for packages containing waste, (2) limits on the rate at which radioactive materials may be released from engineered barriers within the repository, and (3) a minimum period of time that groundwater may take to travel from the repository to the accessible environment. In its report to EPA, the National Academy of Sciences concluded that the retention of the subsystem performance requirements in NRC’s licensing regulations could result in a less than optimal design and level of performance for the repository. For example, according to the Academy, DOE might find it necessary to move the repository site within Yucca Mountain to meet the subsystem performance requirement for groundwater travel time. 
In doing so, the Academy suggested, DOE might also increase the risk of human exposure to radioactive gases moving from the repository to the surface. Accordingly, the panel recommended that the subsystem performance requirements not be allowed to foreclose design options that would ensure the best long-term performance of the repository. DOE, in commenting on the Academy’s report, agreed and recommended that NRC reconsider the use of subsystem performance requirements. In a previous report on seven foreign countries’ programs for disposing of nuclear waste, we noted that regulators in most of these countries are concerned only that proposed repositories meet overall safety goals (standards). These regulators said they expect to leave the design details to the repository developers. On the other hand, there are arguments in favor of retaining subsystem performance requirements. For example, in response to public comments on NRC’s proposed technical regulations for repositories, issued in June 1981, NRC stated that there is significant uncertainty in making assessments of the overall performance of a repository for a period covering thousands of years. NRC added that subsystem performance requirements provide “defense-in-depth” by increasing confidence in the assessments of compliance with EPA’s standards. NRC also pointed out that the subsystem performance requirements are not absolute—the final regulations, issued in June 1983, permit NRC to change them, if warranted, during a repository licensing proceeding. Thus, in NRC’s view at that time, its licensing regulations provided DOE with considerable flexibility to design an optimal repository system at a specific site. Furthermore, NRC noted, the subsystem performance requirements may be necessary to ensure that a repository will meet the numerical criteria in EPA’s (original) containment standard for unanticipated processes and events (such as earthquakes, flooding, or disruption of the repository by humans). 
Finally, NRC noted that its task is not only one of mathematically modeling a system and assigning values for particular barriers represented in the model to arrive at a “bottom line” for overall system performance. NRC is also concerned, it said, that its final judgments be made with a high degree of confidence. Accordingly, NRC stated, it can and will expect the performance of barriers to be enhanced so as to provide greater confidence in its licensing judgments, wherever practicable to do so. According to the deputy director of NRC’s Division of Waste Management, NRC’s staff expects to review its technical requirements for repositories and its licensing criteria and will re-evaluate, as part of this review, the need for subsystem performance requirements. However, NRC is waiting for EPA to issue its proposed standards for Yucca Mountain before proposing any changes to its licensing regulations. Fundamental to the success of DOE’s revised approach to completing the repository project is its decision to revise its criteria for determining if Yucca Mountain is a suitable site for a repository. The Nuclear Waste Policy Act required DOE to establish general guidelines for the recommendation of sites for nuclear waste repositories. These siting guidelines must specify detailed geologic considerations that shall be the primary criteria for the selection of sites in various geologic media. After obtaining public comment, including NRC’s concurrence with the guidelines, DOE issued them as a regulation in December 1984. The siting guidelines require that DOE evaluate individual sites and compare them on the basis of criteria that address (1) the operation of a repository before it is permanently closed (preclosure guidelines) and (2) the long-term behavior of the repository after it is closed (postclosure guidelines). Both the preclosure and postclosure guidelines are divided into system and technical guidelines. 
Three preclosure system guidelines establish performance objectives that must be taken into account during a repository’s operations in the areas of radiation safety; environment, socioeconomics and transportation; and ease and cost of siting, construction, operation, and closure. The postclosure system guideline establishes broad performance objectives for protecting public health and safety that are based on compliance with EPA’s disposal standards and NRC’s licensing regulations. These requirements must be met by the repository system, which must contain both natural and engineered barriers. The engineered barriers are to be designed to complement the natural barriers, which are to provide the primary means for waste isolation. The preclosure and postclosure guidelines also contain technical guidelines that establish specific conditions that are important to meeting the system guidelines. For example, the postclosure technical guidelines contain nine conditions that must be present at a site (qualifying conditions) and six conditions that must be absent from it (disqualifying conditions) for DOE to find that the site is suitable for permanent waste disposal. For each such technical guideline, DOE is to make an evaluation of qualification or disqualification. Until recently, DOE had intended to use these siting guidelines as the basis for determining if Yucca Mountain is a suitable site for a repository. To this end, DOE had planned to complete, at an estimated cost of $634 million, sufficient scientific investigations and related technical reports to make preliminary technical findings in 1998 on whether Yucca Mountain meets the criteria contained in the guidelines. DOE has now decided, however, to amend the siting guidelines by adding new guidelines that would pertain only to the Yucca Mountain site. 
The proposed guidelines, which were published for public comment on December 16, 1996, would base the determination of the suitability of Yucca Mountain as a site for a repository on a comparison of the overall performance of a repository system at that site to EPA’s new disposal standards and NRC’s revised licensing regulations. DOE no longer intends to determine, as the original guidelines require, the presence or absence of each qualifying and disqualifying condition contained in the technical guidelines. An overall system performance approach, DOE says, will lead to a more efficient process for evaluating the suitability of the Yucca Mountain site. DOE believes that the overall approach to a repository system’s performance is the appropriate method to consider all relevant site features because the approach identifies, in an integrated manner, those attributes of the site and engineered components that are most important to the protection of health and safety. According to DOE, the information gained from the site investigations and the preliminary assessments of how a repository would perform at the site show that the significance of selected site characteristics should not be judged in isolation from one another or from a specific design concept for the repository. For example, a geological structural feature may seem to be a detriment because it provides a fast pathway for groundwater flow through the mountain when considered alone, but considered together with a specific repository design, the feature may act beneficially by channeling groundwater flow away from the waste, thereby reducing the chances that the groundwater will contact the waste packages and cause them to fail. According to DOE, its amendments to the siting guidelines will be developed concurrently with the development of a site-specific radiological protection standard for Yucca Mountain by EPA and conformance of the licensing regulations to this new standard by NRC. 
Moreover, as DOE agreed when it issued the original guidelines, the Department intends to obtain NRC’s concurrence with the amended guidelines. After the completion of a public comment period, DOE expects to issue the revised guidelines in 1997. A key uncertainty, however, is the timing of the issuance of EPA’s standards and NRC’s revised licensing regulations. According to DOE’s manager for site suitability and licensing, DOE needs NRC to complete revisions to its licensing regulations 1 year before DOE makes its determination of site suitability (now scheduled for July 1999) and 2 years beforehand if NRC makes major changes to the regulations. He added that DOE’s determination of the suitability of Yucca Mountain will be based on comparing an up-to-date assessment of the repository’s performance to EPA’s standard and NRC’s licensing regulations. To preserve the repository project at Yucca Mountain following the unexpectedly low appropriations for fiscal year 1996, DOE redirected the project to address the major unresolved technical issues so that, in 1998, the Department can assess the viability of a repository at the site. DOE is developing a strategy for containing and isolating waste in the repository to guide the preparation of this assessment. The draft strategy specifies the natural and engineered (man-made) barriers that DOE will rely on to isolate waste from the accessible environment and provide the technical basis for setting priorities for designing the repository and completing the scientific investigation of Yucca Mountain. Following the viability assessment, DOE would complete the work it believes is necessary to (1) determine if Yucca Mountain is a suitable site for a repository, (2) recommend selection of the site for that purpose, and (3) submit a license application to NRC. 
However, the limited information that DOE will have in several areas that are important to its strategy for containing and isolating waste could affect its ability to achieve its objectives for the repository project on its current schedule. These key areas include the hydrology of Yucca Mountain and the surrounding area, the effects of heat on the repository’s performance, and the testing of candidate materials for waste packages. In 1994, the Nuclear Waste Technical Review Board concluded that DOE had not established exploration and testing priorities for determining if Yucca Mountain is a suitable site for a repository. Accordingly, the Board recommended that DOE articulate a clear waste isolation strategy that provides an understandable technical rationale for assigning priorities to studies of the site. DOE agreed and began developing the elements of such a strategy. In July 1996, DOE published a draft of its evolving strategy for containing and isolating waste in a repository at Yucca Mountain. The strategy, which represents DOE’s approach to addressing and resolving issues related to the long-term performance of the repository, is based on the observation that there is very little water in the rocks in and around the repository area to dissolve and transport radioactive materials to the environment. The goals of the strategy are to contain nearly all radioactive materials within waste packages for several thousand years and ensure that doses to the public living near the site will be acceptably low. The strategy relies primarily on emplacing waste packages in an area in Yucca Mountain above the water table to delay and minimize releases of radioactive materials to the environment when the waste packages finally begin to fail. 
Secondary lines of defense to enhance containment and isolation lie in potential engineered (man-made) barriers adjacent to the waste packages and in the natural system that are expected to delay the movement of radioactive materials released from waste packages. The strategy defines the following key attributes for predicting the performance of engineered and natural barriers:

- The rate at which water seeps into the repository. Assessments of the repository’s performance have shown that water seeping into the emplacement areas is the most important attribute of the ability of the site to contain and isolate waste. This process affects all aspects of performance, from the life of the waste packages to the movement of radioactive materials.

- The integrity of waste packages (containment). As long as waste packages remain intact, the waste will be completely contained and prevented from any contact with the surrounding rock or the groundwater. According to DOE, containment times exceeding 1,000 years are feasible.

- The rate of release of radioactive materials from failed waste packages. Performance assessments have shown that the release rate is one of the key factors in determining the peak doses of radiation that the affected public would be exposed to each year.

- The transport of radioactive materials through barriers. The potential radiation dose depends directly on the concentration of radioactive materials in water. These concentrations change as the materials move in water through engineered and natural barriers to points where people can use the water.

- The dilution in the groundwater. Dilution is an important factor that can reduce concentrations of radioactive materials and limit doses of radiation to humans. If the amount of water seeping into the repository and contacting the waste is small, the concentration of radioactive materials will be reduced when the contaminated water is added to the groundwater. 
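The dilution attribute lends itself to a back-of-the-envelope mass balance: a small flux of contaminated seepage mixed into a much larger groundwater flux yields a proportionally lower concentration. All fluxes and concentrations below are assumed round numbers chosen only to illustrate the mechanism, not figures from DOE's assessments.

```python
# Hypothetical dilution mass balance: a small contaminated seepage flux is
# mixed into a much larger aquifer flux. All values are assumed for
# illustration; none come from the report.

seepage_flux_m3_per_yr = 10.0     # contaminated water leaving the repository (assumed)
seepage_conc_bq_per_m3 = 1.0e6    # radionuclide concentration in the seepage (assumed)

aquifer_flux_m3_per_yr = 1.0e4    # groundwater flow through the mixing zone (assumed)

# Simple mass balance: diluted concentration = mass flux / total water flux.
mass_flux_bq_per_yr = seepage_flux_m3_per_yr * seepage_conc_bq_per_m3
diluted_conc = mass_flux_bq_per_yr / (aquifer_flux_m3_per_yr + seepage_flux_m3_per_yr)

dilution_factor = seepage_conc_bq_per_m3 / diluted_conc
print(f"diluted concentration: {diluted_conc:.0f} Bq/m^3")
print(f"dilution factor: {dilution_factor:.0f}x")
```

Because the seepage flux is roughly a thousandth of the aquifer flux, the concentration reaching a downgradient well is reduced by about the same factor, which is why small seepage rates are central to the strategy's dose argument.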
The strategy also hypothesizes that some cross-cutting issues, such as the effects on the repository’s performance of the heat generated by the waste, can be dealt with successfully as the repository is designed and that other issues, such as the potential effects of climate changes, human interference, and volcanoes, will not significantly reduce the repository’s performance. The strategy outlines tests and analyses to be pursued to try to substantiate the five key attributes and to address cross-cutting issues. According to DOE, the waste containment and isolation strategy will guide its plans for a viability assessment in 1998. DOE would use the strategy to guide the scientific and engineering studies necessary to confirm or revise the models that are used to predict the performance of the repository and to provide the technical basis for a license application. In a report on its activities for 1995, the Nuclear Waste Technical Review Board concluded that DOE was making considerable progress in developing its waste strategy and made several recommendations for improving it. First, the Board said the strategy relies heavily on the presumed dryness of the Yucca Mountain site and recommended that the strategy identify ways to compensate for an unexpectedly high movement of water between the repository and the water table. Second, the Board criticized the qualitative descriptions of the waste attributes and recommended that DOE designate a numeric limit for radiation doses to individuals and specify conditions under which exposures to releases of radioactive materials would be assumed to occur. Also, the Board said, DOE’s strategy does not contain criteria for validating or rejecting the five attributes; therefore, a clearer understanding is needed of the degree of proof that is being sought for each attribute. 
Finally, after pointing out that all five attributes address favorable conditions, the Board said the strategy would be strengthened if DOE placed more emphasis on identifying potential mechanisms for the repository system to fail and on formulating testable hypotheses about the importance of these mechanisms. Knowing how water moves through and under Yucca Mountain is critical to the repository project. DOE is studying the hydrology of both the groundwater beneath the site and the area above the water table because the movement of water through the mountain to the groundwater is considered the primary means by which radioactive materials could move from the repository to the environment. Recently, the U.S. Geological Survey identified a number of issues concerning studies of the saturated (groundwater) and unsaturated (above the water table) zones at Yucca Mountain. In April 1996, Geological Survey scientists wrote a memorandum that updated their understanding of the key inputs for the models of the flow of water in the saturated zone. They noted that although new questions about the significance of uncertainties in the saturated zone have been raised, the level of understanding of many of these issues has not increased. They attributed this situation to the lack of new boreholes drilled to the saturated zone since the mid-1980s and the limited testing of the saturated zone since then. One key issue the scientists raised is the resolution of a large drop in the elevation of the groundwater (hydraulic gradient) at the northern end of Yucca Mountain that was discovered in 1981. Estimates of the direction and rate at which water moves beneath the site and how radioactive materials would be diluted in the groundwater may differ considerably, depending on different explanations of the cause of the gradient. The gradient remains a concern because the scientists cannot account for its origin. 
It would be difficult, they said, to claim that they understood the hydrology of the site if they could not explain the cause of the most striking feature in the area. Earlier, in 1992, the technical project officer for the Geological Survey wrote that the large hydraulic gradient must be understood to understand the hydrology of the saturated zone and that it would be “folly” to determine the suitability of the site without a reasonable understanding of this feature and its durability. In commenting on a draft of our report, DOE pointed out that one existing well is being used to test hypotheses about the origin of the large hydraulic gradient and that the Geological Survey is currently interpreting the test information. The test information, DOE said, may either tell it what it needs to know or indicate how to approach the problem by, for example, drilling another hole or identifying another type of necessary test. Also, NRC commented that studies to date have not shown a significant negative effect on performance as the result of the gradient. The Geological Survey also identified what it considers important hydrological issues concerning the (1) scarcity of transport data and (2) flow of water directly from the Amargosa Desert near Yucca Mountain to Death Valley to the west. The first issue reflects a severe lack of information to support transport models, which in turn support performance assessment models. According to the Geological Survey scientists, transport data are scarce because measurements are being made at only one site and may be made at only one additional site in the future. The second issue reflects a potential change in the conceptual model of the flow of groundwater in the region. Little information is available to choose among competing models, and what information is available is subject to different interpretations. Acquiring additional, unambiguous information, however, would be very expensive and may not be warranted. 
In June 1996, DOE issued a report on the Geological Survey’s program to ensure the quality of its research on the repository project. Although the report’s authors concluded that the quality assurance program was adequate, they also expressed concern about persistent, major, unquantified uncertainties at this stage in the project. The report’s authors also concluded that the project was severely handicapped by the absence of high-quality hydrochemical data from the site and the elimination of most borehole drilling from DOE’s site investigation plans. Specifically, they noted that (1) boreholes to resolve the cause of the large hydraulic gradient north of the site and to test an aquifer at a second location have not been drilled, (2) existing boreholes in the Amargosa Desert have not been sampled or instrumented and drilling in the Southern Funeral Mountains (southwest of Yucca Mountain, in the direction of Death Valley) has not occurred, and (3) instrumentation to measure water levels in numerous boreholes has been removed and mothballed. These actions are inconsistent, they continued, with what is expected of a useful model for the flow of water in the saturated zone, particularly insofar as such a model would be used to support evaluations of the transport of radioactive materials. The issues appear to have been caused by unrealistic expectations for “bounding” system performance in the absence of data that would allow uncertainties to be quantified. According to the Geological Survey scientists, an important aspect of understanding how water would flow through the unsaturated zone at Yucca Mountain is to study the rate at which water infiltrates the mountain from the surface. Using a network of shallow boreholes across the site, scientists are monitoring changes in moisture content in the upper 50 feet of ground where these changes occur each year. 
This monitoring program is necessary to develop an adequate record of moisture changes in what is one of the driest areas of the country. The Geological Survey has developed a model of soil moisture and produced a preliminary map of the rates at which water infiltrates the unsaturated zone across the mountain. According to the scientists, the model is fairly rigorous, but certain assumptions must be made while using it. However, the infiltration program has ended sooner than planned, and no new work is planned to study uncertainties in the model. The scientists are concerned about whether there is a sufficient level of confidence in the supporting assumptions used in the soil moisture model and whether the data and assumptions supporting the model can withstand external scrutiny. They stated that drilling and instrumenting more boreholes would provide more information, but project officials are considering an alternative approach of compensating for uncertainty in this area by backfilling the repository with a material that would keep water away from waste longer. Geological Survey scientists are also concerned about the adequacy of the studies of water moving through the repository horizon (i.e., percolation). According to these scientists, such studies have been reduced substantially from original plans. The project had planned to drill 17 boreholes to the water table at various locations around the site. Monitoring of pneumatic pressure, temperature, and water potential was to be performed in each borehole for a minimum of 3 to 5 years. According to the Geological Survey, however, while 15 boreholes have been drilled, 4 of them were drilled in different locations than planned. Of the 15 boreholes, 8 do not reach the repository horizon, and no borehole has been drilled deep enough to characterize the Ghost Dance Fault in the Calico Hills Formation. 
In addition, only seven of the boreholes have been instrumented to monitor pneumatic pressure, temperature, and water potential. Finally, other tests have been reduced or deleted from the testing program altogether. In commenting on a draft of our report, DOE recognized that more information on the saturated zone is needed and stated that its long-range site investigation plan includes additional tests in the saturated zone. DOE added, however, that its primary focus remains on the unsaturated zone because of the importance of this area to its strategy for containing and isolating waste. After DOE has acquired a better understanding of the overall performance of the proposed repository system, the Department said, it may decide that it can get better performance from the repository system by changing waste package materials rather than by more precisely defining some aspect of water flow. Finally, DOE stated that the concept of backfilling waste storage rooms in the repository with a selected material is one option for improving the repository’s performance but that this concept is unrelated to the potential need to reduce hydrological uncertainties. One other issue has recently emerged that affects DOE’s understanding of the hydrology of the unsaturated zone at Yucca Mountain. DOE has detected the presence of the isotope chlorine-36, produced from atmospheric tests of nuclear weapons about 50 years ago, at the level of the proposed repository. DOE has been testing for the presence of this and other elements to provide information on the age of the water at various locations in the mountain and on the travel time of water through preferential paths, such as faults and fractures, in the rock. DOE found elevated levels of chlorine-36 in samples from five locations within the exploratory studies facility. According to the disposal program’s director, the findings need not be, but could be, a critical problem. 
In DOE’s current view, the findings appear to indicate rapid flow of water along preferential pathways. DOE is collecting and analyzing additional samples to confirm results and to provide new information on new areas of the exploratory tunnel. In addition, DOE will perform more modeling studies to evaluate the chlorine-36 data as they relate to the understanding of the hydrologic processes of Yucca Mountain and DOE’s conclusions about the repository’s performance. According to DOE, a key issue that it must address in its investigation of Yucca Mountain is the uncertainty about the interaction of the heat generated by waste in the repository with the surrounding rock, the water contained in Yucca Mountain, and the packages containing waste. To provide information on this issue, DOE planned a series of experiments in the exploratory studies facility and at the surface near Yucca Mountain that began in 1996 and will continue until about 2000. DOE’s general testing strategy is to perform simpler, smaller-scale tests first and then move to a more complex, larger test. However, a peer review team, the Nuclear Waste Technical Review Board, and DOE’s Lawrence Livermore National Laboratory have raised concerns about the testing program. In general, these concerns are that DOE is not doing large enough tests for long enough periods of time. Because of the decay of radioactive materials in nuclear waste, it will continue to produce heat for thousands of years after its disposal in a repository. The Nuclear Waste Technical Review Board described this issue—called thermal loading—as one that would largely determine the level of uncertainty about the repository’s long-term performance. 
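The long-lived heat source that drives the thermal-loading question can be sketched as a sum of exponential decays, one term per radionuclide group. The two groups, their half-lives, and their initial heat outputs below are assumed round numbers chosen only to show the shape of the decline over thousands of years, not an inventory from DOE's analyses.

```python
# Schematic decay-heat sketch: total heat output falls off as a sum of
# exponentials, one per radionuclide group. The groups, half-lives, and
# initial heat outputs are assumed round numbers for illustration only.
import math

# (initial heat in watts per waste package, half-life in years) -- assumed
components = [
    (500.0, 30.0),      # short-lived fission products (~30-yr half-life)
    (100.0, 24_000.0),  # long-lived actinides (~24,000-yr half-life)
]

def decay_heat(t_years):
    """Total heat output at time t, summing independent exponential decays."""
    return sum(q0 * math.exp(-math.log(2) * t_years / half_life)
               for q0, half_life in components)

for t in (0, 100, 1_000, 10_000):
    print(f"t = {t:>6} yr: {decay_heat(t):8.1f} W")
```

Under these assumptions the short-lived component dominates for the first century or so and then the long-lived component sets a much slower decline, which is why the thermal pulse and its redistribution of moisture must be analyzed over very long times.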
As early as 1990, the Board stated that the strategy selected to control the temperatures in a repository is a fundamental decision because the selected strategy will affect most components of the waste management system, including methods for storing and transporting waste, the design of waste packages, and the design, size, performance, and cost of the repository. The thermal load of the repository has the potential to significantly redistribute moisture within Yucca Mountain, resulting in extended periods of dryness in the repository or channeling of moisture toward waste packages. Therefore, it is necessary to understand the effects of the thermal load on the temperature of the surrounding rock as well as the movement of water and gases in the vicinity of the repository to have confidence in predictions of containment and long-term waste isolation. The distribution of temperature, liquid saturation, and humidity within the repository will influence the corrosion of metals, alteration of minerals, and geochemistry. These factors are important in predicting the containment time within the waste package and transport times through both the engineered and natural barrier systems. DOE’s thermal test strategy described several sequential tests, in general order of scale and complexity, in the exploratory studies facility. Early tests would be relatively small in scale and limited in complexity. Information gained from these early tests would help in understanding and interpreting results from larger, more complex tests of longer duration. In the first underground test, a long heating rod would be inserted in a horizontal hole to heat the surrounding rock. The next test is intended to heat a larger volume of rock, using rows of heating rods emplaced in the walls of an excavated room and heaters, shaped like waste containers, placed on the floor of the room. 
DOE considers this test, called a drift-scale test, to be a smaller, less complex, and less costly surrogate to a large-scale, long-duration test that would address information needs that could only be answered by tests that approach the scale of several waste storage rooms (drifts) in the repository. In addition to its planned underground thermal tests, DOE initiated a test on the surface near Yucca Mountain in a large isolated block of rock. This test was intended to develop and evaluate techniques and data for monitoring the changes in thermal and hydrological properties in a heated rock mass with controlled boundary conditions and provide data to understand the larger and more complex tests in Yucca Mountain. DOE stopped this large-block test in fiscal year 1996 due to budget reductions but restarted it in fiscal year 1997. In 1995, DOE established a team of six experts to conduct an external peer review of its thermal testing program. The objective of the review was to evaluate the program’s approach to understanding the thermohydrologic conditions at Yucca Mountain that would be generated by the heating of the repository. In its review of the thermal testing program, the review team’s primary recommendation was that a large-scale, long-duration test and the large-block test be carried out. The review team concluded that the smaller, less complex tests with single and multiple heating rods are not needed because these tests would be insufficient in scope to fully develop relevant processes. Only the large-scale, long-duration test, the review team said, would give results over a cross-sectional area large enough to be meaningful. The review team noted that DOE is in a major undertaking involving the thermohydrologic behavior of a fractured rock mass for which there is no precedent. By setting up a long-term experiment, DOE could acquire a substantial database, and analysis of the data could begin almost immediately after the experiment has begun. 
In addition, the review team said, critical design decisions cannot be made using smaller tests because the volume of rock being affected is too small to develop the effects that reveal the “global” picture. The team cautioned that the cost and time to perform the large-scale, long-duration test would be substantial but also stated that scientific defensibility must overrule mandated scheduling and cost constraints. If DOE is forced to choose from among all the tests, the review team said, the large-scale, long-duration test should be done. DOE disagreed with the review team’s recommendation on the large-scale, long-duration test. According to DOE, its planned approach to conducting the large-block test, the test with a single heating rod, and the drift-scale test is consistent with DOE’s strategy of progressing from simple to complex and small to large thermal tests and will likely provide the necessary data to defend a thermal loading strategy for the site. Consequently, DOE’s current plans do not include performing the large-scale, long-duration test; however, the Department will consider implementing the large-scale, long-duration test if it determines that the planned tests of smaller scale and duration do not provide sufficient data and confidence in related models. In 1995, the Nuclear Waste Technical Review Board wrote that there is considerable uncertainty associated with the thermohydrologic processes at Yucca Mountain. According to the Board, there is agreement that some heater tests have to be done, but there is no clear enunciation of what types of data are to be collected, how they will be obtained, or the ultimate use to which the data will be put. Furthermore, the relatively limited experience of the scientific community in modeling complex thermohydrologic problems in areas like the unsaturated zone at Yucca Mountain will make it especially difficult for DOE to establish the validity of predictions through short-term thermal testing. 
The Board supported the initiation of a long-term, tunnel-scale thermal test, recommended that DOE give more thought to how more information can be obtained from all heater tests, and concluded that little data will be available for use in DOE’s 1998 viability assessment. Finally, as early as 1992, DOE’s Lawrence Livermore National Laboratory had raised concerns about the length of heater tests at Yucca Mountain. DOE had established a task force to consider this issue. A draft report by the task force recommended that in order to meet the schedule for submitting a license application in 2001, a heater test to be performed by the laboratory should be completed in 6 years. At that time, scientists at the laboratory argued that a 6-year test period would barely be long enough for geochemical reactions to take place that could be sampled. Therefore, the scientists said, the 6-year test duration was the minimum time that they could support from a technical standpoint. As currently planned, the drift-scale test would run for 4 years with options for a longer test period if evaluation of the test data warrants the longer duration. In commenting on a draft of our report, NRC stated that its principal concern is that the thermal testing be representative of the range of repository conditions, rather than the scale or duration of the tests. It added that the testing information that will be available at DOE’s current planned date of license application will be limited and will need to be confirmed by additional data collected during performance confirmation or, if the additional data differ significantly from the original design bases and assumptions in the license application, the design may be modified through the license amendment process. NRC has questioned whether DOE is allowing enough time to test potential materials for waste packages before it submits a license application to NRC. 
The waste package refers to the waste form and any containers, shielding, packing, and other absorbent materials surrounding an individual waste container. According to DOE, waste packages will consist of multiple metal barriers designed to contain the wastes by resisting corrosion for thousands of years. In July 1995, NRC’s representatives observed DOE’s audit of the effectiveness of the waste package design processes used by the Department’s primary contractor for the repository project. Following the audit, NRC’s representatives reported to their managers at NRC’s headquarters that DOE is following a strategy of continuing development and analytical work on a selected set of candidate waste package materials. According to this report, the final choice of materials for waste packages will not be made until a prototype waste package is made or by the time DOE submits its license application to NRC. Also, DOE’s primary contractor for the repository project indicated that none of the currently available data on the performance of materials, such as corrosion, will be used for licensing. NRC’s report stated that the contractor plans to obtain test data over only 5 years to analyze long-term failures of waste packages in the license application. Validating waste package performance is expected to continue during the operation of the repository. In their report, NRC’s representatives concluded that predicting the long-term performance of waste packages will be difficult using only the relatively short-term test results that will be available when the license application is submitted in 2002. DOE does not agree with the comments in NRC’s report. The Department expects that available data on mechanical and corrosion performance of materials will be used to support a license application. Also, DOE said it is not clear that NRC’s conclusion about predicting long-term performance of waste packages from short-term test results is accurate. 
NRC said it may still be possible to show, with reasonable assurance, that the overall system performance standard is met at the time of license application. NRC added that its licensing regulations anticipate that additional research may be required to determine the adequacy of the design and provide that a license to construct a repository may have conditions related to the satisfactory resolution of safety questions for which research is being conducted. To identify the adjustments the Department of Energy made to its disposal program and the potential impediments to achieving the Department’s current objectives and schedule for the repository project, we performed our work primarily at DOE’s headquarters in Washington, D.C., and its Yucca Mountain Site Characterization Project Office in Las Vegas, Nevada. At these locations, we obtained and reviewed information from officials of DOE’s Office of Civilian Radioactive Waste Management, including officials assigned to the site investigation project; officials of DOE’s management and operating contractor for the project; and officials of the U.S. Geological Survey, which is a participant on the project. We also visited the candidate repository site at Yucca Mountain, Nevada, including observing activities under way in the exploratory studies facility tunnel in the mountain. We also obtained and reviewed information from officials of the (1) Division of Waste Management, Nuclear Regulatory Commission; (2) Office of Radiation and Indoor Air, Environmental Protection Agency, Washington, D.C.; (3) Agency for Nuclear Projects, state of Nevada, Carson City, Nevada; (4) comprehensive planning office of Clark County, Nevada; and (5) Nuclear Waste Technical Review Board, Arlington, Virginia. 
Pursuant to a legislative requirement, GAO reviewed the Department of Energy's (DOE) Yucca Mountain Project, focusing on: (1) adjustments DOE made to the disposal program due to reduced appropriations; and (2) potential impediments to achieving DOE's objectives and schedule for the repository project. GAO found that: (1) because DOE did not receive the amount of appropriations requested for fiscal year 1996, it revised the scope and objectives of the repository project with the goal of applying for a construction license in March 2002, about 5 months later than had been planned; (2) specifically, DOE: (a) curtailed most investigative activities at Yucca Mountain in favor of analyzing the information already collected to focus the remaining investigative activities on key uncertainties; (b) decided to revise its guidelines for determining if the Yucca Mountain site is suitable for a repository by deleting those criteria that require compliance with specific technical conditions, such as those concerning the travel time for groundwater; and (c) will issue, in September 1998, an assessment of the expected design, performance, and cost of a repository at Yucca Mountain. 
This report is intended to support decisions on continuing the repository project and authorizing a waste storage facility near Yucca Mountain that may be made before DOE has determined if the site is suitable for a repository; (3) several impediments must be resolved in DOE's favor if it is to achieve the project's revised objectives and schedule; (4) it is uncertain when the Environmental Protection Agency and Nuclear Regulatory Commission (NRC) will issue the health standards and licensing regulations, respectively, that DOE needs to determine if Yucca Mountain is a suitable repository site; (5) the absence of applicable standards and regulations creates uncertainty about whether the scope of DOE's site investigation is adequate; and (6) limitations on information that DOE is collecting in key areas, such as hydrology and the effects of the heat generated by waste on the performance of the repository and NRC's preparations to review a license application, add more uncertainty to the repository project.
The authority for DOD to establish a pay-for-performance management system for civilian defense intelligence employees originated in the National Defense Authorization Act for Fiscal Year 1997. Initially, in 1998, only the National Geospatial-Intelligence Agency implemented Full Pay Modernization for its employees. Specifically, the Office of the Secretary of Defense granted the National Geospatial-Intelligence Agency the authority to pilot test a pay-for-performance system. The National Geospatial-Intelligence Agency eventually converted all of its employees out of the General Schedule pay scale and into a system it called Total Pay Compensation in 1999 and thus has been under that system for about 10 years. As stated previously, ODNI, with agreement from agencies and departments in the Intelligence Community, established the overarching evaluation and performance-based pay framework for this community—the National Intelligence Civilian Compensation Program. This framework was established by Intelligence Community Directives, which, among other things, set common rating categories and performance standards that were adopted by the Intelligence Community. According to Intelligence Community Directive 650, the Director of National Intelligence has the responsibility to establish, in collaboration and coordination with the heads of executive departments and independent agencies with Intelligence Community employees, a set of unifying Intelligence Community-wide principles, policies, and procedures governing the compensation of civilian employees in the Intelligence Community. DOD, in 2007, designated USD(I) as the organization responsible for overseeing the implementation of DCIPS. USD(I) based DCIPS primarily on the pay-for-performance system implemented at the National Geospatial-Intelligence Agency. Appendix IV shows the notable differences between the National Geospatial-Intelligence Agency’s system and DOD’s DCIPS. Additionally, figure 1 outlines the U.S. 
Intelligence Community’s pay modernization efforts under the National Intelligence Civilian Compensation Program framework. DCIPS is one of the first systems to use this framework. DCIPS will be the performance management system applicable to DOD civilian intelligence personnel in the DOD intelligence components, which include the Defense Intelligence Agency, the National Geospatial-Intelligence Agency, the National Reconnaissance Office, the National Security Agency, the Office of the Under Secretary of Defense for Intelligence, and the intelligence elements of the military departments. Although not a defense intelligence component, the Defense Security Service also converted to DCIPS. Implementation of DCIPS pay-for-performance began in September 2008 and consists of three specific phases: (1) Performance Management, which focuses on the processes of setting expectations and objectives for monitoring, rating, and rewarding employee performance; (2) Pay Bands, which moves employee pay from the General Schedule/Government Grade pay scale to the five pay ranges associated with a particular DCIPS work category and work level; and (3) the First Performance Payout, which is when employees will receive a combination of their performance-based salary increase and their performance-based bonus increase for the first time. The DOD components have implemented DCIPS, in some instances, at different times. Figure 2 depicts the timeline, at the time of our review, for each component’s phased implementation. Under DCIPS, performance management consists of two interrelated processes: the performance management process and the pay pool process. The performance management process includes a 12-month performance evaluation period that runs annually from October 1 through September 30, unless USD(I) has granted an exception. 
During this period, employees, along with their supervisors—who are also referred to as rating officials—collaborate to identify performance expectations and outcome-focused objectives; engage in regular dialogue to monitor performance throughout the year, including a required mid-point review; develop performance strengths and skills; document achievements through employee self-assessments and rating official assessments; and, finally, conduct an end-of-year performance review. At the end of the performance evaluation period, the rating official completes an evaluation of record for each of the employees they supervise. These evaluations of record are then passed through two levels of review: first by reviewing officials and then by the Performance Management Performance Review Authority. Reviewing officials are responsible for coordinating with rating officials in evaluating and rating the performance of employees. Concurrent with the actions of the reviewing officials, the Performance Management Performance Review Authority conducts a high-level review of all evaluations of record and ratings across the component with the intent of ensuring rigor and consistency across all supervisors and reviewing officials and compliance with applicable laws and regulations. Within 45 days of the end of the performance evaluation period, all ratings must be finalized and approved by the reviewing officials and the Performance Management Performance Review Authority. The pay pool process begins at the same time as the performance management process with the establishment of pay pool structures and annual training to strengthen participants’ understanding about the pay pool process from October 1 to September 30. However, pay pools begin their annual deliberations about employee salary increases and bonuses after ratings are finalized. A pay pool is a group of individuals who share in the distribution of a common pay-for-performance fund. 
Each employee is assigned to a pay pool according to considerations regarding organizational structure, geographic location, and/or occupation. Figure 3 illustrates a sample DCIPS pay pool structure, specifically the relationship between the members of each pay pool—the employee, supervisor or rating official, reviewing official, pay pool panel, pay pool manager, and performance review authority. Each of these pay pool members has defined responsibilities during the annual deliberations and payout process. The Pay Pool Performance Review Authority, who can be either an individual or a panel of individuals, oversees one or more pay pools to ensure procedural consistency among the pay pools under its authority. The Pay Pool Manager provides financial, scheduling, and business rules guidance for the process; settles differences among panel members; and approves the final pay pool panel recommendations. The Pay Pool Panel members, which include reviewing officials and, in some cases, rating officials, are responsible for determining performance-based salary increases and bonuses using established pay pool guidance. Payouts are normally effective on the first day of the first pay period following January 1 of the new calendar year. The department issued overall guidance in September 2009 regarding its pay pool business rules. DCIPS is a pay-banded performance management system. As such, employees have converted or will convert from the General Schedule/General Government system to five distinct pay bands. Under the General Schedule/General Government system, salary is determined by the 15-grade/10-step system. Pay banding consolidates these 15 grades into five broad pay bands, and the DCIPS pay system establishes a salary range for each pay band, with a minimum and a maximum pay rate. Figure 4 illustrates which Government Grade/General Schedule pay grades/steps apply to each pay band during conversion. 
Although DOD has taken some steps to implement internal safeguards to ensure that the DCIPS performance management system is fair, effective, and credible, opportunities exist to improve DOD’s implementation of 2 of the 10 safeguards. Specifically, DOD has taken some steps to (1) link employee objectives and the agency’s strategic goals and mission; (2) provide a system to better link individual pay to performance in an equitable manner; (3) train and retrain employees and supervisors in the system’s operation; (4) require ongoing performance feedback between supervisors and employees; (5) assure meaningful distinctions in employee performance; (6) ensure agency resources are allocated for the design, implementation, and administration of the system; (7) assure that there is an independent and credible employee appeals mechanism; (8) assure reasonable transparency of the system and its operation; (9) involve employees in the design and implementation of the system; and (10) adhere to merit principles set forth in section 2301 of title 5 of the U.S. Code. We have previously reported that continued monitoring of such systems’ safeguards is needed to help ensure DOD’s actions are effective as implementation proceeds. While we believe continued monitoring of all of these safeguards is needed as implementation proceeds and more employees become covered by DCIPS, we determined that USD(I)’s implementation of two safeguards—employee involvement and the adherence to merit principles—could be improved immediately. Until USD(I) effectively implements all of the safeguards, employees will not have assurance that the system is fair, equitable, and credible, which ultimately could undermine employees’ confidence and result in failure of the system. DOD has made efforts to link employees’ objectives to the agency’s strategic goals, mission, and desired outcomes. 
For example, DCIPS guidance stipulates that employees’ individual performance objectives should align with the goals and objectives of the National Intelligence Strategy, DOD, and the employee’s organization. Specifically, an employee, in conjunction with a rating official and supervisor (if different), will establish approximately three to six performance objectives, which set specific performance targets for the individual, and link to National Intelligence Strategy, departmental, and component goals and objectives. Further, according to the DCIPS guidance, performance objectives for non-supervisory employees should be appropriate to the employee’s pay band, pay, and career or occupation category, and will be structured such that they are specific, measurable, achievable, relevant, and time-limited (SMART). The guidance further requires the creation of annual performance plans, to serve as records of the performance planning process, which are to be reviewed and approved by reviewing officials to ensure they are consistent with organizational goals and objectives. DOD officials we spoke with identified SMART objectives as the primary method of linking individual employee performance objectives to agency mission and goals. Figure 5 illustrates how an individual’s SMART objectives align to agency and National Intelligence Strategy goals. USD(I) officials stated that DCIPS’s design allows for a better linkage between individual pay and performance than the previous General Schedule pay scale. DCIPS policy requires that DCIPS shall provide a basis for linking performance-based pay increases and bonuses to (1) individual accomplishments, (2) demonstrated competencies, and (3) contributions to organizational missions and results—such that the greatest rewards go to those who make the greatest contributions, consistent with both performance and competitive pay administration principles. 
Moreover, DCIPS draft guidance states that the goal of the system is that it provide for a reward system that attempts to motivate employees to increase their performance contribution, making the employees’ level of performance commensurate with their total compensation. Several DCIPS components we spoke with, including the Army, Marine Corps, and Air Force, cited DOD’s Compensation Work Bench, a computerized tool that calculates pay increases by using performance ratings and pay pool information as a primary mechanism for quantitatively connecting individual performance and pay. In addition, the same components also cited the Performance Appraisal Application, an online tool for monitoring employee performance throughout a rating cycle, as another means of establishing such linkage. Although DOD has created policy to better link an individual’s pay to performance, it is too soon, given the current implementation status, to determine the extent to which pay will be equitably linked to performance, as a full performance cycle has not been completed and DCIPS payouts have not yet occurred. DOD has taken several steps to provide extensive training to DCIPS users in the implementation and operation of the performance management system. For example, DCIPS policy requires that employees be trained in the system, and that rating officials, supervisors, pay pool managers, and pay pool members be trained in their responsibilities. According to USD(I), each of the DCIPS components is required to implement training, tailoring any materials provided by USD(I) as necessary to meet the needs of its workforce. Additionally, there are currently a number of training mechanisms, including Web-based courses, classroom sessions, and town hall forums—so employees have a range of opportunities to learn about DCIPS. USD(I) provided a training curriculum that includes courses such as DCIPS 101, Managing Performance, and DCIPS Compensation Fundamentals. 
Some training tools are designed for distinct groups (i.e., supervisors, human resource personnel, etc.) in order to ensure that different groups have a contextual understanding of DCIPS. See appendix V for a list of major courses in this curriculum. Officials we spoke with at a number of DCIPS components stated that they offer a variety of classroom and Web-based training tools, some of which were adapted from USD(I) training in order to better suit the needs of the component’s workforce. For example, one component modified USD(I)’s iSuccess course, which provides employees with step-by-step instruction on how to write SMART performance objectives and self-assessments. Other components have employed innovative approaches to training, such as conducting joint training sessions with employees and supervisors in order to increase transparency and to open dialogue between the two groups. Additionally, USD(I) administered a number of training evaluations for its introductory DCIPS courses that indicated that employees generally viewed the training as informative and beneficial. However, during our nongeneralizable discussion groups with employees, we found that employee perceptions of training were somewhat mixed, as participants at 9 of our 13 discussion group sites stated that too many questions regarding DCIPS went unanswered, including questions posed during training. In particular, employees in one discussion group stated that training on developing performance objectives was not helpful because it focused on developing objectives for jobs that had very specific outputs, such as making widgets. Although such feedback indicates that the breadth of training offerings, as well as the scope and/or format of individual sessions, could be improved, we note that we conducted our discussion groups during April 2009 and May 2009, and according to one USD(I) official, new training courses have since been added, such as Compensation Fundamentals. 
DCIPS policy requires that rating officials and/or supervisors provide employees with meaningful, constructive, and candid feedback relative to their progress against performance expectations in at least one documented midpoint performance review and an end-of-year review. In addition, guidance requires rating officials and employees to engage in dialogue throughout the rating period to, among other things, develop performance objectives and an individual’s development plan. They are also required to discuss progress toward achieving performance objectives, behaviors related to successful performance, and individual employee development. Most of the DCIPS components we spoke with stated that additional feedback beyond the minimum required guidance is encouraged, but not mandatory. Formal feedback between employees and supervisors should be documented in the Performance Appraisal Application—DOD’s online performance management tool. At 7 of the 13 sites we visited, discussion group participants told us that communication with supervisors has increased under DCIPS, with most interactions being face-to-face, as encouraged by DOD. DCIPS is intended to create a performance management system that provides meaningful distinctions in employee performance. However, because performance evaluations have yet to occur under DCIPS, it is unclear to what extent ratings will actually result in meaningful distinctions. Unlike the pass/fail system, which some of the employees were under, the performance ratings scale for DCIPS consists of five rating categories, of which the lowest rating is a “1” (unacceptable performance) and the highest rating is a “5” (outstanding performance). Ratings are determined by comparing employee performance against performance standards for the employee’s pay-band level. Officials we spoke with at USD(I) and the DCIPS components also cited other mechanisms to implement this safeguard. 
For example, USD(I) told us that distinctions in individual employee performance will also be made through the bonus process. While all employees with performance evaluations rated Successful or above will be eligible, USD(I) officials expect that only 45 percent to 50 percent of employees who are eligible will receive a bonus. A USD(I) official noted that limiting bonuses to less than 50 percent of the staff will make bonuses more meaningful. Also, to ensure accountability at the supervisory level, one of the components told us it requires supervisors to demonstrate how they make distinctions in ratings as part of their own performance objectives. Finally, DOD officials stated that the mock performance review process will provide an opportunity to determine how meaningful distinctions in performance will be made, as well as a chance to garner lessons learned for assessing performance. Several of our discussion group participants expressed concern that there is potential for a “forced distribution” of ratings (i.e., a fixed numeric or percentage limitation on any rating level), which could effectively erode meaningful distinctions in individual employee performance. However, USD(I) officials told us that they had informed the components that forced distributions of ratings are unacceptable and potentially illegal, and that USD(I) has emphasized rigor and consistency in ratings throughout DCIPS’s implementation by way of leadership training and the Performance Review Authority. Additionally, in August 2009, USD(I) posted a statement on the DCIPS Web site reiterating its prohibition on forced distribution of ratings found in DCIPS guidance. DOD, through USD(I), has taken steps to ensure that agency resources are allocated for the implementation and administration of DCIPS. 
For example, DCIPS guidance provides for an initial permanent salary increase budget that is no less than what would have been available for step increases, quality step increases, and within-band promotions under the previous personnel system. Further, USD(I) will conduct, in coordination with the components, an annual analysis of salary adjustments to determine the effects on the distribution of the workforce within pay bands, the position of the workforce relative to the applicable labor market, anticipated adjustments to the ranges, and projected General Schedule increases for the year in which the next payout is to be effective. Funding for the implementation of DCIPS was drawn from two primary funding streams: (1) the National Intelligence Program and (2) the Military Intelligence Program. According to USD(I), funding was used to cover the costs associated with conversion, including training, technology, and within-grade increases. USD(I) and several of the DCIPS components we spoke with indicated that resources were sufficient to implement the system. In particular, one component told us that the Office of the Director of National Intelligence has been very receptive to resource concerns and had asked to be notified of any shortfalls. In fact, at the time of our review, only one DCIPS component told us it had requested additional funds for a shortfall. In addition, USD(I) created a resource management group consisting of Chief Financial Officer officials from each DCIPS component in order to ensure the proper level of funding is available for payouts beginning in 2010. We previously identified an independent and credible employee appeals mechanism as a key component in ensuring that pay-for-performance systems are fair, effective, and credible. DCIPS does not provide a distinct mechanism for employees to appeal adverse actions. 
Instead, it relies on existing agency procedures to fulfill this function, so that each of the defense intelligence components has its own appeals mechanism. According to USD(I) officials, guidance that would provide DCIPS a distinct employee appeals mechanism is in draft. When issued, according to these officials, this guidance will provide the minimum requirements for adverse action appeals, including fundamental due process, based on the requirements established in chapter 75 of title 5 of the U.S. Code. According to ODNI officials, chapter 75 does not statutorily apply to DCIPS. Rather, DOD is adopting these standards pursuant to ODNI Intelligence Community Directives. Additionally, ODNI guidance provides that employees will receive due process in any adverse action, as defined by applicable law and regulation, involving performance, as established by their respective departments or agencies, including an objective and transparent appeals process. DOD has taken steps to ensure a reasonable amount of transparency is incorporated into the implementation of DCIPS. For example, in contrast to the National Security Personnel System—which uses a system of weighted shares to determine employee payouts—DCIPS uses a software algorithm, available to all DCIPS employees, to calculate salary increases and bonus awards. In addition, USD(I) officials told us that USD(I) has communicated the performance management process through town hall meetings, DCIPS Web sites, quarterly newsletters, and letters from USD(I) management. Similarly, the DCIPS components are individually conducting a range of activities to provide transparency, such as their own town halls and open forum discussions. In particular, officials from one component told us that they conducted a survey of employees to determine how they received information about DCIPS and how they preferred to receive such information in the future. 
According to USD(I) officials, sharing aggregate rating results with employees is key to ensuring transparency and ultimately to gaining employee acceptance of the system. These officials also told us that they are instructing the DCIPS components to publish aggregate rating results. In fact, in September 2009, USD(I) provided a template for reporting DCIPS performance evaluation and payout results to the workforce. USD(I) officials stated that while the template can be tailored to suit specific agency needs, it will also establish a common way of reporting in which individual employees will be able to see where they stand relative to their peers and within pay bands. Separately, according to these same officials, USD(I) also plans to publish rating results at the department level by merging the results of all pay pool data from each of the DCIPS components. According to ODNI officials, they intended to do the same for Intelligence Community-wide results. USD(I) and the defense intelligence components have taken some steps to involve employees in the implementation of DCIPS; however, more opportunities exist to expand this involvement. As we previously reported, involvement in a performance management system's design and implementation must be early, active, and continuing if DOD employees are to gain a sense of understanding and ownership of the changes that are being made. Specifically, USD(I) and the defense intelligence components have used various mechanisms to obtain employee input. For example, USD(I) sponsored a survey to validate performance competencies for DCIPS and administered training evaluations for a variety of DCIPS courses, covering topics such as SMART objectives. In addition, the defense intelligence components conducted town hall meetings to provide domestic and overseas employees with information about DCIPS and to communicate with the workforce. 
According to a USD(I) official, the components possess considerable discretion regarding the nature and extent of employee involvement at the agency level, and as such, have independently employed a number of feedback mechanisms, including discussion groups and "brown bag" meetings. In most cases, the impact of such efforts is unclear; however, officials at one DCIPS component told us that some employee concerns were elevated to the Defense Intelligence Human Resources Board and actions were taken. For example, some employees expressed concerns about the elimination of career ladders, which eventually resulted in a policy change allowing employees who were hired under a particular career ladder to remain in that career ladder under DCIPS. Similarly, USD(I) provided us with a draft guide to writing effective performance objectives, which, according to officials, was produced at the request of employees who attended a pilot training course. While the above-mentioned steps demonstrate a commitment to engage the workforce, USD(I) has not taken advantage of other opportunities to expand such efforts by establishing a formal process for the continuous involvement of employees in DCIPS. As we previously reported, leading organizations involve employees directly and consider their input before finalizing key decisions, such as draft guidance. Although USD(I) officials stated they allow employees to comment on draft guidance, USD(I) does not have, in its guidance, a formalized process for the continued and direct involvement of employees in the development and implementation of DCIPS. This is of concern, since employees and supervisors in discussion groups at 12 of the 13 sites we visited indicated that they had limited or no involvement in the design and implementation of the system. Without continuous employee involvement in the implementation of DCIPS, employees may experience a loss of ownership over the system, which could ultimately undermine its credibility. 
USD(I) has taken steps to ensure that DCIPS incorporates the merit principles set forth in section 2301 of title 5 of the U.S. Code. The Office of Personnel Management has noted that prior to rolling out an alternative personnel system, an agency should document its business processes and procedures associated with all aspects of the system. In September 2009, USD(I) provided to us a document that stipulates that no later than March 31, 2010, components will provide the USD(I) Human Capital Management Office with detailed data, including demographic analysis, on performance evaluation and payout results. In September 2009, USD(I) also published a template for publishing DCIPS performance evaluation and payout results to the workforce. This template provides a sample aggregate workforce report for employees, which contains demographic-based reporting categories, including gender, race, ethnicity, age, disability status, and veterans' status, and provides details to be reported to employees, including each group's average rating, salary increase, and bonus. While notable, this 2009 document does not specify what data are to be collected for the post-decisional demographic analysis, how the data should be analyzed, what process the components should follow to investigate potential barriers to fair and equitable ratings and their causes, or a process for eliminating barriers that are found. Until DOD specifies these steps in its guidance, the intelligence components may not follow a consistent approach in these areas, the department may be unable to fully determine whether potential barriers to fair and equitable ratings exist, and employees may lack confidence in the fairness and credibility of DCIPS and its ratings. To help ensure equity, fairness, and non-discrimination in ratings, we are recommending that DOD issue guidance on its analysis of finalized ratings that explains how the demographic analysis of ratings is to be conducted. 
In prior work on another pay-for-performance system, we reported that continued monitoring of safeguards is needed to help ensure that a department's actions are effective as implementation progresses. We have also reported that adequate evaluation procedures would, among other things, facilitate better congressional oversight, allow for any midcourse corrections, assist DOD in benchmarking its progress, and help document best practices and lessons learned with employees and other stakeholders. In October 2009, DOD provided us with a draft evaluation plan that details tentative procedures to monitor and evaluate DCIPS implementation, including all of the safeguards. For example, it provides for the examination of the relationship between performance ratings and annual performance payouts, and establishes methods of obtaining employee feedback, such as attitude surveys, interviews, and focus groups. According to DOD officials, they do not expect to execute the evaluation plan until after the first payout, in January 2010. DOD's efforts to draft an evaluation plan are notable; however, without finalizing and executing such a plan, the department will not have a clear understanding of whether it is achieving its desired outcomes as part of implementing the new performance management system for its intelligence components. At the time of our review, DOD had several mechanisms to engage employees and provide information. However, these mechanisms did not comprehensively identify employee perceptions. GAO conducted 26 discussion groups, which, while not generalizable, did show that employees and supervisors had mixed views about certain aspects of the system. Additionally, DOD's planned mechanisms do not include certain questions related to the safeguards. DOD, at the time of our review, had several mechanisms in place to provide information to employees about DCIPS; however, these mechanisms did not comprehensively identify and address employee perceptions. 
Specifically, the defense intelligence components conducted numerous town hall meetings to brief employees on DCIPS—covering such topics as the performance management cycle and the roles and responsibilities of employees and supervisors—and to understand their concerns. USD(I) also maintained a Web site that contained frequently asked questions submitted by employees and USD(I)'s responses. Some of the frequently asked questions provided by the Naval Intelligence Community, as an example, included: Will basic civil service protections be preserved, such as whistleblower protections and veterans' preference? What safeguards will be in place to ensure that DCIPS rewards merit for merit's sake, and does not cater to nepotism and cronyism? USD(I) officials stated that USD(I) has used several other mechanisms, including site visits and the annual Intelligence Community Climate Survey, to collect employee opinions on various management policies and practices. While these efforts are notable, these mechanisms do not comprehensively identify employee perceptions of DCIPS. However, USD(I) does have plans to implement additional mechanisms, which will be discussed later in this report. The nongeneralizable results of the discussion groups we conducted identified, among other things, mixed views about certain aspects of the system. Specifically, our discussion groups identified areas that employees and supervisors found positive regarding DCIPS and several areas where they expressed a consistent set of concerns about DCIPS, some of which are listed below. Our prior work, as well as that of the Office of Personnel Management, has recognized that organizational transformations, such as the adoption of a new performance management system, often entail fundamental and radical changes that require an adjustment period to gain employees' trust and acceptance. As a result, we expect major change management initiatives in large-scale organizations to take several years to be fully successful. 
At 7 of the 13 locations visited, discussion group participants generally expressed positive views about the concept of pay for performance. For example, employees at one location stated they liked the idea of linking pay to performance and thought that there is more opportunity for financial growth. Additionally, supervisors at another location stated they thought DCIPS is a better system than pay based on tenure or time. At another location, supervisors stated they liked the concept of DCIPS because they felt pay for performance will reward hard workers. However, participants in 9 of the 13 discussion groups felt that DCIPS was being implemented too quickly. Additionally, employees and supervisors at 9 of the 13 locations visited said too many questions about DCIPS went unanswered. For example, employees at one location felt that in-class instructors were unable to provide answers to basic questions about DCIPS and its implementation. Further, supervisors at another location stated they felt unprepared to answer employee questions about DCIPS. Participants at 10 of the 13 locations visited said the amount of time spent working on DCIPS diverts attention from their mission work. For example, supervisors at one location stated mission activities have taken a back seat to the activities required to implement DCIPS, and at another location supervisors were dismayed by the significant amount of time the rating process entails. Both employees and supervisors at several locations also felt that DCIPS was a tremendous administrative burden. For example, supervisors in one discussion group stated the administrative burden is a "nightmare," while supervisors in another discussion group stated DCIPS is too time-consuming, takes away from actual work of value, monopolizes the chain of command at critical moments, and is overly laborious without tangible benefits compared with other systems. Other supervisors stated employees are now more focused on DCIPS metrics than on their actual jobs. 
Moreover, employees in one discussion group stated that DCIPS is a detriment to the mission because it is a huge administrative burden that takes one away from performing his or her mission work. We have previously reported that high-performing organizations continuously review and revise their performance management systems based on data-driven lessons learned and changing needs in the environment. Consistent with this approach, USD(I) officials have drafted four surveys to be used by the components that will cover various parts of DCIPS (training, performance objectives, ratings process, and payouts) and be accompanied by guidance on how to assess survey results. However, while these surveys cover aspects of DCIPS, they lack questions that would provide insight on certain aspects of the safeguards, such as the likelihood an employee would utilize the internal grievance process to challenge a rating. Additionally, the surveys—at the time of our review—did not directly ask questions or measure employees’ overall acceptance of DCIPS. Further, it is unclear exactly when these surveys will be implemented, although USD(I) officials said they hoped to start soon in order to capture baseline feedback from the first year. USD(I) officials further said the results of the surveys will inform future changes to DCIPS. However, without implementing a mechanism—like the four surveys that include questions regarding certain safeguards, such as the internal grievance process—DOD may not be able to comprehensively and accurately identify and measure employee perceptions. Human capital reform is one of the most significant transformations in the federal government. In our 2009 High-Risk Series update, we identified the importance of developing a clear linkage between individual employee performance and organizational success and pointed out that the success of implementing a performance management system is contingent on how, when, and the basis on which it is done. 
However, at the end of this review, legislation was signed by the President that contained provisions affecting DCIPS. As mentioned previously, the USD(I) November 3, 2009, memorandum to the defense intelligence workforce noted that the legislation did not repeal or terminate DCIPS, but suspended certain provisions of the DCIPS pay-setting regulations until December 31, 2010, to allow for an independent review of DCIPS. This memorandum also stated that the department would continue to press forward with unifying the defense Intelligence Community under a common personnel system and specifically noted that the National Geospatial-Intelligence Agency would continue under all DCIPS regulations—as allowed by the legislation—and would be the focus of the department's review of DCIPS. We have acknowledged in prior work on performance management systems that moving too quickly or prematurely could have detrimental consequences for such systems. The additional review of DCIPS efforts to date may provide the department the time needed to address any potential issues and help ensure successful implementation. We have further reported that a basic framework is needed to implement major reforms, including performance management systems. Our prior reports make it clear that the incorporation of internal safeguards is fundamental for the effective implementation of performance management systems. Further, we have reported that committed top leadership is essential and that involving employees in a new performance management system is a continuous process. While we recognize that DOD faces many challenges in changing the culture to implement a pay-for-performance system capable of serving the entire DOD Intelligence Community, we believe that it is imperative that DOD continue to explore ways to build employee confidence in the system to help ensure the system's success. By only partially incorporating the two safeguards we specifically mention, DOD could put the fairness and credibility of DCIPS at risk. 
However, given the newness of DCIPS, constant monitoring of all safeguards is a prudent course of action. Further, without developing an evaluation plan that assesses DCIPS, including the safeguards, the department will be unable to determine if it is meeting its intended human capital reform goals. Finally, until DOD implements its mechanism to comprehensively and accurately identify and measure employee perceptions, including questions related to the safeguards such as the internal grievance process, it is not well positioned to develop a strategy to effectively address concerns raised by employees regarding DCIPS. Employees are the number one stakeholders in this type of transformation. With employees from the National Geospatial-Intelligence Agency being the only employees continuing under DCIPS regulations, and given the agency's 10-year history with a pay-for-performance human capital system, the perspective of those employees will provide DOD with valuable insights as it reviews DCIPS and monitors the implementation of the safeguards. As the Office of Personnel Management and other studies have shown, it takes time for employees to accept organizational transformation—in this case, a move to a performance management system. As a result, employee acceptance of the system—both by eligible employees in the defense intelligence components and by those in the National Geospatial-Intelligence Agency—is dependent on those employees' involvement in the system's design and implementation. Ultimately, the success of the system is dependent on this acceptance. 
To improve DOD's implementation of internal safeguards in DCIPS, and mechanisms to identify employee perceptions of it, we recommend that the Secretary of Defense direct that the Under Secretary of Defense for Intelligence take the following four actions:

Issue guidance to institutionalize a process to involve employees continually in future design and implementation changes to DCIPS;

Issue guidance on its analysis of finalized ratings that explains how the demographic analysis of ratings is to be conducted, to help ensure equity, fairness, and non-discrimination in ratings;

Finalize and execute its evaluation plan with metrics to assess the system, including the implementation of internal safeguards, to help ensure the department evaluates the impact of DCIPS; and

Expeditiously implement mechanisms—including the four surveys—that comprehensively and accurately identify and measure employee perceptions, and ensure those mechanisms include questions regarding certain safeguards, such as the internal grievance process and employees' acceptance of DCIPS.

We provided a draft of this report to DOD and ODNI. DOD, in written comments, concurred with all of our recommendations. We provided ODNI with a draft of this report because, though not the focus of our review, ODNI has played a significant role in strategic human capital management reform for the U.S. Intelligence Community and is thus well positioned to provide additional insights and comments on DCIPS and companion efforts in the Intelligence Community. Both DOD and ODNI provided us with technical comments, which we incorporated in this report, as appropriate. DOD's and ODNI's written comments are reprinted in their entirety in appendixes VII and VIII, respectively. 
In its written comments, DOD noted there are inherent challenges in implementing a change of this magnitude—specifically, establishing a common DCIPS framework within the defense intelligence components that is fair and equitable, consistent, and transparent. We agree with the department and note in our report that change of this magnitude can take several years to be fully successful. Furthermore, DOD characterized our recommendations as logical next steps in the evolution of DCIPS and elaborated on specific steps it was taking to address each of our recommendations. First, DOD stated that, as recommended, it was developing guidance to more formally institutionalize a process to involve employees continually in the design, implementation, and evaluation of the evolving DCIPS. DOD noted that since the Intelligence Community does not have employee bargaining units, it is all the more important to ensure a robust and consistent process for employee engagement. Second, regarding our recommendation that DOD issue guidance on the analysis of its ratings, the department noted that it had issued initial guidance and was finalizing guidance for individual components that takes into account requirements of the Fiscal Year 2010 National Defense Authorization Act. Third, DOD stated that, as recommended, it was in the process of finalizing the DCIPS evaluation plan with metrics to assess the system and stated that the department recognized the importance of evaluating DCIPS. Fourth, DOD stated that, as recommended, it was finalizing plans to develop mechanisms that comprehensively and accurately identify and measure employee perceptions. DOD also noted that, as recommended, the mechanisms would include questions regarding certain safeguards, such as the internal grievance process and employees' acceptance of DCIPS. 
If implemented in accordance with our recommendations, the department's actions appear to be a positive step in helping ensure the fairness, equity, and credibility of the personnel system. In written comments, ODNI stated that it appreciated the opportunity to comment on our report and thought the overall tone of the report was fair and balanced, but noted that it felt the report's Highlights page, unlike the overall report, was overly negative. We reevaluated our Highlights page to ensure that it appropriately reflected our findings as seen throughout the report and made some changes to address ODNI's comments about tone. For example, we previously enumerated the ten safeguards on the Highlights page but deleted a number of those to incorporate specific actions that DOD had taken, to more directly mirror language in other parts of our report. ODNI also stated that it believed our report should emphasize that DCIPS was authorized by statute in 1997 and is separate and distinct from the National Security Personnel System. Our draft noted both of these points. ODNI also noted in its comments that it believed our report should emphasize that DCIPS and the NICCP are intended to meet the goals of the Intelligence Reform and Terrorism Prevention Act of 2004. We have made appropriate changes to our report but note also that we reviewed the implementation of DCIPS and not ODNI's National Intelligence Civilian Compensation Program. ODNI further stated that Intelligence Community Directive 650 clearly lays out 10 guiding principles that very closely align with the 10 criteria we chose for our review. We agree but note that our objective was not to determine whether DCIPS met the intent of Intelligence Community Directives but rather to determine whether DCIPS incorporated the safeguards identified in our prior work as best practices for public and private performance management systems. 
ODNI also commented that change is often difficult for employees to accept and there will always be some employee discomfort; however, these officials believed that this discomfort is more a reflection of where DCIPS is in its implementation schedule than of any material defect with the system's design. We also acknowledge, in our draft and in prior reports, that major change management initiatives in large-scale organizations take several years to be fully successful. ODNI expressed an appreciation for our comprehensive review and our recommendations to DOD and agreed to work with USD(I) in an expeditious manner to address the areas we identified. ODNI made a number of other technical comments that we considered and incorporated into our draft, as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3604 or by e-mail at farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to the report are listed in appendix IX. In March 2005, the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction recommended to the President that the Director of National Intelligence use its human resources authority to create a uniform system for performance evaluations and compensation, and develop a more comprehensive and creative set of performance incentives. 
In response to the commission's recommendation, the Director of National Intelligence established the National Intelligence Civilian Compensation Program (NICCP), which creates a uniform system of performance evaluation and compensation for the Intelligence Community's civilian workforce and aims to build a culture of collaboration across the Intelligence Community. NICCP represents a fundamental shift from the current General Schedule pay scale to a performance-based market model. The cornerstone of the Office of the Director of National Intelligence's approach to establishing NICCP has been interdepartmental collaboration within the Intelligence Community. An ODNI official noted that NICCP essentially acts as a "treaty," or common framework, that establishes the performance management and pay rules that are to be commonly and consistently applied across the Intelligence Community. Specifically, NICCP institutes a common set of core requirements, such as setting basic rates of pay, managing performance, and paying based on performance. This framework also includes establishing six common performance elements by which all Intelligence Community civilian employees will be assessed: Accountability for Results, Communication, Critical Thinking, Engagement and Collaboration, Personal Leadership and Integrity, and Technical Expertise. Supervisors will also be evaluated on six performance elements, of which they share four with non-supervisors—Accountability for Results, Communication, Critical Thinking, and Engagement and Collaboration—and two that are unique to them: Leadership and Integrity, and Management Proficiency. 
Additionally, rating levels under this new system are from 1 to 5, with 1 being unacceptable performance and 5 being outstanding performance. In addition to being applicable to Intelligence Community employees within DOD, NICCP is also applicable to certain other national intelligence organizations from other federal agencies and departments—including the Central Intelligence Agency, the Department of Homeland Security, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence—which currently have pay-setting authorities. For example, the Central Intelligence Agency is currently using its statutory authority to implement a pay-for-performance system and has, to date, created a Pay Modernization Office and developed a project plan, an implementation schedule, and a pay modernization Web site. According to officials in the Office of the Director of National Intelligence, other federal agencies or departments that do not currently have the same statutory authorities include offices within the Departments of Energy, State, and the Treasury, and the Drug Enforcement Administration. As reported by ODNI, the Intelligence Community agreed upon several "enabling" directives that actually constitute NICCP. Specifically, the essence of the NICCP framework has been captured in a suite of five enabling directives. They include the following:

Intelligence Community Directive 650—National Intelligence Civilian Compensation Program: Guiding Principles and Framework (Effective April 28, 2008).

Intelligence Community Directive 651—Performance Management System Requirements for the Intelligence Community Civilian Workforce (Effective November 28, 2007, and updated November 21, 2008).

Intelligence Community Directive 652—Occupational Structure for the IC Civilian Workforce (Effective April 28, 2008).
Intelligence Community Directive 653—Pay-Setting and Administration Policies for the IC Civilian Workforce (Effective May 14, 2008). Intelligence Community Directive 654—Performance-Based Pay for the IC Civilian Workforce (Effective April 28, 2008). While our review focused on two merit principles that relate directly to performance management, 5 U.S.C. §§ 2301(b)(2) and (b)(8)(A), the following provides the entire list of merit principles found in section 2301: Section 2301 of title 5 of the U.S. Code applies to executive agencies and requires federal personnel management to be implemented consistent with the following merit system principles. 1. Recruitment should be from qualified individuals from appropriate sources in an endeavor to achieve a work force from all segments of society, and selection and advancement should be determined solely on the basis of relative ability, knowledge, and skills, after fair and open competition which assures that all receive equal opportunity. 2. All employees and applicants for employment should receive fair and equitable treatment in all aspects of personnel management without regard to political affiliation, race, color, religion, national origin, sex, marital status, age, or handicapping condition, and with proper regard for their privacy and constitutional rights. 3. Equal pay should be provided for work of equal value, with appropriate consideration of both national and local rates paid by employers in the private sector, and appropriate incentives and recognition should be provided for excellence in performance. 4. All employees should maintain high standards of integrity, conduct, and concern for the public interest. 5. The Federal work force should be used efficiently and effectively. 6. Employees should be retained on the basis of the adequacy of their performance, inadequate performance should be corrected, and employees should be separated who cannot or will not improve their performance to meet required standards. 7. 
Employees should be provided effective education and training in cases in which such education and training would result in better organizational and individual performance. 8. Employees should be— (A) protected against arbitrary action, personal favoritism, or coercion for partisan political purposes, and (B) prohibited from using their official authority or influence for the purpose of interfering with or affecting the result of an election or a nomination for election. 9. Employees should be protected against reprisal for the lawful disclosure of information which the employees reasonably believe evidences— (A) a violation of any law, rule, or regulation, or (B) mismanagement, a gross waste of funds, an abuse of authority, or a substantial and specific danger to public health or safety. In conducting our review of the Defense Civilian Intelligence Personnel System (DCIPS), we limited our scope to the performance management aspect of DCIPS. We did not address either the performance management of the Senior Executive Service at the Department of Defense (DOD) or other aspects of DCIPS, such as classification and pay. 
To determine the extent to which DOD has incorporated internal safeguards and accountability mechanisms into DCIPS, we used the following internal safeguards and accountability mechanisms, which were derived from our previous work on pay-for-performance management systems in the federal government: Assure that the agency’s performance management system links employee objectives to the agency’s strategic plan, related goals, and desired outcomes; Implement a pay-for-performance evaluation system to better link individual pay to performance, and provide an equitable method for appraising and compensating employees; Provide adequate training and retraining for supervisors, managers, and employees in the implementation and operation of the performance management system; Institute a process for ensuring ongoing performance feedback and dialogue between supervisors, managers, and employees throughout the appraisal period and setting timetables for review; Assure that the agency’s performance management system results in meaningful distinctions in individual employee performance; Provide a means for ensuring that adequate agency resources are allocated for the design, implementation, and administration of the performance management system; Assure that there is an independent and credible employee appeals process; Assure that there are reasonable transparency and appropriate accountability mechanisms in connection with the results of the performance management process, including periodic reports on internal assessments and employee survey results relating to performance management and individual pay decisions while protecting individual confidentiality; Involve employees in the design of the system, to include employees directly involved in validating any related implementation of the system; and Adhere to the merit principles set forth in section 2301 of title 5 of the U.S. Code. 
(Two of these merit principles, which relate directly to performance management—(b)(2) and (b)(8)(A)—for example, identify (1) fair and equal treatment, regardless of factors such as political affiliation, race, color, sex, age, or handicapping condition, and (2) protection against arbitrary action, personal favoritism, and coercion for partisan political purposes as necessary in all aspects of personnel management. The merit principles are listed in their entirety in appendix II.) To assess the implementation of these safeguards and accountability mechanisms, we obtained, reviewed, and analyzed DOD guidance and other regulations provided by officials in the Office of the Director of National Intelligence, the Office of the Under Secretary of Defense for Intelligence, and the intelligence components in DOD. Specifically, we reviewed and analyzed key documents such as DCIPS guidance and policies, along with Office of Personnel Management guidance on performance management systems. We also reviewed available DCIPS training materials, including self-paced online training modules on the DCIPS Web site (http://dcips.dtic.mil/index.html), attended the DCIPS Data Administrator Training Course, and reviewed and analyzed DVDs of town hall meetings recorded by the Office of Naval Intelligence. Because DCIPS was in early implementation, we continuously reviewed the DCIPS Web sites, including the Under Secretary of Defense for Intelligence’s main Web site, for updates on training materials and policies. 
Finally, we obtained relevant documentation and interviewed key Intelligence Community and DOD officials from the following organizations: The Associate Director of National Intelligence for Human Capital and Intelligence Community Chief Human Capital Officer, Office of the Director of National Intelligence; The Under Secretary of Defense for Intelligence; Under Secretary of Defense for Intelligence, Human Capital Under Secretary of Defense for Intelligence, Chief of Staff Directorate; Defense Intelligence Agency, Directorate for Human Capital, Office National Geospatial-Intelligence Agency, DCIPS Program National Reconnaissance Office, Office of Human Resources; National Security Agency, Human Resource Strategies; Department of the Army, Intelligence Personnel Management Department of the Navy, Civilian Personnel Programs; Office of Naval Intelligence, Civilian Intelligence Personnel Headquarters, U.S. Marine Corps, Intelligence Department, Department of the Air Force, DCIPS Program Office; Defense Security Service, Office of Human Resources. To determine the extent that DOD had developed mechanisms to identify and address employee perceptions about DCIPS, we evaluated two primary sources of information. First, we reviewed the results of existing mechanisms DOD is using to address employee perceptions—which included climate surveys for the Intelligence Community, town hall meetings, along with information from the USD(I)’s Web site. Second, we conducted small group discussions with civilian intelligence personnel within the department who were converting to DCIPS and administered a short questionnaire to these participants to collect information on their background, tenure with the federal service and DOD, and attitudes toward DCIPS. We conducted 26 discussion groups with defense civilian intelligence employees and supervisors from 7 of the 10 defense intelligence components converting to DCIPS. 
For the purposes of our discussion groups, we omitted defense civilian intelligence personnel from the Army, Air Force, and the Defense Security Service because, at the time of our review, these components had not attained the same level of implementation as the other defense intelligence components. Additionally, of the 7 defense intelligence components with which we conducted discussion groups, 6 had a field location at which we also held discussion groups. Our overall objective in using the discussion group approach was to obtain insight into employee and supervisor perceptions about DCIPS and its implementation thus far. Discussion groups, which are similar in nature and intent to focus groups, involve structured small group discussions that are designed to obtain in-depth information about specific issues. The information obtained cannot easily be gathered from a set of individual interviews. From each location, we requested that each defense intelligence component draw a systematic sample from its list of personnel in order to obtain a sample of 8 to 12 employees and 8 to 12 supervisors to participate. At the majority of the discussion groups, we reached our goal of meeting with 8 to 12 employees and supervisors in each discussion group; however, since participation was not compulsory and, at some locations, the populations of employees from which to draw this sample were small, in a few instances we did not reach the recommended 8 participants in the group. Discussions were held in a semi-structured manner, led by a moderator who followed a standardized list of questions. The discussions were documented by one or two other analysts at each location. For field sites, we selected components that had a concentration of more than 25 employees. In conducting our discussion groups, our intent was to achieve saturation—the point at which we were no longer hearing new information. 
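The component-drawn systematic samples described above can be sketched in a few lines of Python. This is purely illustrative of the standard systematic-sampling technique the report names; the function and roster names are our own assumptions, not the components’ actual procedure.

```python
def systematic_sample(roster, target_size):
    """Draw every k-th name from an ordered personnel roster.

    Illustrative sketch only: each component drew its own sample;
    this shows the usual interval logic for a systematic sample.
    """
    if target_size >= len(roster):
        return list(roster)  # small populations: include everyone available
    interval = len(roster) // target_size  # sampling interval k
    return [roster[i] for i in range(0, len(roster), interval)][:target_size]

# Hypothetical roster of 120 names, sampled down to a 10-person group.
participants = systematic_sample([f"employee_{n}" for n in range(120)], 10)
```

Because participation was not compulsory, a component would invite the sampled names and, as the report notes, a group could still fall short of 8 participants.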
As noted, we conducted 26 discussion groups with employees and supervisors of DOD civilian intelligence personnel at the 13 DOD sites we visited. Our design allowed us to identify themes, if any, in perceptions held by employees and supervisors. Discussion groups were conducted between April 2009 and May 2009. A discussion guide was developed to facilitate the discussion group moderator in leading the discussions. The guide helped the moderator address several topics related to employees’ and supervisors’ perceptions of the performance management system, including their overall perception of DCIPS and the rating process, the training they received on DCIPS, the communication they have with their supervisor, positive aspects of DCIPS, and any changes they would make to DCIPS, among others. Each discussion group began with the moderator greeting the participants, describing the purpose of the study, and explaining the procedures for the discussion group. Participants were assured that all of their comments would be discussed in the aggregate or as part of larger themes that emerged. The moderator asked participants open-ended questions related to DCIPS. All discussion groups were moderated by a GAO analyst, while at least one other GAO analyst observed the discussion group and took notes. After each discussion group, the moderator and note taker reviewed the notes from the session to ensure that the nature of the comments was captured accurately. We performed content analysis of our discussion group sessions in order to identify the themes that emerged during the sessions and to summarize participant perceptions of DCIPS. Specifically, at the conclusion of all our discussion group sessions, we reviewed responses from each of the discussion groups and created a list of themes. We then reviewed the comments from each of the 26 discussion groups and assigned comments to the appropriate themes, which were agreed upon by three analysts. 
The responses were used in our evaluation and discussion of how civilian employees perceive DCIPS. Discussion groups are not designed to (1) demonstrate the extent of a problem or to generalize the results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, discussion groups are intended to provide in-depth information about participants’ reasons for holding certain attitudes about specific topics and to offer insights into the range of concerns about and support for an issue. Specifically, the projectability of the information obtained during our discussion groups is limited for three reasons. First, the information gathered during our discussion groups on DCIPS represents the responses of only the employees and supervisors present in our 26 discussion groups. The experiences of other employees and supervisors under DCIPS who did not participate in our discussion groups may have varied. Second, while the composition of our discussion groups was designed to ensure a random sample of employees and supervisors under DCIPS, our sampling did not take into account any other demographic or job-specific information. Third, our discussion group samples are not generalizable to all component locations. We administered a questionnaire to discussion group participants during the discussion group session to obtain further information on their backgrounds and perceptions of DCIPS. The questionnaire was administered to, and received from, 238 participants of our discussion groups. The purpose of our questionnaire was to (1) collect demographic data from participants for the purpose of reporting with whom we spoke (see table 1), and (2) collect information from participants that could not easily be obtained through discussion, e.g., information participants may have been uncomfortable sharing in a group setting. 
Specifically, the questionnaire included questions designed to obtain employees’ perceptions of DCIPS as compared with their previous personnel system, the accuracy with which they felt their ratings would reflect their performance, and management’s methods for conveying individual and group rating information. Since the questionnaire was used to collect supplemental information and was administered solely to the participants of our discussion groups, the results represent the opinions of only those employees who participated in our discussion groups. Therefore, the results of our questionnaire cannot be generalized across the population of DOD civilian intelligence personnel. We conducted our review from November 2008 to November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Defense Civilian Intelligence Personnel System (DCIPS) is largely based on the National Geospatial-Intelligence Agency’s Total Pay Compensation pay-for-performance system. The National Geospatial-Intelligence Agency’s system was in existence for about 10 years (1999-2009). Table 2 provides a comparison of the two systems. Table 2. DCIPS and Total Pay Compensation Comparison. Pub. L. No. 104-201, §§ 1631-1632 (1996), as amended by Pub. L. No. 106-398, § 1141 (2000) (codified at 10 U.S.C. §§ 1601-1614). Established by Rater and approved by Reviewer(s). Work roles and occupations crosswalk to OPM job titles/categories. 
Final ratings will be inserted into the Compensation Work Bench—a software tool that utilizes an algorithm to determine salary increases and bonus awards. Any changes to pay increases based on the algorithm, per DCIPS guidance, must be documented, justified, and approved by the PRA (see below). Final ratings are inserted into a Total Performance Compensation spreadsheet—a software tool that utilizes an algorithm to determine salary increases and bonuses. Any changes to salary increases and bonuses, per guidance, must be documented, justified, and approved by boards, office-level directors, and the Agency review authority. A Pay Pool PRA oversees one or more pay pools and conducts a summary review of all salary decisions to assess conformance to policy guidance and equity across pay pools. The Pay Pool PRA approves the final pay pool decisions. The Under Secretary of Defense for Intelligence has designed several training courses as part of a curriculum for the Defense Civilian Intelligence Personnel System (DCIPS). This curriculum covers various aspects of DCIPS. Table 3 illustrates the range of training courses provided to Intelligence Community employees. In addition to the Defense Civilian Intelligence Personnel System (DCIPS), DOD has also been implementing a pay-for-performance system for civilian employees who were not in the Intelligence Community—the National Security Personnel System. Table 4 provides a comparison of the two systems. DCIPS and the National Security Personnel System: A Comparison. Pub. L. No. 104-201, §§ 1631-1632 (1996), as amended by Pub. L. No. 106-398, § 1141 (2000) (codified at 10 U.S.C. §§ 1601-1614). Pub. L. No. 108-136, § 1101 (2003), as amended by Pub. L. No. 110-181, § 1106 (2008) (codified at 5 U.S.C. §§ 9901-9904). Established by Rater and approved by Reviewer(s) before the pay pool process. Established by the Pay Pool. Pay Pools are responsible for reviewing ratings of record, share allocations, and payout distribution. 
Job titles aligned to four occupationally based career groups. One common pay band structure for all occupations aligned to common work categories/levels. 4 career groups comprising 15 pay schedules and 44 pay bands. Employee payout in early January. Final ratings will be inserted into the Compensation Work Bench—a software tool that utilizes an algorithm to determine salary increases and bonus awards. Any changes to pay increases based on the algorithm must be documented, justified, and approved by the PRA (see below). Also uses a Compensation Work Bench; however, employees are assigned a number of shares based on their performance rating, and the value of one share is determined by the overall number of shares awarded. Pay Pool Performance Review Authority (PRA): A Pay Pool PRA oversees one or more pay pools and conducts a summary review of all salary decisions to identify potential issues with regard to merit, consistency, or unlawful discrimination among the pay pools under its authority. The Pay Pool PRA approves the final pay pool decisions. Provides oversight of several pay pools, and addresses the consistency of performance management policies within a component, major command, field activity, or other organization as determined by the component. Ms. Brenda S. Farrell, Director, Defense Capabilities. (U) This responds to the November 12, 2009, request for review of a draft report entitled "DOD Civilian Personnel: Intelligence Personnel System Incorporates Safeguards, but Opportunities Exist for Improvement," GAO-10-134. (U) We appreciate the opportunity to comment on the draft report given the central role the ODNI played in the recent development of the Defense Civilian Intelligence Personnel System (DCIPS) and the National Intelligence Civilian Compensation Program (NICCP), and the importance of those companion efforts to the Intelligence Community's overall transformation. 
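The share-based payout mechanism described above for the National Security Personnel System reduces to simple arithmetic: one share’s value is the pay pool’s dollars divided by the total shares awarded, and each employee receives shares times that value. The following sketch illustrates only that stated rule; the function name, employee labels, and dollar figures are hypothetical, not the actual Compensation Work Bench logic.

```python
def share_payouts(pool_dollars, shares_by_employee):
    """Compute per-employee payouts for a share-based pay pool.

    Sketch of the rule as described: share value = pool dollars
    divided by total shares awarded across the pool.
    """
    total_shares = sum(shares_by_employee.values())
    share_value = pool_dollars / total_shares
    return {name: round(shares * share_value, 2)
            for name, shares in shares_by_employee.items()}

# Hypothetical $90,000 pool split over 9 shares: one share is worth $10,000.
payouts = share_payouts(90_000, {"A": 4, "B": 3, "C": 2})
```

One design consequence of this mechanism is that a share’s dollar value is not known until all ratings in the pool are final, since awarding more shares overall dilutes the value of each.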
Please find attached our suggested edits to the body of the report and our official comments for inclusion in the appendices. (U) Overall, we believe the tenor of the report is fair and balanced, but the Highlights are overly negative and should be modified to more accurately reflect the tone and body of the report. In addition, we believe the report should also emphasize that DCIPS and the NICCP are intended to meet the goals of the Intelligence Reform and Terrorism Prevention Act of 2004. Thus, the ODNI and DoD are pursuing these efforts as a means of integrating and unifying the Community under a single, common human capital policy framework, where the IC's agencies and elements have historically operated under as many as six separate statutory personnel systems. GAO has noted that common human capital policies can act as a powerful tool in support of organizational transformation, and nowhere is this more critical than in the IC. We also believe the report should emphasize that DCIPS was authorized by statute in 1997 and is separate and distinct from the National Security Personnel System; the latter was authorized several years later and has taken a much different path with respect to its design and implementation. (U) In 2001 GAO identified human capital as a "High Risk Area" across the executive branch, and it has been a champion of civil service reform ever since. We applaud your efforts in that regard and believe that NICCP/DCIPS is consistent with the spirit and intent of GAO's views. We also appreciate your comprehensive review and thoughtful recommendations; we take them seriously and will work with DoD to implement them insofar as possible. (U) If you have any questions regarding this matter, please do not hesitate to contact me at (703) 275-2473. 
Director of Legislative Affairs Comments of the Intelligence Community Chief Human Capital Officer on the GAO DRAFT Report: Intelligence Personnel System Incorporates Safeguards, but Opportunities Exist for Improvement ODNI appreciates the opportunity to comment on this GAO report. While the overall tenor of the report is fair and balanced, we do feel obligated to make a couple of important points. The design of DCIPS complies with all IC Directives, which were developed after an extensive period of collaboration among IC agencies and elements. The policy design represents a serious consideration of lessons learned from best practices found in existing successful alternative pay systems (with particular attention paid to NGA). Furthermore, the IC did gather input (in 2006) from hundreds of IC employees during the policy development and program design phases. It has always been our intention to continue soliciting additional employee suggestions for process improvement at the conclusion of each annual performance and pay cycle. IC Directive 650 clearly lays out ten guiding principles which very closely align to the ten criteria chosen by GAO for their review. We agree that employees must be informed and educated on the details of the IC-wide program, as well as their department or agency's compensation and performance management systems. They are to be given the opportunity to provide feedback on the content of those systems and their implementation, and their feedback must be considered when those systems are developed, implemented, and administered. During the design and implementation phases of our change initiative, we made several changes based on employee feedback. For example, we decided to pass through to all employees the full general pay increase (unadjusted by performance results). We also modified our implementation schedules whenever the agencies or elements didn't feel their workforce was properly prepared to convert to DCIPS. 
Change is often difficult for employees to accept, and there will always be some who are uncomfortable with the rate of change. But we believe this is more a reflection of where DCIPS is in its implementation schedule than any material defect in the design. The feedback that will be the most valuable will only come after we have been allowed to run all the way through a pay-for-performance cycle so we can evaluate the results. Regarding safeguards, our ICDs clearly affirm the need for employee protections. We must provide rigorous oversight of the administration of IC compensation and performance management systems, including review mechanisms to guard against unlawful discrimination and partisan pressures, and other non-merit factors such as cronyism and favoritism. We must also ensure transparency of merit-based pay and performance decisions for employees. We acknowledge that DCIPS can and must be improved, and agree to work with USD(I) in an expeditious manner to address the areas you have identified. However, we strongly believe that DCIPS has been established on a strong foundation of policy directives and incorporates many best practices in its processes. We think DCIPS is off to a very solid start and will only get better. Brenda S. Farrell, (202) 512-3604, or farrellb@gao.gov. In addition to the contact named above, Marion Gatling (Assistant Director), Beth Bowditch, Margaret Braley, Ryan D’Amore, Nicole Harms, Cynthia Heckman, Mae Jones, James P. Krustapentus, Lonnie McAllister, II, Spencer Tacktill, Carolyn Taylor, John Van Shaik, José Watkins, and Greg Wilmoth made key contributions to this report. Human Capital: Monitoring of Safeguards and Addressing Employee Perceptions Are Key to Implementing a Civilian Performance Management System in DOD. GAO-10-102. Washington, D.C.: October 28, 2009. 
Human Capital: Continued Monitoring of Internal Safeguards and an Action Plan to Address Employee Concerns Could Improve Implementation of the National Security Personnel System. GAO-09-840. Washington, D.C.: June 25, 2009. Human Capital: Improved Implementation of Safeguards and an Action Plan to Address Employee Concerns Could Increase Employee Acceptance of the National Security Personnel System. GAO-09-464T. Washington, D.C.: April 1, 2009. Questions for the Record Related to the Implementation of the Department of Defense’s National Security Personnel System. GAO-09-669R. Washington, D.C.: May 18, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Human Capital: DOD Needs to Improve Implementation of and Address Employee Concerns about Its National Security Personnel System. GAO-08-773. Washington, D.C.: September 10, 2008. Human Capital: DOD Needs Better Internal Controls and Visibility over Costs for Implementing Its National Security Personnel System. GAO-07-851. Washington, D.C.: July 16, 2007. Office of Personnel Management: Key Lessons Learned to Date for Strengthening Capacity to Lead and Implement Human Capital Reforms. GAO-07-90. Washington, D.C.: January 19, 2007. Post-Hearing Questions for the Record Related to the Department of Defense’s National Security Personnel System (NSPS). GAO-06-582R. Washington, D.C.: March 24, 2006. Human Capital: Observations on Final Regulations for DOD’s National Security Personnel System. GAO-06-227T. Washington, D.C.: November 17, 2005. Human Capital: Designing and Managing Market-Based and More Performance-Oriented Pay Systems. GAO-05-1048T. Washington, D.C.: September 27, 2005. Human Capital: DOD’s National Security Personnel System Faces Implementation Challenges. GAO-05-730. Washington, D.C.: July 14, 2005. Questions for the Record Related to the Department of Defense’s National Security Personnel System. GAO-05-771R. Washington, D.C.: June 14, 2005. 
Questions for the Record Regarding the Department of Defense’s National Security Personnel System. GAO-05-770R. Washington, D.C.: May 31, 2005. Post-Hearing Questions Related to the Department of Defense’s National Security Personnel System. GAO-05-641R. Washington, D.C.: April 29, 2005. Human Capital: Preliminary Observations on Proposed Regulations for DOD’s National Security Personnel System. GAO-05-559T. Washington, D.C.: April 14, 2005. Human Capital: Preliminary Observations on Proposed Department of Defense National Security Personnel System Regulations. GAO-05-517T. Washington, D.C.: April 12, 2005. Human Capital: Preliminary Observations on Proposed DOD National Security Personnel System Regulations. GAO-05-432T. Washington, D.C.: March 15, 2005. Posthearing Questions Related to Strategic Human Capital Management. GAO-03-779R. Washington, D.C.: May 22, 2003. Human Capital: DOD’s Civilian Personnel Strategic Management and the Proposed National Security Personnel System. GAO-03-493T. Washington, D.C.: May 12, 2003.
Since 2001, the Government Accountability Office (GAO) has designated strategic human capital management as a high-risk area because of the federal government's long-standing lack of a consistent approach to such management. In 2007, the Under Secretary of Defense for Intelligence (USD(I)) began developing a human capital system--called the Defense Civilian Intelligence Personnel System (DCIPS)--to manage Department of Defense (DOD) civilian intelligence personnel. In response to a congressional request, GAO examined the extent to which DOD has (1) incorporated internal safeguards into DCIPS and monitored the implementation of these safeguards and (2) developed mechanisms to identify employee perceptions about DCIPS. GAO analyzed guidance, interviewed appropriate officials, and conducted discussion groups with employees at select DOD components. At the end of GAO's review, legislation was enacted that affects, among other things, how DCIPS employees will be paid. While early in its implementation of DCIPS, DOD has taken some positive steps to incorporate 10 internal safeguards to help ensure the fair, effective, and credible implementation of the system; however, opportunities exist to immediately improve the implementation of two of these safeguards, and continued monitoring of all is needed. For example, one safeguard requires employees to be trained on the system's operations, and GAO noted that DOD had provided extensive training to employees on DCIPS, to include several Web-based and classroom courses. For another safeguard--which requires ongoing performance feedback--GAO noted that DOD's guidance requires feedback between employees and supervisors at the midpoint and at the close of the performance rating cycle. However, GAO determined that in the case of two safeguards--involving employees and fully implementing the merit principles--DOD could immediately improve its implementation. 
First, while DOD has leveraged mechanisms like town hall meetings and "brown bags" to involve employees in DCIPS, its guidance does not identify a formalized process for the continuous involvement of employees in the system's implementation--which could ultimately undermine its credibility. Second, while DOD has stated that it will conduct an analysis of final ratings utilizing demographic data, DOD does not have a written policy outlining how this will be accomplished and therefore may be unable to fully determine whether potential barriers to fair and equitable ratings exist. Without steps to improve implementation of this safeguard, employees may lack confidence in the system. Finally, GAO previously reported--for systems like DCIPS--that continued monitoring of such systems' safeguards is needed to help ensure agency actions are effective. In October 2009, DOD provided GAO with a draft DCIPS evaluation plan that would be executed after the first payout in January 2010. Without finalizing and executing the plan, DOD will not know if it has achieved desired outcomes from the system. DOD has used several mechanisms to provide employees with information; however, these mechanisms do not comprehensively identify and address employee perceptions of DCIPS. For example, USD(I), among other things, maintains a Web site that contains frequently asked questions submitted by employees and responses by USD(I). Absent, however, are mechanisms to systematically identify employee perceptions. The nongeneralizable results of the discussion groups GAO conducted with employees and supervisors yielded mixed views. For example, participants generally expressed positive views about the concept of pay for performance. But participants at most of the Intelligence Components noted that DCIPS was being implemented too quickly or that many questions went unanswered. 
Although DOD officials have drafted surveys that will allow them to more comprehensively collect employee perceptions about DCIPS, these surveys lack questions that would provide insight about employee perceptions of certain safeguards and overall acceptance of DCIPS. Without including such questions and expeditiously implementing its surveys, DOD will not have clear insight into employee perceptions.
According to HUD, the ESG program was designed to be the first step in a continuum of assistance to prevent homelessness and to enable individuals and families experiencing homelessness to move toward independent living. More specifically, the program objectives were to increase the number and quality of emergency shelters for individuals and families experiencing homelessness, to operate these facilities and provide essential social services, and to help prevent homelessness. The ESG program is targeted at persons experiencing homelessness. It was originally established by the Homeless Housing Act of 1986, in response to the growing issue of homelessness among men, women, and children in the United States. In general, the ESG program uses the Community Development Block Grant (CDBG) formula as the basis for allocating funds to states, metropolitan cities, and urban counties. The CDBG formula uses factors reflecting community need, including poverty, population, housing overcrowding, and age of housing. According to HUD, in fiscal year 2009, there were 360 ESG grantees. For fiscal year 2009, HUD awarded $160 million in ESG funding to grantees. Figure 1 shows the total amount of ESG funds received by grantees, by state, for fiscal year 2009. The ESG program generally requires matching contributions by grantees, thus increasing the total funds used to provide services under the program. Metropolitan cities and urban counties must match the ESG funding dollar-for-dollar with cash or noncash resources from public or private sources. States are generally subject to the same requirement, with an exemption for the first $100,000 in funding. ESG funds may reach eligible projects through different routes, as shown in figure 2. First, HUD allocates ESG funds to grantees. Metropolitan cities, urban counties, and territories may carry out the program directly or subgrant all or part of their ESG funds to nonprofit organizations. 
States cannot carry out program activities directly, and must subgrant ESG funds (but may retain up to 5 percent for administration, as discussed below) to local governments or nonprofit organizations. Local governments receiving ESG funds as a subgrant from the state may carry out the program themselves or further subgrant funds to nonprofit organizations. HUD allows ESG grantees flexibility to determine how to award funds to subgrantees. For example, many grantees conduct a competitive process for awarding funds to subgrantees. Other grantees offer repeat funding to organizations that have demonstrated success with ESG-funded homeless assistance programs in the past, or they alternate funding each year among multiple agencies with ongoing homeless assistance programs. Grantees also might make fewer but larger subgrants, or award smaller grants to a greater number of subgrantees. Subgrantees and grantees that are not states may use ESG funding to conduct a range of eligible activities that, as previously noted, include the rehabilitation or remodeling of buildings to be used as shelters, operation of the facilities, essential supportive services, and homeless prevention. Under current law, ESG program grantees may use up to 5 percent of their grant award for administrative purposes, which can include staff to administer the grant, the preparation of progress reports and audits, or the monitoring of subgrantees. Grantees are not required to share any of their ESG administrative allowance with subgrantees, except in one instance—when a state awards a subgrant to a unit of local government. According to HUD, the department does not track the extent to which grantees share their ESG administrative allowance with subgrantees. The HEARTH Act made major changes to the ESG program, while renaming it the Emergency Solutions Grants Program. 
As noted earlier, the HEARTH Act changed the amount of ESG funds that grantees may use to cover administrative costs, increasing it from 5 percent to a maximum of 7.5 percent of the total grant amount. Programmatically, the HEARTH Act also made the following changes: The act authorized new eligible homeless assistance activities: short-term rental assistance, medium-term rental assistance, security deposits, utility deposits and payments, and moving costs. It established housing relocation and stabilization services as a major focus area for both homeless assistance and homeless prevention, including outreach, housing search, legal services, and credit repair. It established rapid re-housing as a major focus area for homeless assistance. The aim of rapid re-housing is to help people experiencing homelessness return to permanent housing as soon as possible. According to a national homeless advocacy group, these efforts reduce the length of time people remain in homeless shelters, which in turn opens beds for others who need them and reduces the public and personal costs of homelessness. HUD expects to implement the HEARTH Act changes, including increasing the allowance for administrative costs, with the program’s fiscal year 2011 allocation. We found that ESG grantees and subgrantees in the states we visited performed a range of administrative activities, but the program’s allowance for administrative costs generally did not fully cover the cost of these activities. As a result, grantees and subgrantees told us they must cover any shortfalls with funds from other sources, which diminishes their ability to support other activities. In addition, the available standards offer minimal guidance for evaluating the appropriateness of ESG administrative costs, and we found that grantees and subgrantees in the states we visited monitored ESG administrative costs at varying levels of detail. 
Grantees in the states we visited told us they conducted various activities to administer their ESG allocations. As figure 3 shows, these activities generally fell into five categories: application/approval, financial, reporting, monitoring/oversight, and other. Our review found these grantees’ ESG administrative activities generally focused on awarding subgrants and monitoring subgrantee performance. For example, City of Philadelphia officials told us they awarded a total of $2.2 million through five ESG grants for fiscal year 2009 and their administrative activities included, among other things, approval and tracking of subgrantee budgets and program monitoring. Similarly, City and County of San Francisco officials reported they awarded $944,900 in ESG grants to 19 local service providers for fiscal year 2009 and their administrative activities included site visits and audit reviews. Among grantees we reviewed, current practice in retaining the 5 percent administrative allowance varied, as shown in table 1. Where grantees kept all or most of the administrative allowance, officials told us this was to cover, at least in part, their administration costs. Where they kept none of the allowance, officials said this was to maximize funds available to local service providers. Table 1 also shows what grantees told us they expect to retain under the higher administrative allowance provided under the HEARTH Act. Subgrantees in the states we visited also reported a range of administrative activity. As figure 4 shows, these activities generally fell into six categories: application/approval, financial, reporting, management, monitoring/oversight, and other. Our review found that their ESG administrative activities generally focused on operating programs and reporting outcomes. 
For example, one Georgia subgrantee told us its ESG administrative activities included a portion of the executive director’s time, for program oversight; preparing monthly reimbursement requests; coordinating maintenance; and training and coordinating volunteers. Similarly, a Michigan subgrantee told us its ESG administrative activities included oversight and supervision of its program, financial reporting and auditing, and reporting shelter statistics. Grantees and subgrantees in the states we visited told us the ESG administrative allowance generally did not fully cover their actual costs to administer the grant award, and that as a result, they relied on other sources to cover any unfunded costs. We found that grantees’ and subgrantees’ actual ESG administrative costs depended on a number of factors, such as the number of grant awards made, level of oversight provided, number of staff involved in administrative tasks, and types of ESG program activities funded. Figure 5 provides details on the estimated unfunded ESG administrative costs and sources used to cover these costs for grantees and subgrantees we visited. Overall, across the eight grantees and 22 subgrantees we visited for which information was available, unfunded ESG administrative costs averaged an estimated 13.2 percent of the ESG allocation, ranging from 2.5 percent to 56 percent. However, HUD officials cautioned that some subgrantees we visited appear to be confusing program activities with administrative activities, which might have affected their estimates of actual administrative costs. For example, California ESG program officials estimated their unfunded ESG administrative costs at 4 percent of the state’s ESG allocation (actual administrative costs equal to 8 percent of ESG allocation, less 4 percent retained for administrative costs). To cover these unfunded costs, the officials said they rely on the state’s general fund revenues. 
Similarly, City of Oakland (California) officials estimated their unfunded ESG administrative costs at 25 percent of their ESG annual allocation (actual administrative costs equal to 30 percent of ESG allocation, less 5 percent retained for administrative costs). These officials also told us that they used the city’s general and redevelopment funds to cover the unfunded costs. In Pennsylvania, one subgrantee estimated its unfunded ESG administrative costs at 2.5 percent of its grant award (based on actual costs, with no administrative allowance from its grantee). This subgrantee, which reported using ESG funds for a one-time building repair project, told us that it used private donations, including from corporations and foundations, to cover its unfunded costs. In Michigan, a subgrantee estimated its unfunded ESG administrative costs at 14 percent (based on actual costs with no administrative allowance), saying it also relied on private donations to cover its unfunded costs. As previously noted, grantees must match their ESG allocations, and subgrantees can provide the match. These matching funds provide a potential source for covering administrative costs. Several subgrantees in the states we visited told us there has been a trend toward more private donations being restricted—that is, made for specific programs or purposes, rather than generally available for a subgrantee’s operations, including administrative costs. Thus, reliance on private donations to cover unfunded ESG administrative costs may become more challenging. For example, one subgrantee told us that donors feel it is more attractive to fund specific programs that have more tangible outcomes compared with funding administrative costs. Another subgrantee told us that business donors tend to target contributions to address specific issues and achieve particular results. 
Finally, one subgrantee told us that nonprofits themselves have contributed to this trend by telling potential donors they will use donations to undertake specific nonadministrative tasks. Some grantees and subgrantees in the states we visited told us the need to cover unfunded ESG administrative costs using other funding sources has diminished their ability to fund other program activities. For example, one grantee told us that amounts spent to cover unfunded ESG administrative costs could otherwise be directed toward community and economic development activities. Another grantee cited housing counseling and home purchase down-payment assistance as areas that could receive funding but for the need to cover unfunded ESG administrative costs. One subgrantee also told us it could otherwise devote more resources to programs aimed at adoption, single mothers, and family counseling if not for unfunded ESG administrative costs. Some grantees and subgrantees also told us that unfunded ESG administrative costs can affect program administration, interest in participating in the program, and program oversight. For example, one grantee told us that it chooses to make fewer but larger ESG awards to subgrantees, rather than make a greater number of smaller awards, in part because it is less costly to oversee a smaller number of subgrantees. In addition, two subgrantees told us that but for other mitigating factors, they would consider not participating in the ESG program because of the unfunded administrative costs. Some grantees also told us that if more funds were available for administrative costs, there could be greater monitoring of subgrantee activity. One grantee noted that it must stop monitoring subgrantees during parts of the year and generally does not do as much oversight as is desirable. Another grantee added it has difficulty meeting its goal of making at least one site visit to subgrantees each year. 
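The unfunded-cost percentages reported above follow a simple computation: actual administrative costs as a share of the ESG allocation, less the share retained under the administrative allowance. The following sketch is illustrative only (the function name is ours, not HUD's); the figures restate the California and City of Oakland examples discussed earlier.

```python
def unfunded_admin_share(actual_cost_share, retained_allowance_share):
    """Unfunded administrative costs as a share of the ESG allocation.

    Both arguments are fractions of the total ESG allocation, e.g. 0.08
    for administrative costs equal to 8 percent of the allocation. The
    result cannot be negative: if the retained allowance exceeds actual
    costs, nothing is unfunded.
    """
    return max(actual_cost_share - retained_allowance_share, 0.0)

# California: actual costs of 8 percent, 4 percent retained -> 4 percent unfunded
california = unfunded_admin_share(0.08, 0.04)

# City of Oakland: actual costs of 30 percent, 5 percent retained -> 25 percent unfunded
oakland = unfunded_admin_share(0.30, 0.05)
```

The same computation applies to subgrantees that receive no allowance from their grantee, where the retained share is simply zero and the entire actual cost is unfunded.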
According to HUD officials, the ESG program was established with a lower administrative cost allowance based on the expectation that grantees could obtain funds from other sources to cover unfunded ESG administrative costs. The officials also told us that although they do not have comprehensive information on the extent to which the ESG administrative cost allowance is sufficient to cover grantees’ actual administrative costs, the agency has received many informal comments over time characterizing the allowance as insufficient. As GAO has noted previously, there is no government-wide definition of what constitutes an administrative cost. For the ESG program in particular, there are a number of sources that provide standards for administrative costs, but we found they generally offer little detail for evaluating the appropriateness of these costs. For grantees, there are regulations and agency guidance that address administrative costs. HUD Regulations. HUD regulations for ESG administrative costs define such costs by way of example only, to include costs associated with: accounting for the use of grant funds, preparing reports for submission to HUD, obtaining program audits, similar costs related to administering the grant after the award, and staff salaries associated with these administrative costs. Under the regulations, administrative costs do not include the costs of carrying out eligible activities under the ESG program. HUD ESG Program Desk Guide. The desk guide provides an overview of the ESG program, describes the funding process, and covers topics including the initial application, grant administration, project implementation, and performance monitoring. For administrative costs, the desk guide likewise defines such costs by example, stating that eligible administrative costs include staff to operate the program, preparation of progress reports and audits, and monitoring of recipients. 
Ineligible administrative costs include the preparation of the Consolidated Plan and other application submissions, conferences or training in professional fields, and the salary of an organization’s executive director, except to the extent the director is involved in carrying out eligible administrative functions. In addition to the regulations and the desk guide, HUD also publishes the Guide for Review of ESG Cost Allowability and the Guide for Review of ESG Financial Management as resources for grantees. These guides, however, do not provide any additional details on the appropriateness of administrative expenses. The guides refer to compliance with regulations and circulars published by the Office of Management and Budget (OMB). In particular, OMB Circular A-87, Cost Principles for State, Local, and Indian Tribal Governments, details principles for determining allowable costs incurred by state, local, and federally recognized Indian tribal governments under grants and other agreements with the federal government. These principles are not specific to the ESG program, and the circular is not necessarily the final authority on such matters, as it requires agencies administering programs to issue regulations implementing the circular. HUD officials told us the agency’s ESG regulations incorporate the provisions of OMB Circular A-87. It is difficult to evaluate the appropriateness of grantees’ ESG administrative costs because the available sources for doing so, as described above, are brief and not exhaustive. For example, Pennsylvania ESG program officials undertake a number of activities during the preaward application stage, including providing technical assistance to applicants and offering general training every several years, but it is not clear from the available federal guidance whether such activities are eligible administrative costs. 
In addition, San Francisco ESG program officials told us they included office space rental, general overhead, and utility costs among their ESG administrative costs, but the available sources do not address nonpersonnel costs. As a result, it is not clear whether such specific activities are eligible administrative costs. Further complicating the issue of examining administrative costs is grantees’ self-funding of ESG administrative costs. To the extent grantees use other funding sources to cover unfunded ESG administrative costs, as discussed earlier, the ESG program standards for administrative expenses do not apply. For subgrantees we visited, we also found that ESG administrative cost standards varied and often offered little or no detail for evaluating the appropriateness of these costs. Generally, grantees address subgrantee administrative costs by providing rules or guidance through program solicitation documents or contracts with subgrantees. The State of California’s ESG Notice of Funding Availability, for example, states that eligible administrative costs are “only those necessary to administer the grant, not to administer or operate the shelter.” In addition, specific allowable administrative expenses include staff costs to prepare ESG reports, communications with ESG staff, payment for the ESG share of a required audit, and staff costs associated with processing accounting records and billings. The City of Atlanta takes a different approach, citing administrative expenses as identified under OMB Circular A-122, Cost Principles for Non-Profit Organizations, as acceptable. This circular distinguishes administrative costs from other types of expenses, and includes consideration of a number of different expense categories. The State of Georgia took the least detailed approach among the states we visited, as Georgia ESG program officials told us they do not provide criteria for administrative costs because the state does not fund these types of costs. 
As with grantees, the level of detail in the various cost standards for subgrantees’ administrative costs can make it difficult to assess the appropriateness of spending. For example, as noted, California rules cite expenses necessary to administer the ESG grant itself, not to administer or operate a shelter. However, one California subgrantee reported to us that its ESG administrative activities include those associated with client intake, handling client case management forms, and technical support. Similarly, as noted, the City of Atlanta relies on OMB Circular A-122, which identifies administrative costs as a form of “indirect costs”—those incurred for common or joint objectives—and defines “administration” as “general administration and general expenses.” However, a subgrantee also reported to us that its ESG administrative activities include a range of client-focused activities spanning intake to post-program follow-up. HUD officials told us that both client intake and case management (including handling case management forms) activities are not eligible administrative costs under the ESG program; rather, these activities are eligible program costs under the shelter operations and essential services categories. Moreover, as with grantees, a complicating factor is subgrantees’ self-funding of ESG administrative costs. To monitor the ESG program’s grantees, HUD field offices annually conduct a risk analysis to determine which grant programs are higher risk and thus warrant attention. According to HUD officials, the ESG program usually is not identified for any heightened on-site monitoring. However, HUD officials said that HUD field office staff conduct off-site monitoring of many ESG grants annually. ESG grantees must submit a Consolidated Annual Performance and Evaluation Report that contains qualitative and quantitative information about ESG, including annual expenditures and accomplishments. 
More broadly, grantees prepare an annual action plan that describes, among other things, how they plan to use ESG funds. HUD officials told us the plan includes a brief description of activities but varies as to whether it includes details on administrative expenses. Overall, HUD officials told us they have not conducted any comprehensive evaluation of ESG administrative costs for grant recipients. Grantees and subgrantees in the states we visited also monitored ESG administrative costs at varying levels of detail. Grantees told us they generally monitored subgrantee administrative costs through budget reviews, either before or after grant award, or both, and also through in-office monitoring and subgrantee site visits. For example, San Francisco ESG program officials told us they evaluate subgrantees’ audits, conduct site visits, perform business and cost reviews, and provide technical assistance. In addition, City of Detroit ESG program officials told us they do not perform a specific check of ESG administrative spending but watch for any obvious problems, such as whether a program’s total administrative costs exceed 10 percent. Further, City of Atlanta officials told us they review proposed budgets of subgrantees as part of the application process, and applications with administrative costs deemed to be too high (greater than 20 percent) are rated negatively. They added that the city monitors its ESG subgrantees annually, but does not specifically track the administrative costs of ESG-funded activities because the city provides no funding for these administrative costs. We found that the funding and treatment of administrative costs varied across the other targeted federal homeless grant programs we reviewed. We identified variations in areas such as the administrative allowance provided to grantees, requirements for sharing any of that allowance with subgrantees, and guidance on the appropriateness of administrative costs. 
First, as shown in figure 6, the extent to which each program included a maximum administrative allowance varied, and when a maximum allowance was specified, the amount of that allowance varied widely. Among programs with a maximum administrative allowance, the ESG program’s current 5 percent maximum administrative allowance for grantees is one of the lower allowances. The maximum administrative allowance for the other programs that have specified a maximum allowance ranges from 4 percent to 50 percent. Second, we found that program rules for grantee sharing of administrative allowances with subgrantees varied across homeless programs with similar funding structures. For example, HUD’s Supportive Housing Program requires grantees to share administrative allowances with subgrantees, but does not specify the amount. The Department of Labor’s Homeless Veterans’ Reintegration Program does not require sharing of administrative allowances, but gives grantees discretion to share with subgrantees. The ESG program combines mandatory and discretionary sharing—it requires grantees that are state governments to share an unspecified portion of their administrative allowance when passing funds to local governments. Otherwise, sharing is permitted but not required. These specific programs and their particular rules notwithstanding, most of the programs we reviewed do not provide administrative cost allowances when grantees pass funds along to subrecipients. In all, there was considerable variation across programs in the provision of subgrantee administrative allowances. Third, we found that program guidance on the appropriateness of administrative costs differed across the targeted homeless programs we reviewed, and that no program offered comprehensive direction on eligible and ineligible administrative activities. As noted earlier, the ESG program’s desk guide provides examples of both eligible and ineligible administrative activities, albeit not exhaustively. 
By contrast, five of the targeted programs’ rules—including programs of the Departments of Education, Labor, and Health and Human Services—do not specifically define eligible or ineligible administrative activities. Instead, some of these programs’ rules reference OMB cost principles and note that administrative costs must be reasonable and necessary, as defined by OMB Circular A-87. HUD’s Supportive Housing Program follows an ESG-style example approach. We also found that the ESG program’s maximum administrative allowance for grantees was one of the lower allowances for HUD formula grant programs offered through HUD’s Office of Community Planning and Development. As table 2 shows, the ESG program’s administrative allowance for grantees will also remain one of the lower of the group after it increases to 7.5 percent. The ESG program is among four formula grant programs offered through the Office of Community Planning and Development, which seeks to develop communities by promoting decent housing and expanded economic opportunities for low- and moderate-income persons. However, given the programs’ diverse missions, as also shown in table 2, the nature and amount of administrative costs may vary among them. A number of grantees and subgrantees in the states we visited and others told us they expect that the newly allowable ESG activities authorized by the HEARTH Act will result in different kinds of administrative activities that in many cases will be more costly than before. As previously noted, the act increased the range of eligible prevention and re-housing activities to include short- or medium-term rental assistance and housing relocation or stabilization services. 
Overall, grantees and subgrantees told us they expect changes in areas including client screening and eligibility verification, technical assistance to subgrantees, number of applicants for grants, and facility management and collaboration with third parties, which in turn could affect administrative costs. For example, City of San Francisco and Pennsylvania state officials told us the new activities authorized by the act might result in a greater number of applicants for grant awards, or their agencies might have to provide more outreach and technical assistance to subgrantees. In addition, one California subgrantee told us that it expects an effort to have people leave shelters more quickly under the new ESG activities. This subgrantee added that this might increase the administrative costs associated with collecting and reporting data on an increased number of people coming through the program. This subgrantee also said it expects the new ESG activities to have a secondary effect in shelters themselves, where a changing mix of residents likely will mean higher administrative costs. This subgrantee said that new HEARTH Act-style programs will likely enroll the best-functioning people, so those left in shelters will be relatively less functioning—and hence more costly to manage. Another subgrantee, in Michigan, told us it is already starting to see changes in administrative costs with the expansion of activities beyond traditional emergency shelter services and into rapid re-housing. For example, new program activities require more time for administration, both internally and externally, and there have been organizational changes such as in the handling of rent funds. Finally, one California subgrantee estimated its administrative costs could rise from about 3.5 percent to between 12 and 14 percent under the new ESG activities. 
As noted previously, however, HUD officials told us that some subgrantees we visited appear to be confusing program activities with administrative activities, which might have affected their estimates of actual administrative costs. While a number of grantees and subgrantees told us they expect the nature of administrative activities to change, and their costs to increase, not all the recipients we visited agreed that higher administrative costs are likely. For example, a Pennsylvania subgrantee told us it anticipates that the administrative costs associated with a prevention program would probably be equal to the costs of a shelter program, and it would not expect costs to be higher unless program requirements become more onerous. California state officials told us they do not expect the nature or amount of administrative costs to change with new program activities, because activities already change frequently today. Similarly, a Michigan subgrantee told us that barring any increase in regulatory requirements, it does not expect any added burden in areas such as reporting of program activity, audit duties, or office space required for administration. Overall, expectations about higher administrative costs are necessarily prospective, because the new activities have not yet been implemented. Although the HEARTH Act makes significant changes to allowable ESG activities, it remains unclear when actual program changes might be implemented. According to HUD officials, the total funds allocated to the ESG program will determine the extent to which money is available for the new services. HUD officials also told us that a significant increase in ESG funding, along with significant program changes, could increase grantees’ costs of monitoring and reporting, because more money must be tracked and monitored in conjunction with a wider array of program requirements. 
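The dollar effect of the HEARTH Act's higher allowance is simple arithmetic. The sketch below is illustrative only: the $160 million figure is the fiscal year 2009 program total cited earlier, and actual future allocations may differ.

```python
def admin_allowance_dollars(total_grant, allowance_rate):
    """Maximum dollars available for administration at a given allowance rate."""
    return total_grant * allowance_rate

# Fiscal year 2009 ESG awards, per this report; used here only for illustration.
PROGRAM_TOTAL = 160_000_000

# Prior law: 5 percent allowance -> roughly $8 million program-wide
before = admin_allowance_dollars(PROGRAM_TOTAL, 0.05)

# HEARTH Act: 7.5 percent maximum allowance -> roughly $12 million program-wide
after = admin_allowance_dollars(PROGRAM_TOTAL, 0.075)
```

Whether this increase narrows the unfunded-cost gap reported by grantees depends, as the report notes, on the total funds appropriated and the scope of the new program activities.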
Uncertainty over how and when the new ESG program might be implemented, as well as variation in the nature of administrative activities seen in the current ESG program, complicate any attempt to determine the appropriate size of the program’s administrative allowance. Providing such an allowance helps ensure funds are spent properly and directed to their appropriate purpose. But if the allowance is insufficient to allow adequate administration and oversight, program efficiency and effectiveness could be at risk. Grantees and subgrantees we spoke with reported that the current ESG administrative allowance does not fully cover their administrative costs. Moreover, our work indicates that even with the new administrative allowance of 7.5 percent, the ESG program would still have one of the lower allowances among similarly structured homeless grant programs. If the new ESG program increases in complexity or scope of services, its administrative cost allowance will take on even more significance in the future. We provided a draft of this report to the Departments of Housing and Urban Development, Education, Health and Human Services, and Labor for their review and comment. HUD did not provide formal comments, but noted by e-mail that some subgrantees we visited may not be making a proper distinction between program costs and administrative costs, which could have the effect of overstating any need for a larger ESG administrative allowance. We reflected this sentiment throughout this report as appropriate. HUD further indicated that the department would examine what steps it could take to help grantees and subgrantees better understand which administrative costs can be funded under the ESG program and the extent to which administrative costs differ from activity delivery costs. 
HUD added that these steps would include providing greater clarity and detail on what costs are eligible under the different ESG activity categories, including administrative costs, in a proposed new rule the department is developing to implement the changes to the McKinney-Vento Homeless Assistance Act provided in the HEARTH Act. HUD also provided technical comments by e-mail, which we have incorporated into the report as appropriate. The Secretaries of Education, Health and Human Services, and Labor did not provide comments. We are sending copies of this report to interested congressional committees and the Secretaries of the Departments of Housing and Urban Development; Education; Health and Human Services; and Labor. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or cackleya@gao.gov if you or members of your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix II for key contributors to this report. To determine the types of administrative activities performed and costs incurred under the Emergency Shelter Grants Program (ESG) of the U.S. Department of Housing and Urban Development (HUD), and the extent to which grant proceeds cover these administrative costs, we made site visits to four states: California, Georgia, Michigan, and Pennsylvania. We selected these states based on the amount of ESG funding distributed to these grantees for fiscal year 2009 and their geographic location across the country. We initially identified state-level grantees receiving more than $1.5 million in ESG funding, in order to focus on states with relatively more ESG activity. This criterion reduced our target group to 20 states. 
We judgmentally selected the four states we visited by considering the proximity of each state’s capital city, where state officials are located, to other grantees we could visit concurrently. Within the four states, we visited nine grantees (four state governments and five local governments) and 25 subgrantees. This allowed us to obtain illustrative observations from state officials, local government officials, and representatives of local homeless service providers on the operation of the ESG program, with an emphasis on the type and level of spending to administer grants received under the program. Table 3 provides details on grantees’ receipt of ESG funds in the states we visited. The states we visited collectively received 24.5 percent of the total ESG funds HUD awarded to grantees in fiscal year 2009. Because we used a nongeneralizable sample to select state grantees that had received larger amounts of ESG funding in fiscal year 2009, our findings cannot be used to make inferences about other grant recipients. Other grantees that we did not visit may have different characteristics that are unknown to us. However, we believe that our selection of the states and recipients was appropriate for our design and objectives, and that the selection provides valid and reliable evidence to support our work. We interviewed grantees and subgrantees in the states we visited to obtain information on administrative activities performed, the cost of performing those activities, and related topics. We also interviewed HUD officials, as well as representatives of national organizations that are involved with homeless issues, are familiar with trends in charitable giving, or represent local governments. We also researched the legislative history of the ESG program. 
We examined HUD guidance, federal regulations, and relevant Office of Management and Budget (OMB) circulars on allowability of administrative costs, including circulars A-87, Cost Principles for State, Local, and Indian Tribal Governments, and A-122, Cost Principles for Non-Profit Organizations. Further, we reviewed state and local government ESG solicitation documents, such as Notices of Funding Availability and Requests for Proposal. To determine how the ESG program’s allowance for administrative costs compares with administrative cost allowances for selected other targeted federal homeless grant programs, plus selected other HUD formula-based grant programs, we interviewed officials from HUD and the Departments of Education, Labor, and Health and Human Services. We examined relevant federal statutes and regulations, as well as relevant OMB circulars. We also examined program guidance and documents, such as desk guides, resource manuals, solicitations for grant applications, and requests for applications, for the federal targeted homeless grant programs and the other HUD formula grant programs that we reviewed. To determine how the nature or amount of administrative costs might be different under the changes Congress made to the ESG program in the Homeless Emergency Assistance and Rapid Transition to Housing Act of 2009, we reviewed relevant provisions of the act detailing the newly allowable activities. We also interviewed HUD officials, state and local government officials, representatives of homeless organizations, and homeless service providers to obtain their perspectives. We conducted this performance audit from August 2009 to May 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Marshall Hamlett, Assistant Director; William Chatlos; Meredith Graves; Kun-Fang Lee; Marc Molino; Christopher Schmitt; Jennifer Schwartz; and Paul Thompson made major contributions to this report.
The Homeless Emergency Assistance and Rapid Transition to Housing Act of 2009 (HEARTH Act) directed GAO to study the appropriate administrative costs of the U.S. Department of Housing and Urban Development (HUD) Emergency Shelter Grants Program (ESG)—a widely used, formula-based program that supports services to persons experiencing homelessness. This report discusses (1) for selected recipients, the types of administrative activities performed and administrative costs incurred under the ESG program, and the extent to which grant proceeds cover these administrative costs; (2) how the ESG program's allowance for administrative costs compares with administrative cost allowances for selected other targeted federal homeless grant programs, plus selected other HUD formula-based grant programs; and (3) how the nature or amount of administrative costs might be different under changes Congress made to the ESG program in the HEARTH Act that expand the types of activities that may be funded. To address these issues, GAO reviewed relevant policies and documents, interviewed officials of HUD and other agencies, made site visits in four states, reviewed HUD and other available standards on eligible administrative costs for federal grants, and reviewed cost allowances for homeless programs of the Departments of Education, Labor, and Health and Human Services. GAO makes no recommendations in this report. ESG grantees and subgrantees we visited in four states performed a range of administrative activities, but the ESG program's allowance for administrative costs—currently 5 percent—did not fully cover the cost of these activities. Grantees generally focused their administrative activities on awarding subgrants and monitoring subgrantee performance, while subgrantees focused their administrative activities on operating their programs and reporting results to their respective grantees. 
To cover unfunded ESG administrative costs, grantees and subgrantees told us they used other sources, such as other grants or private donations. They added that these estimated unfunded administrative costs, which averaged 13.2 percent of their ESG grant proceeds and ranged from 2.5 percent to 56 percent, diminished their ability to support other program activities. In addition, we found few standards available for evaluating the appropriateness of ESG administrative costs, and grantees and subgrantees in the states we visited monitored ESG administrative costs in varying levels of detail. The funding and treatment of administrative costs varied across other targeted federal homeless grant programs we reviewed. For example, the maximum administrative allowance for grantees ranged from 4 percent to 50 percent for programs with such a provision; the ESG program's current 5 percent allowance is thus one of the lower amounts provided. Programs with similar funding structures varied in their requirements for grantees to share their administrative allowance with subgrantees; the ESG program generally does not require grantees to share their allowance. In addition, none of the programs we reviewed offered comprehensive direction on eligible and ineligible administrative activities. Overall, these and other varying program features make it difficult to make direct comparisons between the administrative cost provisions of the ESG program and those of other targeted federal homeless grant programs. A number of ESG grantees and subgrantees we visited told us they expect the new ESG activities authorized by the HEARTH Act will result in different kinds of administrative activities that in many cases will be more costly. They cited client screening and eligibility verification, technical assistance to subgrantees, number of grant applicants, and facility management and collaboration with third parties as among areas where administrative costs may increase. 
Although the HEARTH Act makes significant changes, including increasing the administrative cost allowance to 7.5 percent, it remains unclear when new program activities might be implemented. Uncertainty over how and when the new ESG program might be implemented, plus variation in administrative activities under the current program, complicate any attempt to determine the appropriate size of the ESG administrative allowance. HUD told us in comments on a draft of this report that some subgrantees appear to be confusing program and administrative costs, thus potentially overstating any need for a larger administrative allowance.
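The allowance arithmetic running through the findings above can be illustrated with a short sketch. The 5 percent and 7.5 percent allowance rates are the statutory figures discussed in the report; the grant and cost amounts below are hypothetical, chosen only so the resulting shortfall matches the 13.2 percent average the grantees reported.

```python
def unfunded_admin_share(grant, actual_admin_costs, allowance_rate):
    """Return unfunded administrative costs as a share of ESG grant proceeds.

    The allowance covers administrative costs up to allowance_rate * grant;
    any remainder is unfunded and must come from other sources.
    """
    allowance = grant * allowance_rate
    unfunded = max(actual_admin_costs - allowance, 0.0)
    return unfunded / grant


# Hypothetical subgrantee: a $200,000 grant with $36,400 in actual
# administrative costs (figures invented for illustration).
grant = 200_000.0
actual = 36_400.0

under_current = unfunded_admin_share(grant, actual, 0.05)    # current 5% allowance
under_hearth = unfunded_admin_share(grant, actual, 0.075)    # HEARTH Act 7.5% allowance

print(f"Unfunded share at 5 percent allowance:   {under_current:.1%}")
print(f"Unfunded share at 7.5 percent allowance: {under_hearth:.1%}")
```

As the sketch suggests, raising the allowance from 5 to 7.5 percent narrows—but does not eliminate—a shortfall of the size grantees described.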
Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series—managed by NOAA—and the Defense Meteorological Satellite Program (DMSP)—managed by the Air Force. These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather 3 or more days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate their effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2026. To manage this program, DOD, NOAA, and NASA formed a tri-agency Integrated Program Office, located within NOAA. 
Within the program office, each agency has the lead on certain activities: NOAA has overall program management responsibility for the converged system and for satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. The NPOESS program office is overseen by an Executive Committee, which is made up of the Administrators of NOAA and NASA and the Under Secretary of the Air Force. NPOESS is a major system acquisition that was originally estimated to cost about $6.5 billion over the 24-year life of the program from its inception in 1995 through 2018. The program is to provide satellite development, satellite launch and operation, and ground-based satellite data processing. These deliverables are grouped into four main categories: (1) the space segment, which includes the satellites and sensors; (2) the integrated data processing segment, which is the system for transforming raw data into environmental data records (EDR) and is to be located at four data processing centers; (3) the command, control, and communications segment, which includes the equipment and services needed to support satellite operations; and (4) the launch segment, which includes launch vehicle services. When the NPOESS engineering, manufacturing, and development contract was awarded in August 2002, the cost estimate was adjusted to $7 billion. Acquisition plans called for the procurement and launch of six satellites over the life of the program, as well as the integration of 13 instruments—consisting of 10 environmental sensors and 3 subsystems. Together, the sensors were to receive and transmit data on atmospheric, cloud cover, environmental, climatic, oceanographic, and solar-geophysical observations. 
The subsystems were to support nonenvironmental search and rescue efforts, sensor survivability, and environmental data collection activities. The program office considered 4 of the sensors to be critical because they provide data for key weather products; these sensors are in bold in table 1, which describes each of the expected NPOESS instruments. In addition, a demonstration satellite (called the NPOESS Preparatory Project or NPP) was planned to be launched several years before the first NPOESS satellite in order to reduce the risk associated with launching new sensor technologies and to ensure continuity of climate data with NASA’s Earth Observing System satellites. NPP is to host three of the four critical NPOESS sensors (VIIRS, CrIS, and ATMS), as well as one other noncritical sensor (OMPS). NPP is to provide the program office and the processing centers an early opportunity to work with the sensors, ground control, and data processing systems. When the NPOESS development contract was awarded, the schedule for launching the satellites was driven by a requirement that the satellites be available to back up the final POES and DMSP satellites should anything go wrong during the planned launches of these satellites. Early program milestones included (1) launching NPP by May 2006, (2) having the first NPOESS satellite available to back up the final POES satellite launch in March 2008, and (3) having the second NPOESS satellite available to back up the final DMSP satellite launch in October 2009. If the NPOESS satellites were not needed to back up the final predecessor satellites, their anticipated launch dates would have been April 2009 and June 2011, respectively. Over the last few years, NPOESS has experienced continued cost increases and schedule delays, requiring difficult decisions to be made about the program’s direction and capabilities. 
In 2003, we reported that changes in the NPOESS funding stream led the program to develop a new program cost and schedule baseline. After this new baseline was completed in 2004, we reported that the program office increased the NPOESS cost estimate from about $7 billion to $8.1 billion; delayed key milestones, including the planned launch of the first NPOESS satellite—which was delayed by 7 months; and extended the life of the program from 2018 to 2020. At that time, we also noted that other factors could further affect the revised cost and schedule estimates. Specifically, the contractor was not meeting expected cost and schedule targets on the new baseline because of technical issues in the development of key sensors, including the critical VIIRS sensor. Based on its performance through May 2004, we estimated that the contractor would most likely overrun its contract at completion in September 2011 by $500 million—thereby increasing the projected life cycle cost to $8.6 billion. The program office’s baseline cost estimate was subsequently adjusted to $8.4 billion. In mid-November 2005, we reported that NPOESS continued to experience problems in the development of a key sensor, resulting in schedule delays and anticipated cost increases. At that time, we projected that the program’s cost estimate had grown to about $10 billion based on contractor cost and schedule data. We reported that the program’s issues were due, in part, to problems at multiple levels of management—including subcontractor, contractor, program office, and executive leadership. Recognizing that the budget for the program was no longer executable, the NPOESS Executive Committee planned to make a decision in December 2005 on the future direction of the program—what would be delivered, at what cost, and by when. This involved deciding among options involving increased costs, delayed schedules, and reduced functionality. 
We noted that continued oversight, strong leadership, and timely decision making were more critical than ever, and we urged the committee to make a decision quickly so that the program could proceed. However, we subsequently reported that, in late November 2005, NPOESS cost growth exceeded a legislatively mandated threshold that requires DOD to certify the program to Congress. This placed any decision about the future direction of the program on hold until the certification took place in June 2006. In the meantime, the program office implemented an interim program plan for fiscal year 2006 to continue work on key sensors and other program elements using fiscal year 2006 funding. The Nunn-McCurdy law requires DOD to take specific actions when a major defense acquisition program exceeds certain cost increase thresholds. The law requires the Secretary of Defense to notify Congress when a major defense acquisition is expected to overrun its project baseline by 15 percent or more and to certify the program to Congress when it is expected to overrun its baseline by 25 percent or more. In late November 2005, NPOESS exceeded the 25 percent threshold, and DOD was required to certify the program. Certifying a program entailed providing a determination that (1) the program is essential to national security, (2) there are no alternatives to the program that will provide equal or greater military capability at less cost, (3) the new estimates of the program’s cost are reasonable, and (4) the management structure for the program is adequate to manage and control costs. DOD established tri-agency teams—made up of DOD, NOAA, and NASA experts—to work on each of the four elements of the certification process. In June 2006, DOD (with the agreement of both of its partner agencies) certified a restructured NPOESS program, estimated to cost $12.5 billion through 2026. 
This decision approved a cost increase of $4 billion over the prior approved baseline cost and delayed the launch of NPP and the first two satellites by roughly 3 to 5 years. The new program also entailed establishing a stronger program management structure, reducing the number of satellites to be produced and launched from 6 to 4, and reducing the number of instruments on the satellites from 13 to 9—consisting of 7 environmental sensors and 2 subsystems. It also entailed using NPOESS satellites in the early morning and afternoon orbits and relying on European satellites for midmorning orbit data. Table 2 summarizes the major program changes made under the Nunn-McCurdy certification decision. The Nunn-McCurdy certification decision established new milestones for the delivery of key program elements, including launching NPP by January 2010, launching the first NPOESS satellite (called C1) by January 2013, and launching the second NPOESS satellite (called C2) by January 2016. These revised milestones deviated from prior plans to have the first NPOESS satellite available to back up the final POES satellite should anything go wrong during that launch. Delaying the launch of the first NPOESS satellite means that if the final POES satellite fails on launch, satellite data users would need to rely on the existing constellation of environmental satellites until NPP data become available—almost 2 years later. Although NPP was not intended to be an operational asset, NASA agreed to move NPP to a different orbit so that its data would be available in the event of a premature failure of the final POES satellite. However, NPP will not provide all of the operational capability planned for the NPOESS spacecraft. If the health of the existing constellation of satellites diminishes—or if NPP data are not available, timely, and reliable—then there could be a gap in environmental satellite data. Table 3 summarizes changes in key program milestones over time. 
In order to reduce program complexity, the Nunn-McCurdy certification decision decreased the number of NPOESS sensors from 13 to 9 and reduced the functionality of 4 sensors. Specifically, of the 13 original sensors, 5 sensors remain unchanged, 3 were replaced with less capable sensors, 1 was modified to provide less functionality, and 4 were cancelled. Table 4 shows the changes to NPOESS sensors, including the 4 identified in bold as critical sensors. The changes in NPOESS sensors affected the number and quality of the resulting weather and environmental products, called environmental data records or EDRs. In selecting sensors for the restructured program, the agencies placed the highest priority on continuing current operational weather capabilities and a lower priority on obtaining selected environmental and climate measuring capabilities. As a result, the revised NPOESS system has significantly less capability for providing global climate measures than was originally planned. Specifically, the number of EDRs was decreased from 55 to 39, of which 6 are of a reduced quality. The 39 EDRs that remain include cloud base height, land surface temperature, precipitation type and rate, and sea surface winds. The 16 EDRs that were removed include cloud particle size and distribution, sea surface height, net solar radiation at the top of the atmosphere, and products to depict the electric fields in the space environment. The 6 EDRs that are of a reduced quality include ozone profile, soil moisture, and multiple products depicting energy in the space environment. Since the June 2006 decision to revise the scope, cost, and schedule of the NPOESS program, the program office has made progress in restructuring the satellite acquisition; however, important tasks remain to be done. 
Restructuring a major acquisition program like NPOESS is a process that involves identifying time-critical and high-priority work and keeping this work moving forward, while reassessing development priorities, interdependencies, deliverables, risks, and costs. It also involves revising important acquisition documents including the memorandum of agreement on the roles and responsibilities of the three agencies, the acquisition strategy, the system engineering plan, the test and evaluation master plan, the integrated master schedule defining what needs to happen by when, and the acquisition program baseline. Specifically, the Nunn-McCurdy certification decision required the Secretaries of Defense and Commerce and the Administrator of NASA to sign a revised memorandum of agreement by August 6, 2006. It also required that the program office, Program Executive Officer, and the Executive Committee revise and approve key acquisition documents including the acquisition strategy and system engineering plan by September 1, 2006, in order to proceed with the restructuring. Once these are completed, the program office can proceed to negotiate with its prime contractor on a new program baseline defining what will be delivered, by when, and at what cost. The NPOESS program office has made progress in restructuring the acquisition. Specifically, the program office has established interim program plans guiding the contractor’s work activities in 2006 and 2007 and has made progress in implementing these plans. The program office and contractor also developed an integrated master schedule for the remainder of the program—beyond fiscal year 2007. This integrated master schedule details the steps leading up to launching NPP by September 2009, launching the first NPOESS satellite in January 2013, and launching the second NPOESS satellite in January 2016. 
Near-term steps include completing and testing the VIIRS, CrIS, and OMPS sensors; integrating these sensors with the NPP spacecraft and completing integration testing; completing the data processing system and integrating it with the command, control, and communications segment; and performing advanced acceptance testing of the overall system of systems for NPP. However, key steps remain for the acquisition restructuring to be completed. Although the program office made progress in revising key acquisition documents, including the system engineering plan, the test and evaluation master plan, and the acquisition strategy plan, it has not yet obtained the approval of the Secretaries of Commerce and Defense and the Administrator of NASA on the memorandum of agreement among the three agencies, nor has it obtained the approval of the NPOESS Executive Committee on the other key acquisition documents. As of June 2007, these approvals are over 9 months past due. Agency officials noted that the September 1, 2006, due date for the key acquisition documents was not realistic given the complexity of coordinating documents among three different agencies. Finalizing these documents is critical to ensuring interagency agreement and will allow the program office to move forward in completing other activities related to restructuring the program. These other activities include completing an integrated baseline review with the contractor to reach agreement on the schedule and work activities, and finalizing changes to the NPOESS development and production contract. Program costs are also likely to be adjusted during upcoming negotiations on contract changes—an event that the Program Director expects to occur by July 2007. Completion of these activities will allow the program office to lock down a new acquisition baseline cost and schedule. 
Until key acquisition documents are finalized and approved, the program faces increased risk that it will not be able to complete important restructuring activities in time to move forward in fiscal year 2008 with a new program baseline in place. This places the NPOESS program at risk of continued delays and future cost increases. The NPOESS program has made progress in establishing an effective management structure, but—almost a year after this structure was endorsed during the Nunn-McCurdy certification process—the Integrated Program Office still faces staffing problems. Over the past few years, we and others have raised concerns about management problems at all levels of the NPOESS program, including subcontractor and contractor management, program office management, and executive-level management. Two independent review teams also noted a shortage of skilled program staff, including budget analysts and system engineers. Since that time, the NPOESS program has made progress in establishing an effective management structure—including establishing a new organizational framework with increased oversight by program executives, instituting more frequent subcontractor, contractor, and program reviews, and effectively managing risks and performance. However, DOD’s plans for reassigning the Program Executive Officer in the summer of 2007 increase the program’s risks. Additionally, the program lacks a staffing process that clearly identifies staffing needs, gaps, and plans for filling those gaps. As a result, the program office has experienced delays in getting core management activities under way and lacks the staff it needs to execute day-to-day management activities. The NPOESS program has made progress in establishing an effective management structure and increasing the frequency and intensity of its oversight activities. 
Over the past few years, we and others have raised concerns about management problems at all levels of management on the NPOESS program, including subcontractor and contractor management, program office management, and executive-level management. In response to recommendations made by two different independent review teams, the program office began exploring options in late 2005 and early 2006 for revising its management structure. In November 2005, the Executive Committee established and filled a Program Executive Officer position, senior to the NPOESS Program Director, to streamline decision making and to provide oversight to the program. This Program Executive Officer reports directly to the Executive Committee. Subsequently, the Program Executive Officer and the Program Director proposed a revised organizational framework that realigned division managers within the Integrated Program Office responsible for overseeing key elements of the acquisition and increased staffing in key areas. In June 2006, the Nunn-McCurdy certification decision approved this new management structure and the Integrated Program Office implemented it. Figure 1 provides an overview of the relationships among the Integrated Program Office, the Program Executive Office, and the Executive Committee, as well as key divisions within the program office. Operating under this new management structure, the program office implemented more rigorous and frequent subcontractor, contractor, and program reviews, improved visibility into risk management and mitigation activities, and institutionalized the use of earned value management techniques to monitor contractor performance. In addition to these program office activities, the Program Executive Officer implemented monthly program reviews and increased the frequency of contacts with the Executive Committee. 
The Program Executive Officer briefs the Executive Committee in monthly letters, apprising committee members of the program’s status, progress, risks, and earned value, and the Executive Committee now meets on a quarterly basis—whereas in the recent past, we reported that the Executive Committee had met only five times in 2 years. Although the NPOESS program has made progress in establishing an effective management structure, this progress is currently at risk. We recently reported that DOD space acquisitions are at increased risk due in part to frequent turnover in leadership positions, and we suggested that addressing this will require DOD to consider matching officials’ tenure with the development or delivery of a product. In March 2007, NPOESS program officials stated that DOD is planning to reassign the recently appointed Program Executive Officer in the summer of 2007 as part of this executive’s natural career progression. As of June 2007, the Program Executive Officer has held this position for 19 months. Given that the program is currently still being restructured, and that there are significant challenges in being able to meet critical deadlines to ensure satellite data continuity, such a move adds unnecessary risk to an already risky program. The NPOESS program office has filled key vacancies but lacks a staffing process that identifies programwide staffing requirements and plans for filling those needed positions. Sound human capital management calls for establishing a process or plan for determining staffing requirements, identifying any gaps in staffing, and planning to fill critical staffing gaps. Program office staffing is especially important for NPOESS, given the acknowledgment by multiple independent review teams that staffing shortfalls contributed to past problems. 
Specifically, these review teams noted shortages in the number of system engineers needed to provide adequate oversight of subcontractor and contractor engineering activities and in the number of budget and cost analysts needed to assess contractor cost and earned value reports. To rectify this situation, the June 2006 certification decision directed the Program Director to take immediate actions to fill vacant positions at the program office with the approval of the Program Executive Officer. Since the June 2006 decision to revise NPOESS management structure, the program office has filled multiple critical positions, including a budget officer, a chief system engineer, an algorithm division chief, and a contracts director. In addition, on an ad hoc basis, individual division managers have assessed their needs and initiated plans to hire staff for key positions. However, the program office lacks a programwide process for identifying and filling all needed positions. As a result, division managers often wait months for critical positions to be filled. For example, in February 2006, the NPOESS program estimated that it needed to hire up to 10 new budget analysts. As of September 2006, none of these positions had been filled. As of April 2007, program officials estimated that they still needed to fill 5 budget analyst positions, 5 systems engineering positions, and 10 technical manager positions. The majority of the vacancies—4 of the 5 budget positions, 4 of the 5 systems engineering positions, and 8 of the 10 technical manager positions—are to be provided by NOAA. NOAA officials noted that each of these positions is in some stage of being filled—that is, recruitment packages are being developed or reviewed, vacancies are being advertised, or candidates are being interviewed, selected, and approved. 
The program office attributes its staffing delays to not having the right personnel in place to facilitate this process, and it did not begin to develop a staffing process until November 2006. Program officials noted that the tri-agency nature of the program adds unusual layers of complexity to the hiring and administrative functions because each agency has its own hiring and performance management rules. In November 2006, the program office brought in an administrative officer who took the lead in pulling together the division managers’ individual assessments of needed staff and has been working with the division managers to refine this list. This new administrative officer plans to train division managers in how to assess their needs and to hire needed staff, and to develop a process by which evolving needs are identified and positions are filled. However, there is as yet no date set for establishing this basic programwide staffing process. The lack of a programwide staffing process has led to extended delays in determining what staff is needed and in bringing those staff on board, which in turn has delayed core activities such as establishing the program office’s cost estimate and bringing in needed contracting expertise. Additionally, until a programwide staffing process is in place, the program office risks not having the staff it needs to execute day-to-day management activities. In commenting on a draft of our report, Commerce stated that NOAA implemented an accelerated hiring model. More recently, the NPOESS program office reported that several critical positions were filled in April and May 2007. However, we have not yet evaluated NOAA’s accelerated hiring model and, as of June 2007, over 10 key positions remain to be filled. Major segments of the NPOESS program—the space segment and ground systems segment—are under development; however, significant problems have occurred and risks remain. 
The program office is aware of these risks and is working to mitigate them, but continued problems could affect the program’s overall cost and schedule. Given the tight time frames for completing key sensors, integrating them on the NPP spacecraft, and developing, testing, and deploying the ground-based data processing systems, it will be important for the NPOESS Integrated Program Office, the Program Executive Office, and the Executive Committee to continue to provide close oversight of milestones and risks. The space segment includes the sensors and the spacecraft. Four sensors are of critical importance—VIIRS, CrIS, OMPS, and ATMS—because they are to be launched on the NPP satellite in September 2009. Initiating work on another sensor, the Microwave imager/sounder, is also important because this new sensor—replacing the cancelled CMIS sensor—will need to be developed in time for the second NPOESS satellite launch. Over the past year, the program made progress on each of the sensors and the spacecraft. However, two sensors, VIIRS and CrIS, have experienced major problems. The status of each of the components of the space segment is described in table 5. Program officials regularly track risks associated with various NPOESS components and work to mitigate them. Having identified both VIIRS and CrIS as high risk, OMPS as moderate risk, and the other components as low risk, the program office is working closely with the contractors and subcontractors to resolve sensor problems. Program officials have identified work-arounds that will allow them to move forward in testing the VIIRS engineering unit and have approved the flight unit to proceed to a technical readiness review milestone. Regarding CrIS, as of March 2007, a failure review board identified root causes of its structural failure, identified plans for resolving them, and initiated inspections of sensor modules and subsystems for damage. 
An agency official reported that there is sufficient funding in the program office’s and contractor’s fiscal year 2007 management reserve funds to allow for troubleshooting both VIIRS and CrIS problems. However, until the CrIS failure review board fully determines the amount of rework that is necessary to fix the problems, it is unknown whether additional funds will be needed or whether the time frame for CrIS’s delivery will be delayed. According to agency officials, CrIS is not on the program schedule’s critical path, and there is sufficient schedule margin to absorb the time it will take to conduct a thorough failure review process. Managing the risks associated with the development of VIIRS and CrIS is of particular importance because these components are to be demonstrated on the NPP satellite, currently scheduled for launch in September 2009. Any delay in the NPP launch date could affect the overall NPOESS program, because the success of the program depends on the lessons learned in data processing and system integration from the NPP satellite. Additionally, continued sensor problems could lead to higher final program costs. Development of the ground segment—which includes the interface data processing system, the ground stations that are to receive satellite data, and the ground-based command, control, and communications system—is under way and on track. However, important work pertaining to developing the algorithms that translate satellite data into weather products within the integrated data processing segment remains to be completed. Table 6 describes each of the components of the ground segment and identifies the status of each. The NPOESS program office plans to continue to address risks facing IDPS development. Specifically, the IDPS team is working to reduce data processing delays by seeking to limit the number of data calls, improve the efficiency of the data management system, increase the efficiency of the algorithms, and increase the number of processors. 
The program office also developed a resource center consisting of a logical technical library, a data archive, and a set of analytical tools to coordinate, communicate, and facilitate the work of algorithm subject matter experts on algorithm development and calibration/validation preparations. Managing the risks associated with the development of the IDPS system is of particular importance because this system will be needed to process NPP data. Because of the importance of effectively managing the NPOESS program to ensure that there are no gaps in the continuity of critical weather and environmental observations, in our accompanying report we made recommendations to the Secretaries of Defense and Commerce and to the Administrator of NASA to ensure that the responsible executives within their respective organizations approve key acquisition documents, including the memorandum of agreement among the three agencies, the system engineering plan, the test and evaluation master plan, and the acquisition strategy, as quickly as possible but no later than April 30, 2007. We also recommended that the Secretary of Defense direct the Air Force to delay reassigning the recently appointed Program Executive Officer until all sensors have been delivered to the NPOESS Preparatory Program; these deliveries are currently scheduled to occur by July 2008. We also made two additional recommendations to the Secretary of Commerce to (1) develop and implement a written process for identifying and addressing human capital needs and for streamlining how the program handles the three different agencies’ administrative procedures and (2) establish a plan for immediately filling needed positions. In written comments, all three agencies agreed that it was important to finalize key acquisition documents in a timely manner, and DOD proposed extending the due dates for the documents to July 2, 2007. 
Because the NPOESS program office intends to complete contract negotiations by July 4, 2007, we remain concerned that any further delays in approving the documents could delay contract negotiations and thus increase the risk to the program. In addition, the Department of Commerce agreed with our recommendation to develop and implement a written process for identifying and addressing human capital needs and to streamline how the program handles the three different agencies’ administrative procedures. The department also agreed with our recommendation to plan to immediately fill open positions at the NPOESS program office. Commerce noted that NOAA identified the skill sets needed for the program and has implemented an accelerated hiring model and schedule to fill all NOAA positions in the NPOESS program. Commerce also noted that NOAA has made NPOESS hiring a high priority and has documented a strategy—including milestones—to ensure that all NOAA positions are filled by June 2007. DOD did not concur with our recommendation to delay reassigning the Program Executive Officer, noting that the NPOESS System Program Director responsible for executing the acquisition program would remain in place for 4 years. The Department of Commerce also noted that the Program Executive Officer position is planned to rotate between the Air Force and NOAA. Commerce also stated that a selection would be made before the departure of the current Program Executive Officer to provide an overlap period to allow for knowledge transfer and ensure continuity. However, over the last few years, we and others (including an independent review team and the Commerce Inspector General) have reported that ineffective executive-level oversight helped foster the NPOESS program’s cost and schedule overruns. We remain concerned that reassigning the Program Executive Officer at a time when NPOESS is still facing critical cost, schedule, and technical challenges will place the program at further risk. 
In addition, while it is important that the System Program Director remain in place to ensure continuity in executing the acquisition, this position does not ensure continuity in the functions of the Program Executive Officer. The current Program Executive Officer is experienced in providing oversight of the progress, issues, and challenges facing NPOESS and coordinating with Executive Committee members as well as the Defense acquisition authorities. Additionally, while the Program Executive Officer position is planned to rotate between agencies, the memorandum of agreement documenting this arrangement is still in draft and should be flexible enough to allow the current Program Executive Officer to remain until critical risks have been addressed. Further, while Commerce plans to allow a period of overlap between the selection of a new Program Executive Officer and the departure of the current one, time is running out. The current Program Executive Officer is expected to depart in early July 2007, and as of early June 2007, a successor has not yet been named. NPOESS is an extremely complex acquisition, involving three agencies, multiple contractors, and advanced technologies. There is not sufficient time to transfer knowledge and develop the sound professional working relationships that the new Program Executive Officer will need to succeed in that role. Thus, we remain convinced that, given NPOESS’s current challenges, reassigning the current Program Executive Officer at this time would not be appropriate. In summary, NPOESS restructuring is well under way, and the program has made progress in establishing an effective management structure. However, key steps remain in restructuring the acquisition, including completing important acquisition documents such as the system engineering plan, the acquisition program baseline, and the memorandum of agreement documenting the three agencies’ roles and responsibilities. 
Until these key documents are approved, the program cannot finalize its restructuring plans. Additionally, the program office continues to have difficulty filling key positions and lacks a programwide staffing process. Until the program establishes an effective and repeatable staffing process, it will have difficulties in identifying and filling its staffing needs in a timely manner. Having insufficient staff in key positions impedes the program office’s ability to conduct important management and oversight activities, including revising cost and schedule estimates, monitoring progress, and managing technical risks. The program faces even further challenges if DOD proceeds with plans to reassign the Program Executive Officer this summer. Such a move would add unnecessary risk to an already risky program. In addition, further cost increases and schedule delays are likely because of technical problems on key sensors and pending contract negotiations. Major program segments—including the space and ground segments—are making progress in their development and testing. However, two critical sensors have experienced problems and are considered high risk, and risks remain in developing and implementing the ground-based data processing system. Given the tight time frames for completing key sensors, integrating them, and getting the ground-based data processing systems developed, tested, and deployed, continued close oversight of milestones and risks is essential to minimize potential cost increases and schedule delays. Mr. Chairmen, this concludes my statement. I would be happy to answer any questions that you or members of the committee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Other key contributors to this testimony include Colleen Phillips (Assistant Director), Carol Cha, and Teresa Smith. 
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) is a tri-agency acquisition—managed by the Departments of Commerce and Defense and the National Aeronautics and Space Administration—which has experienced escalating costs, schedule delays, and technical difficulties. These factors led to a June 2006 decision to restructure the program, thereby decreasing its complexity, increasing its estimated cost to $12.5 billion, and delaying the first two satellites by 3 to 5 years. GAO was asked to summarize a report being released today that (1) assesses progress in restructuring the acquisition, (2) evaluates progress in establishing an effective management structure, and (3) identifies the status and key risks on the program's major segments. The NPOESS program office has made progress in restructuring the acquisition by establishing and implementing interim program plans guiding contractors' work activities in 2006 and 2007; however, important tasks remain to be done. Executive approvals of key acquisition documents are about 9 months late—due in part to the complexity of navigating three agencies' approval processes. Delays in finalizing these documents could hinder plans to complete contract negotiations by July 2007 and could keep the program from moving forward in fiscal year 2008 with a new program baseline. The program office has also made progress in establishing an effective management structure by adopting a new organizational framework with increased oversight from program executives and by instituting more frequent and rigorous program reviews; however, plans to reassign the recently appointed Program Executive Officer will likely increase the program's risks. Additionally, the program lacks a process and plan for identifying and filling staffing shortages, which has led to delays in key activities such as cost estimating and contract revisions. As of June 2007, key positions remain to be filled. 
Development and testing of major NPOESS segments—including key sensors and ground systems—are under way, but significant risks remain. For example, while work continues on key sensors, two of them—the visible/infrared imager radiometer suite and the cross-track infrared sounder—experienced significant problems and are considered high risk. Continued sensor problems could cause further cost increases and schedule delays. Additionally, while progress has been made in reducing delays in the data processing system, work remains in refining the algorithms needed to translate sensor observations into usable weather products. Given the tight time frames for completing this work, it will be important for program officials and executives to continue to provide close oversight of milestones and risks.
The United States, along with six allies, established the Missile Technology Control Regime (MTCR) in 1987. The Regime is a voluntary agreement among member countries to limit the proliferation of missiles capable of delivering nuclear, biological, and chemical weapons and their associated equipment and technology. The Regime consists of common export policy guidelines and a list of controlled items that include complete missile systems (rocket and unmanned air vehicle systems) and missile-related components and technologies that may have civilian applications. The list, known as the Equipment, Software, and Technology Annex (hereafter referred to as the Regime Annex), is periodically updated to reflect technological advances. Member countries agree to control exports of Regime items in accordance with their respective national laws. The United States fulfills its MTCR commitments primarily through the export control systems of the Departments of Commerce and State. These two systems were founded on different premises. The Commerce Department, through its Bureau of Export Administration, controls exports of most dual-use items and technologies under the authority of the Export Administration Act of 1979. As such, the Commerce Department is charged with weighing U.S. economic and trade interests along with national security and foreign policy interests. Dual-use items subject to the Commerce Department’s export controls are identified in the Commerce Control List of the Export Administration Regulations. In contrast, the State Department, through its Office of Defense Trade Controls, controls exports of defense articles and services under the authority of the Arms Export Control Act. The State Department’s export control system is designed primarily to further national security and foreign policy interests. The items controlled by the State Department can be found in the International Traffic in Arms Regulations, specifically within the U.S. 
Munitions List, which the State Department develops with the concurrence of the Department of Defense. The Departments of State and Defense are reviewing and revising different portions of the U.S. Munitions List on an annual basis, as part of the Defense Trade Security Initiative, to ensure that coverage of items on the list is appropriate. Exporters are responsible for determining whether an item they seek to export is on the Commerce Control List and, therefore, subject to the Commerce Department’s jurisdiction, or on the U.S. Munitions List and subject to the State Department’s jurisdiction. With the passage of the National Defense Authorization Act for Fiscal Year 1991, the Congress amended both the Export Administration Act and the Arms Export Control Act to include restrictions on the export of Regime items. Under the amended Export Administration Act, the Secretary of Commerce, in consultation with the Secretaries of State and Defense and other officials, is required to establish and maintain as part of the Commerce Control List, a list of all dual-use goods and technologies that appear on the Regime Annex. Under the amended Arms Export Control Act, the Secretary of State, in consultation with the Secretary of Defense and others, is to establish and maintain as part of the U.S. Munitions List, a list of Regime items that are not controlled under the Export Administration Act. Thus, under these statutes, individual Regime items are to be listed on either the Commerce Control List or the U.S. Munitions List—but not both lists. The Commerce Control List identifies a variety of controlled dual-use items, some of which are designated as being controlled for missile technology reasons, and includes Regime items. In contrast, the U.S. Munitions List contains a separate section that identifies Regime items subject to the State Department’s jurisdiction. 
Forty-seven of 196 Regime items appear subject to the export control jurisdictions of both the Commerce Department and the State Department. For these 47 items, either (1) the description of the item is the same on both the Commerce Control List and the U.S. Munitions List or (2) one Department claims jurisdiction over an item even though the item does not explicitly appear on its export control list but does appear on the other Department’s list. Appendix I contains descriptions of the 47 Regime items and identifies where they are covered on the Commerce and State control lists. Table 1 provides examples of Regime items that appear on both export control lists with either identical descriptions or overlapping performance parameters. Neither the Commerce Control List nor the U.S. Munitions List provides criteria to differentiate when these items are subject to the Commerce Department’s jurisdiction and when they are subject to the State Department’s jurisdiction. The Commerce Control List sometimes provides a cross-reference to the U.S. Munitions List when the State Department controls certain items meeting particular parameters. However, Commerce Department officials said that the Commerce Control List does not always include such references because the regulations would become too voluminous. The State Department’s control list generally does not indicate that an item may be subject to the Commerce Department’s control since the U.S. Munitions List is supposed to identify only those items subject to the State Department’s jurisdiction. In other cases, the State Department claims jurisdiction over software and technologies related to missile production equipment and facilities, although these items do not explicitly appear on the U.S. Munitions List. These items, however, appear on the Commerce Control List. Two factors have contributed to unclear jurisdiction for Regime items. 
First, officials at the Departments of Commerce and State have expressed different understandings of how to define which Regime items are Commerce Department-controlled and which are State Department- controlled. Second, consultations between the Departments of Commerce and State on Regime-related changes to their regulations have not ensured that items are clearly subject to the jurisdiction of one Department or the other. The State Department office responsible for maintaining the U.S. Munitions List has not formally participated in reviews of proposed changes to the Commerce Control List. Furthermore, the State Department has not updated the MTCR section of the U.S. Munitions List since the mid-1990s, precluding the opportunity to consult with the Commerce Department. According to Commerce Department officials, under the State Department’s regulations an item is subject to the State Department’s export controls only if it “Is specifically designed, developed, configured, adapted, or modified for a military application, and (i) Does not have predominant civil applications, and (ii) Does not have performance equivalent (defined by form, fit and function) to those of an article or service used for civil applications; ….” Conversely, according to Commerce Department officials, if the item does not meet these criteria—even if it appears in the MTCR section of the U.S. Munitions List—it should be subject to the Commerce Department’s export controls. However, a senior State Department official disagreed with the Commerce Department officials’ interpretation of the State Department’s regulations. The official explained that the criteria cited by Commerce Department officials are used by the State Department, in consultation with the Defense Department, to determine which items will appear on the U.S. Munitions List and should not be used by exporters and others to determine whether an item is subject to the State Department’s export controls. Instead, exporters are to consult the U.S. Munitions List to determine which Regime items are under the State Department’s jurisdiction. 
Consultations between the Departments of Commerce and State have been limited. According to the Commerce Department, it coordinates its regulations and proposed changes for the control of Regime items with the Departments of State, Defense, and Energy and, therefore, these Departments should be aware of which Regime items appear on the Commerce Control List. However, officials from the State Department’s Office of Defense Trade Controls, which maintains the U.S. Munitions List, said they are not formally consulted to ensure that Regime items do not appear on both export control lists. Within the State Department, the Bureau of Nonproliferation formally reviews and comments on the Commerce Department’s regulations for the control of Regime items. A senior Bureau official said that the review is to ensure that Regime items are controlled, without concern for which Department has jurisdiction. Further, the State Department has not consulted with the Commerce Department in recent years regarding the Regime items covered by its export control list. According to a senior official with the Office of Defense Trade Controls, the Commerce Department was provided an opportunity to review the section of the U.S. Munitions List that identifies the Regime items subject to the State Department’s controls before the section was added to the International Traffic in Arms Regulations in 1994. However, this section of the State Department’s regulations has not been updated or revised since then to incorporate the periodic changes made to the Regime Annex. State Department officials maintain that the U.S. Munitions List does not have to be regularly revised to ensure that new items added to the Regime Annex are controlled, as those items are already controlled under the U.S. Munitions List’s broad categories. However, as a result of this lack of revision, the Commerce Department has not been provided another opportunity to review and comment on the Regime items covered by the U.S. 
Munitions List to ensure that items do not appear on both export control lists. The appearance of an item on both the Commerce Control List and the U.S. Munitions List and disagreements between the Departments over which one has jurisdiction may result in the same Regime item being subject to different restrictions and reviews, which may affect U.S. national interests and companies’ ability to export Regime items. While the Commerce Department’s export control system seeks to balance U.S. national security and foreign policy interests with economic interests, the State Department’s export control system was designed to primarily further national security and foreign policy interests. The differences in the underlying premises of the two Departments’ export control systems are reflected in their restrictions on where Regime items can be exported and processes to review export licensing applications. A key difference between the Departments’ export control systems is that some sanctions and embargoes only apply to items on the U.S. Munitions List and not to those on the Commerce Control List. For example, under U.S. law, licenses cannot be issued for the export of most missile technology and other items on the U.S. Munitions List to China. As a result, the State Department generally denies license applications involving the export of items on the U.S. Munitions List to China. This same restriction does not apply to items on the Commerce Control List. Missile technology items on the Commerce Control List may be licensed for export to China provided that certain legal requirements are met. Additionally, the State Department generally denies license applications involving exports of U.S. Munitions List items to Indonesia and Yugoslavia. The Commerce Department does not have a comparable policy for exports of Regime items to these countries. 
Because of these policy differences, the State Department could deny a license to an exporter seeking to export a Regime item to one of these countries, whereas the Commerce Department could approve a license to export the same item to these countries. Other sanctions apply to both export control lists, but the Departments have enforced these sanctions differently. For example, under the MTCR sanction provisions of the Export Administration Act and the Arms Export Control Act, the President generally is to impose sanctions on U.S. and foreign parties who improperly transferred Regime items. For the improper transfer of Regime-controlled components, equipment, material, and technology, the Departments of Commerce and State are to deny export licenses to the involved parties for all Regime items subject to their respective controls for a 2-year period. In applying MTCR sanctions, the Commerce Department has allowed Regime items to be exported to sanctioned parties if these items were incorporated into larger items not subject to these sanctions. The State Department, however, has prohibited the export to sanctioned parties of non-Regime items on the U.S. Munitions List if they contain Regime items. As a result, exporters have been subject to different levels of scrutiny and restrictions at the Departments of Commerce and State. Finally, the Commerce Department’s regulations do not require licenses for the export of Regime items on the Commerce Control List to Canada, while the Department of State’s regulations require licenses for the export of Regime items on the U.S. Munitions List to all countries. The exporter consulting the Commerce Control List could export an item to Canada without a license, while the exporter consulting the U.S. Munitions List would have to go through the Department of State’s license application process. The U.S. 
government may or may not have an opportunity to review and approve a Regime item exported to Canada, depending on whether the exporter consults the Commerce Control List or the U.S. Munitions List. Because of differences in the export control systems of the Departments of Commerce and State, it is critical that exporters properly determine whether their items are controlled on the Commerce Control List or the U.S. Munitions List. However, some of the companies we spoke with did not understand U.S. export controls as applied to missile technology items. For example, an official from one company stated that the company’s product is not exported for use in missiles and, therefore, the official did not understand why this product is controlled for missile technology reasons, even though it is on the Regime Annex. At another company, an official said that the State Department controls all Regime items and did not realize that the Commerce Department controls dual-use Regime items. Export licensing officials with another company said that companies acquired by their company had incorrectly determined that certain Regime items were Commerce Department-controlled when the items were State Department-controlled. An export licensing official from another company stated that when there is uncertainty as to which Department has jurisdiction over a particular Regime item, the company submits the license application to the Commerce Department with the expectation that the Commerce Department would send the license application to the State Department if the item were State Department-controlled. Officials from other companies said they relied on past experience, familiarity with a particular Department, and their own interpretations of the regulations when deciding where to submit an export license application. 
Some of the companies expressed uncertainty about the meaning of certain terms in the regulations, which sometimes made it difficult to determine whether to submit their license applications to the Commerce Department or the State Department. For example, officials from several companies indicated that they did not understand what the regulations mean when referring to items as specifically designed or modified for a military application. These officials noted that the Departments of Commerce and State do not provide either a regulatory definition or sufficient guidance for what constitutes being specifically designed or modified. As a result, an official with one company said there is room for interpretation on the part of exporters. Officials from these companies stated that if they make any modifications to an item for use by the military, they submit the license application to the State Department to ensure that they do not violate the State Department’s regulations and governing statute. The U.S. government has committed internationally to controlling Regime items because of its concerns about the threat missile proliferation poses to U.S. interests. The lack of clarity over which Department has jurisdiction over some Regime items may lead an exporter to seek a Commerce Department license for a militarily sensitive item controlled on the U.S. Munitions List or a State Department license for a dual-use item controlled on the Commerce Control List. The Commerce Department and State Department would review these license applications according to different criteria and restrictions and possibly reach different determinations on whether the item may be exported. Because there is unclear jurisdiction for critical Regime items, exporters are left to decide which Department should review their exports and, by default, the policy interests that are to be considered and acted upon. 
To ensure that proposed exports of Missile Technology Control Regime items are subject to the appropriate review process, we recommend that the Secretaries of Commerce and State direct the offices responsible for the Commerce Control List and the U.S. Munitions List, in consultation with others as appropriate, to jointly review the Regime Annex, determine the appropriate jurisdiction for items on the Annex, and revise their respective export control lists accordingly; that the Secretary of Commerce ensure that, when a Regime item generally controlled by the Commerce Department becomes subject to the State Department’s control upon meeting certain parameters, the Commerce Control List specify those parameters and provide a cross-reference to the U.S. Munitions List; and that the Secretary of State update the section of the U.S. Munitions List that identifies the Regime items subject to the State Department’s jurisdiction to ensure that it is consistent with the current version of the Regime Annex and provide a cross-reference to the Commerce Control List for those Regime items that would be subject to the Commerce Department’s control when certain parameters are met. The annual review of the U.S. Munitions List, which is being conducted as part of the Defense Trade Security Initiative, may provide a vehicle to implement these recommendations. In written comments on a draft of this report, the Commerce Department concurred with our recommendation to review the Commerce Control List and the U.S. Munitions List to provide additional clarity to exporters. However, the Commerce Department commented that jurisdiction for Regime items is generally clear and the current export control system is not a risk to U.S. nonproliferation interests. The Commerce Department stated that it refers its export license applications for Regime items to the State Department and other agencies for their review. 
According to the Commerce Department, the State Department has an opportunity to indicate that an item cannot be licensed by the Commerce Department because it is State Department-controlled. However, making a jurisdiction determination during the license review process delays the exporter in obtaining an approved license from the appropriate Department. By clarifying the regulations, the Departments would minimize such occurrences, which can increase the workloads of both exporters and the U.S. government. The Commerce Department’s comments are reprinted in appendix II, along with our evaluation of them. In written comments on a draft of this report, the State Department concurred with our recommendation to update the section of the U.S. Munitions List that identifies the Regime items subject to the State Department’s jurisdiction. The State Department said that, as part of this update, it will work with the Commerce Department in an effort to eliminate unclear jurisdiction for Regime items. According to the State Department, the process of updating this section has already begun and should be completed before the end of 2001. The State Department also provided technical comments to clarify which Regime items are subject to its jurisdiction, and we revised the report to reflect those comments. The State Department’s comments are reprinted in appendix III, along with our evaluation of them. To determine the division of jurisdiction over Regime items between the Departments of Commerce and State, we compared the Regime Equipment, Software, and Technology Annex of October 2000 with the January 2001 Commerce Control List and the April 2000 U.S. Munitions List (and subsequent updates made to each list). We then confirmed with officials from the Department of State’s Office of Defense Trade Controls and the Department of Commerce’s Bureau of Export Administration the Regime items that they claim as subject to their respective export controls. 
To identify the factors that contribute to unclear jurisdiction for Regime items, we interviewed officials with the Department of Defense’s Defense Threat Reduction Agency, the Department of State’s Bureau of Nonproliferation and Office of Defense Trade Controls, and the Department of Commerce’s Bureau of Export Administration. We also reviewed Commerce Department and State Department policies and practices for revising the export control lists. To identify the potential effects of unclear jurisdiction, we conducted structured interviews with 24 companies that export Regime items to discuss how they determine which Department controls their exports of Regime items and how they are affected by differences in the export control systems. These companies were selected on the basis of the number of license applications for the export of Regime items they had submitted to either the Commerce Department or the State Department from fiscal year 1997 through fiscal year 2000. We also interviewed officials with the Department of Defense’s Defense Threat Reduction Agency, the Department of State’s Bureau of Nonproliferation and Office of Defense Trade Controls, and the Department of Commerce’s Bureau of Export Administration. Additionally, we reviewed our prior reports and reports from the Inspectors General of the Departments of Defense and Commerce. We conducted our review from January through July 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after its issuance. 
At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs; Senate Committee on Foreign Relations; House Committee on International Relations; House Committee on Armed Services; the Secretaries of Commerce, Defense, and State; the Director, Office of Management and Budget; and the Assistant to the President for National Security Affairs. We will also make the report available to others upon request. If you or your staff have questions concerning this report, please contact me at (202) 512-4841. Others making key contributions to this report were Thomas J. Denomme, Anne-Marie Lasowski, Johana R. Ayers, Richard K. Geiger, and John Van Schaik. Forty-seven of the 196 items listed in the Missile Technology Control Regime (MTCR) Equipment, Software, and Technology Annex appear subject to the export control jurisdictions of both the Departments of Commerce and State. These 47 Regime items are described in table 2, along with an identification of where they are controlled on the Commerce Control List and the State Department’s U.S. Munitions List. In some cases, Regime items are described on both export control lists with either identical or overlapping performance parameters. For these items, we have identified the category and Export Control Classification Number where they appear on the Commerce Control List and the category where they appear on the U.S. Munitions List. The remaining items, which are software and technologies related to Regime production facilities and equipment, have been claimed by Department of State officials as subject to the State Department’s jurisdiction, although the items do not explicitly appear on the U.S. Munitions List but do appear on the Commerce Control List. For these items, we have indicated on the table where State Department officials claim these items are controlled on the U.S. Munitions List and where they appear on the Commerce Control List. 1. 
Text revised for clarification. 2. We believe the text reflects what Commerce Department officials told us during our review and is not substantively different from the Commerce Department’s proposed change. We, therefore, do not believe a revision is needed. 3. We did not revise the report to include a discussion of the license review process. We believe that jurisdictional determinations should be made before a company submits an export license application for review. Clarification of the regulations would help ensure that a company submits its license application for a Regime item to the appropriate Department. 4. Text revised. 5. As discussed in the report, the State Department did not agree that exporters should use the criteria contained in section 120.3 of the State Department’s regulations to determine whether an item is subject to the State Department’s export controls. In addition, the Commerce Department refers to section 120.3 as containing the definition of a defense article. However, the definition of a defense article appears in section 120.6 of the State Department’s regulations. According to the definition in section 120.6, a defense article is any item or technical data designated on the U.S. Munitions List. 6. We believe the text reflects what Commerce Department officials told us during our review and is not substantively different from the Commerce Department’s proposed change. We, therefore, do not believe a revision is needed. 7. The Commerce Department’s example highlights the difference between how the Departments of Commerce and State enforce sanctions. We do not believe additional clarification is needed. 8. As discussed in the report, some of the exporters we spoke with did not understand the export control system or certain terms in the regulations, thereby making it sometimes difficult to determine where to apply for a license to export Regime items. 
We point out in one example that a company submits license applications to the Commerce Department when uncertain as to which Department has jurisdiction, but do not discuss how Commerce licensing officers respond in such a situation. 9. Text revised for clarification. 1. We believe our draft report reflected information provided to us by State Department officials during the course of our review. However, we have revised the report to reflect the State Department’s position as indicated in its comments.
The U.S. government has long been concerned about the growing threat posed by the proliferation of missiles and related technologies that can deliver weapons of mass destruction. The United States is working with other countries through the Missile Technology Control Regime to control the export of missile-related items. The Departments of Commerce and State share primary responsibility for controlling exports of Regime items. The Commerce Department is required to control Regime items that are dual-use on its export control list--the Commerce Control List. All other Regime items are to be controlled by the State Department on its export control list--the U.S. Munitions List. However, the two departments have not clearly established which of them has jurisdiction for almost 25 percent of the items the United States agreed to control. The Departments disagree on how to determine which Regime items are controlled by Commerce and which are controlled by State. Consultations between the departments about respective control lists have not resolved these jurisdiction issues. Unclear jurisdiction may result in the same Regime item being subject to different export control restrictions and processes at the two departments.
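The "almost 25 percent" figure above follows directly from the counts reported elsewhere in this report (47 overlapping items out of the 196 items on the Regime Annex). A minimal sketch of that share calculation, using only the counts the report itself provides:

```python
# Counts taken from this report; nothing else is assumed.
overlapping_items = 47    # Regime items appearing on both export control lists
total_annex_items = 196   # items on the MTCR Equipment, Software, and Technology Annex

overlap_share = overlapping_items / total_annex_items * 100
print(f"{overlap_share:.1f}% of Annex items have unclear jurisdiction")
```

This yields roughly 24 percent, consistent with the report's characterization of "almost 25 percent."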
Threats to systems supporting critical infrastructure and federal information systems are evolving and growing. Advanced persistent threats—where adversaries possess sophisticated levels of expertise and significant resources to pursue their objectives repeatedly over an extended period of time—pose increasing risks. In 2009, the President declared the cyber threat to be “one of the most serious economic and national security challenges we face as a nation” and stated that “America’s economic prosperity in the 21st century will depend on cybersecurity.” The Director of National Intelligence has also warned of the increasing globalization of cyber attacks, including those carried out by foreign militaries or organized international crime. In January 2012, he testified that such threats pose a critical national and economic security concern. To further highlight the importance of the threat, on October 11, 2012, the Secretary of Defense stated that the collective result of attacks on our nation’s critical infrastructure could be “a cyber Pearl Harbor; an attack that would cause physical destruction and the loss of life.” The evolving array of cyber-based threats facing the nation poses threats to national security, commerce and intellectual property, and individuals. These threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or defective equipment that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks from a variety of sources. These sources include business competitors, corrupt employees, criminal groups, hackers, and foreign nations engaged in espionage and information warfare. Such threat sources vary in terms of the types and capabilities of the actors, their willingness to act, and their motives. Table 1 shows common sources of adversarial cybersecurity threats. 
These sources of cybersecurity threats make use of various techniques to compromise information or adversely affect computers, software, a network, an organization’s operation, an industry, or the Internet itself. Table 2 provides descriptions of common types of cyber attacks. The unique nature of cyber-based attacks can vastly enhance their reach and impact, resulting in the loss of sensitive information and damage to economic and national security, the loss of privacy, identity theft, and the compromise of proprietary information or intellectual property. The increasing number of incidents reported by federal agencies, and the recently reported cyber-based attacks against individuals, businesses, critical infrastructures, and government organizations have further underscored the need to manage and bolster the cybersecurity of our government’s information systems and our nation’s critical infrastructures. The number of cyber incidents affecting computer systems and networks continues to rise. Over the past 6 years, the number of cyber incidents reported by federal agencies to the U.S. Computer Emergency Readiness Team (US-CERT) has increased from 5,503 in fiscal year 2006 to 48,562 in fiscal year 2012, an increase of 782 percent (see fig. 1). Of the incidents occurring in 2012 (not including those that were reported as under investigation), improper usage, malicious code, and unauthorized access were the most widely reported types across the federal government. As indicated in figure 2, which includes a breakout of incidents reported to US-CERT by agencies in fiscal year 2012, improper usage, malicious code, and unauthorized access accounted for 55 percent of total incidents reported by agencies. In addition, reports of cyber incidents affecting national security, intellectual property, and individuals have been widespread, with reported incidents involving data loss or theft, economic loss, computer intrusions, and privacy breaches. 
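The growth rates quoted in this section can be checked with a short percent-change calculation. A sketch, using the US-CERT incident counts cited in this report (including the personally identifiable information figures discussed later):

```python
def pct_increase(old: int, new: int) -> int:
    """Whole-number percentage increase from old to new."""
    return round((new - old) / old * 100)

# Total incidents reported by federal agencies to US-CERT
print(pct_increase(5_503, 48_562))   # FY2006 -> FY2012: 782 percent

# Incidents involving personally identifiable information,
# a figure cited later in this report
print(pct_increase(10_481, 22_156))  # FY2009 -> FY2012: 111 percent
```

Both results match the percentages reported to US-CERT's figures as cited in the text.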
Such incidents illustrate the serious impact that cyber attacks can have on federal and military operations; critical infrastructure; and the confidentiality, integrity, and availability of sensitive government, private sector, and personal information. For example, according to US-CERT, the number of agency-reported incidents involving personally identifiable information increased 111 percent from fiscal year 2009 to fiscal year 2012—from 10,481 to 22,156. The federal government’s information security responsibilities are established in law and policy. The Federal Information Security Management Act of 2002 (FISMA) sets forth a comprehensive risk-based framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. In order to ensure the implementation of this framework, FISMA assigns specific responsibilities to agencies, the Office of Management and Budget (OMB), the National Institute of Standards and Technology (NIST), and inspectors general: Each agency is required to develop, document, and implement an agency-wide information security program and to report annually to OMB, selected congressional committees, and the U.S. Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. OMB’s responsibilities include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security in federal agencies (except with regard to national security systems). It is also responsible for reviewing, at least annually, and approving or disapproving agency information security programs. 
NIST’s responsibilities under FISMA include the development of security standards and guidelines for agencies that include standards for categorizing information and information systems according to ranges of risk levels, minimum security requirements for information and information systems in risk categories, guidelines for detection and handling of information security incidents, and guidelines for identifying an information system as a national security system. Agency inspectors general are required to annually evaluate the information security program and practices of their agency. The results of these evaluations are to be submitted to OMB, and OMB is to summarize the results in its reporting to Congress. In the 10 years since FISMA was enacted into law, executive branch oversight of agency information security has changed. As part of its FISMA oversight responsibilities, OMB has issued annual guidance to agencies on implementing FISMA requirements, including instructions for agency and inspector general reporting. However, in July 2010, the Director of OMB and the White House Cybersecurity Coordinator issued a joint memorandum (M-10-28) stating that the Department of Homeland Security (DHS) was to exercise primary responsibility within the executive branch for the operational aspects of cybersecurity for federal information systems that fall within the scope of FISMA. The OMB memo also stated that in carrying out these responsibilities, DHS is to be subject to general OMB oversight in accordance with the provisions of FISMA. In addition, the memo stated that the Cybersecurity Coordinator would lead the interagency process for cybersecurity strategy and policy development. Subsequent to the issuance of M-10-28, DHS began issuing annual reporting instructions to agencies in addition to OMB’s annual guidance. 
Regarding federal agencies operating national security systems, National Security Directive 42 established the Committee on National Security Systems, an organization chaired by the Department of Defense (DOD), to, among other things, issue policy directives and instructions that provide mandatory information security requirements for national security systems. In addition, the defense and intelligence communities develop implementing instructions and may add additional requirements where needed. An effort is underway to harmonize policies and guidance for national security and non-national security systems. Representatives from civilian, defense, and intelligence agencies established a joint task force in 2009, led by NIST and including senior leadership and subject matter experts from participating agencies, to publish common guidance for information systems security for national security and non-national security systems. Various laws and directives have also given federal agencies responsibilities relating to the protection of critical infrastructures, which are largely owned by private sector organizations. The Homeland Security Act of 2002 created the Department of Homeland Security. Among other things, DHS was assigned the following critical infrastructure protection responsibilities: (1) developing a comprehensive national plan for securing the critical infrastructures of the United States, (2) recommending measures to protect those critical infrastructures in coordination with other groups, and (3) disseminating, as appropriate, information to assist in the deterrence, prevention, and preemption of, or response to, terrorist attacks. Homeland Security Presidential Directive 7 (HSPD-7) was issued in December 2003 and defined additional responsibilities for DHS, sector-specific agencies, and other departments and agencies. 
The directive instructed sector-specific agencies to collaborate with the private sector to identify, prioritize, and coordinate the protection of critical infrastructures to prevent, deter, and mitigate the effects of attacks. It also made DHS responsible for, among other things, coordinating national critical infrastructure protection efforts and establishing uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. On February 12, 2013, the President issued an executive order on improving the cybersecurity of critical infrastructure. Among other things, it stated that the policy of the U.S. government is to increase the volume, timeliness, and quality of cyber threat information shared with U.S. private sector entities and ordered the following actions to be taken: The Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence are, within 120 days of the date of the order, to issue instructions for producing unclassified reports of cyber threats and establish a process for disseminating these reports to targeted entities. Agencies are to coordinate their activities under the order with their senior agency officials for privacy and civil liberties and ensure that privacy and civil liberties protections are incorporated into such activities. In addition, DHS’s Chief Privacy Officer and Officer for Civil Rights and Civil Liberties are to assess the privacy and civil liberties risks and recommend ways to minimize or mitigate such risks in a publicly available report to be released within 1 year of the date of the order. The Secretary of Homeland Security is to establish a consultative process to coordinate improvements to the cybersecurity of critical infrastructure. The Secretary of Commerce is to direct the Director of NIST to lead the development of a framework to reduce cyber risks to critical infrastructure. 
The framework is to include a set of standards, methodologies, procedures, and processes that align policy, business, and technological approaches to address cyber risks and incorporate voluntary consensus standards and industry best practices to the fullest extent possible. The Director is to publish a preliminary version of the framework within 240 days of the date of the order, and a final version within 1 year. The Secretary of Homeland Security, in coordination with sector-specific agencies, is to establish a voluntary program to support the adoption of the Cybersecurity Framework by owners and operators of critical infrastructure and any other interested entities. Further, the Secretary is to coordinate the establishment of a set of incentives designed to promote participation in the program and, along with the Secretaries of the Treasury and Commerce, make recommendations to the President that include analysis of the benefits and relative effectiveness of such incentives, and whether the incentives would require legislation or can be provided under existing law and authorities. The Secretary of Homeland Security, within 150 days of the date of the order, is to use a risk-based approach to identify critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security. Agencies with responsibilities for regulating the security of critical infrastructure are to consult with DHS, OMB, and the National Security Staff to review the preliminary cybersecurity framework and determine if current cybersecurity regulatory requirements are sufficient given current and projected risks. If current regulatory requirements are deemed to be insufficient, agencies are to propose actions to mitigate cyber risk, as appropriate, within 90 days of publication of the final Cybersecurity Framework. 
In addition, within 2 years after publication of the final framework, these agencies, in consultation with owners and operators of critical infrastructure, are to report to OMB on any critical infrastructure subject to ineffective, conflicting, or excessively burdensome cybersecurity requirements. Also on February 12, 2013, the White House released Presidential Policy Directive (PPD) 21, on critical infrastructure security and resilience. This directive revokes HSPD-7, although it states that plans developed pursuant to HSPD-7 shall remain in effect until specifically revoked or superseded. PPD-21 sets forth roles and responsibilities for DHS, sector-specific agencies, and other federal entities with regard to the protection of critical infrastructure from physical and cyber threats. It also identifies three strategic imperatives to refine and clarify functional relationships across the federal government (which includes two national critical infrastructure centers for physical and cyber infrastructure), enable efficient information exchange by identifying baseline data and systems requirements, and implement an integration and analysis function to inform planning and operational decisions. The directive calls for a number of specific implementation actions, along with associated time frames, which include developing a description of the functional relationships within DHS and across the federal government related to critical infrastructure security and resilience; conducting an analysis of the existing public-private partnership model; identifying baseline data and system requirements for the efficient exchange of information and intelligence; demonstrating a near real-time situational awareness capability for critical infrastructure; updating the National Infrastructure Protection Plan; and developing a national critical infrastructure security and resilience research and development plan. 
Finally, the directive identifies 16 critical infrastructure sectors and their designated federal sector-specific agencies. Our work and federal agency inspector general reports have identified challenges in a number of key areas of the federal government’s approach to cybersecurity, including those related to protecting the nation’s critical infrastructure. While actions have been taken to address aspects of these challenges, issues remain in each of the following areas. Designing and implementing risk-based cybersecurity programs at federal agencies. Shortcomings persist in assessing risks, developing and implementing security controls, and monitoring results at federal agencies. Specifically, for fiscal year 2012, 19 of 24 major federal agencies reported that information security control deficiencies were either a material weakness or significant deficiency in internal controls over financial reporting. Further, inspectors general at 22 of 24 agencies cited information security as a major management challenge for their agency. Most of the 24 major agencies had information security weaknesses in most of the following five key control categories: implementing agency-wide information security management programs that are critical to identifying control deficiencies, resolving problems, and managing risks on an ongoing basis; limiting, preventing, and detecting inappropriate access to computer resources; managing the configuration of software and hardware; segregating duties to ensure that a single individual does not control all key aspects of a computer-related operation; and planning for continuity of operations in the event of a disaster or disruption (see fig. 3). As we noted in our October 2011 report on agencies’ implementation of FISMA requirements, an underlying reason for these weaknesses is that agencies have not fully implemented their information security programs. 
As a result, they have limited assurance that controls are in place and operating as intended to protect their information resources, thereby leaving them vulnerable to attack or compromise. Accordingly, we have continued to make numerous recommendations to address specific weaknesses in risk management processes at individual federal agencies. Recently, some agencies have demonstrated improvement in this area. For example, we reported in November 2012 that during fiscal year 2012, the Internal Revenue Service (IRS) continued to make important progress in addressing numerous deficiencies in its information security controls over its financial reporting systems. Nevertheless, applying effective controls over agency information and information systems remains an area of significant concern. Establishing and identifying standards for critical infrastructures. As we reported in December 2011, DHS and other agencies with responsibilities for specific critical infrastructure sectors have not yet identified cybersecurity guidance applicable to or widely used in each of the sectors. Moreover, sectors vary in the extent to which they are required by law or regulation to comply with specific cybersecurity requirements. Within the energy sector, for example, experts have identified a lack of clarity in the division of responsibility between federal and state regulators as a challenge in securing the U.S. electricity grid. We have made recommendations aimed at furthering efforts by sector-specific agencies to enhance critical infrastructure protection. The recently issued executive order is also intended to bolster efforts in this challenge area. Detecting, responding to, and mitigating cyber incidents. DHS has made progress in coordinating the federal response to cyber incidents, but challenges remain in sharing information among federal agencies and key private-sector entities, including critical infrastructure owners. 
Difficulties in sharing information and the lack of a centralized information-sharing system continue to hinder progress. The February executive order contains provisions aimed at addressing these difficulties by, for example, establishing a process for disseminating unclassified reports of threat information. Challenges also persist in developing a timely cyber analysis and warning capability. While DHS has taken steps to establish a timely analysis and warning capability, we have reported that it had yet to establish a predictive analysis capability and recommended that the department establish such capabilities. According to DHS, tools for predictive analysis are to be tested in fiscal year 2013. Promoting education, awareness, and workforce planning. In November 2011, we reported that federal agencies leading strategic planning efforts for cybersecurity education and awareness had not identified details for achieving planned outcomes and that specific tasks and responsibilities were unclear. We recommended, among other things, that these agencies collaborate to clarify responsibilities and processes for planning and monitoring their activities. We also reported that only two of eight agencies in our review had developed cyber workforce plans, and only three of the eight agencies had a department-wide training program for their cybersecurity workforce. We recommended that these agencies take steps to improve agency and government-wide cybersecurity workforce efforts. Agencies concurred with the majority of our recommendations and outlined steps to address them. Supporting cyber research and development. The support of targeted cyber research and development (R&D) has been impeded by implementation challenges among federal agencies. 
In June 2010, we reported that R&D initiatives were hindered by limited sharing of detailed information about ongoing research, including the lack of a process for sharing results of completed projects or a repository to track R&D projects funded by the federal government. To help facilitate information sharing about planned and ongoing R&D projects, we recommended establishing a mechanism for tracking ongoing and completed federal cybersecurity R&D projects and their funding, and that this mechanism be used to develop an ongoing process to share R&D information among federal agencies and the private sector. As of September 2012, this mechanism had not been fully developed. We have also made recommendations to improve cloud computing security, which agencies have begun to implement. Further, we reported in June 2011 that federal agencies did not always have adequate policies in place for managing and protecting information they access and disseminate through social media platforms such as Facebook and Twitter and recommended that agencies develop such policies. Most of the agencies agreed with our recommendations. In September 2012, we reported that the U.S. Federal Communications Commission could do more to encourage mobile device manufacturers and wireless carriers to implement a more complete industry baseline of mobile security safeguards. The commission generally concurred with our recommendations. Managing risks to the global information technology supply chain. Reliance on a global supply chain for information technology products and services introduces risks to systems, and federal agencies have not always addressed these risks. 
Specifically, in March 2012, we reported that four national security-related agencies varied in the extent to which they had defined supply chain protection measures for their information systems and were not in a position to develop implementing procedures and monitoring capabilities for such measures. We recommended that the agencies take steps as needed to address supply chain risks, and the departments generally concurred. Addressing international cybersecurity challenges. As we reported in July 2010, the United States faces a number of challenges in this area, including providing leadership among federal agencies, developing a national strategy, coordinating policy among key federal entities, ensuring that international technical standards and policies do not impose unnecessary trade barriers, participating in international cyber-incident response efforts, investigating and prosecuting international cybercrime, and developing international models and norms for behavior. We recommended that the government develop a global cyberspace strategy to help address these challenges. While such a strategy has been developed and includes goals such as the development of international cyberspace norms, it does not fully specify outcome-oriented performance metrics or timeframes for completing activities. The federal government has issued a variety of documents over the last decade that were intended to articulate a national cybersecurity strategy. The evolution of the nation's cybersecurity strategy is summarized in figure 4. These strategy documents address aspects of the above-mentioned challenge areas. For example, they address priorities for enhancing cybersecurity within the federal government as well as for encouraging improvements in the cybersecurity of critical infrastructures within the private sector. 
However, as we noted in our February 2013 report, the government has not developed an overarching national cybersecurity strategy that synthesizes the relevant portions of these documents or provides a comprehensive description of the current strategy. The Obama administration's 2009 Cyberspace Policy Review recommended a number of actions, including updating the 2003 National Cybersecurity Strategy. However, no updated strategy document has been issued. In May 2011, the White House announced that it had completed all the near-term actions outlined in the 2009 policy review, including the update to the 2003 national strategy. According to the administration's fact sheet on cybersecurity accomplishments, the 2009 policy review itself serves as the updated strategy. The fact sheet stated that the direction and needs highlighted in the Cyberspace Policy Review and the previous national cybersecurity strategy were still relevant, and it noted that the administration had updated its strategy on two subordinate cyber issues, identity management and international engagement. Nonetheless, these actions do not fulfill the recommendation that an updated strategy be prepared for the President's approval. As a result, no overarching strategy exists to show how the various goals and activities articulated in current documents form an integrated strategic approach. In addition to lacking an integrated strategy, the government's current approach to cybersecurity lacks key desirable characteristics of a national strategy. In 2004, we developed a set of desirable characteristics that can enhance the usefulness of national strategies in allocating resources, defining policies, and helping to ensure accountability. Table 3 summarizes these key desirable characteristics. Existing cybersecurity strategy documents have included selected elements of these desirable characteristics, such as setting goals and subordinate objectives, but have generally lacked other key elements. 
The missing elements include the following: Milestones and performance measures. The government's strategy documents include few milestones or performance measures, making it difficult to track progress in accomplishing stated goals and objectives. This lack of milestones and performance measures at the strategic level is mirrored in similar shortcomings within key programs that are part of the government-wide strategy. For example, in 2011 the DHS inspector general recommended that the department develop and implement performance measures to track and evaluate the effectiveness of actions defined in its strategic plan, which the department had yet to do as of January 2012. Cost and resources. While past strategy documents linked certain activities to federal agency budget requests, none have fully addressed cost and resources, including justifying the required investment, which is critical to gaining support for implementation. Specifically, none of the strategy documents provided full assessments of anticipated costs and how resources might be allocated to meet them. Roles and responsibilities. Cybersecurity strategy documents have assigned high-level roles and responsibilities but have left important details unclear. Several GAO reports have likewise demonstrated that the roles and responsibilities of key agencies charged with protecting the cyber assets of the United States are inadequately defined. For example, the chartering directives for several offices within the Department of Defense assign overlapping roles and responsibilities for preparing for and responding to domestic cyber incidents. In an October 2012 report, we recommended that the department update its guidance on preparing for and responding to domestic cyber incidents to include a description of roles and responsibilities. 
Further, in March 2010, we reported that agencies had overlapping and uncoordinated responsibilities within the Comprehensive National Cybersecurity Initiative and recommended that OMB better define roles and responsibilities for all key participants. In addition, while the law gives OMB responsibility for oversight of federal information security, OMB transferred several of its oversight responsibilities to DHS. OMB officials stated that enlisting DHS to perform these responsibilities has allowed OMB to have more visibility into agencies' cybersecurity activities because of the additional resources and expertise provided by DHS. While OMB's decision to transfer these responsibilities is not consistent with FISMA, it may have had beneficial practical results, such as leveraging resources from DHS. Nonetheless, with these responsibilities now divided between the two organizations, it remains unclear how they are to share oversight of individual departments and agencies. Additional legislation could clarify these responsibilities. Linkage with other key strategy documents. Existing cybersecurity strategy documents vary in terms of priorities and structure, and do not specify how they link to or supersede other documents. Nor do they describe how they fit into an overarching national cybersecurity strategy. For example, in 2012, the Obama administration identified three cross-agency cybersecurity priorities, but no explanation was given as to how these priorities related to those established in other strategy documents. Given the range and sophistication of the threats and potential exploits that confront government agencies and the nation's cyber critical infrastructure, it is critical that the government adopt a comprehensive strategic approach to mitigating the risks of successful cybersecurity attacks. 
In our February report, we recommended that the White House Cybersecurity Coordinator develop an overarching federal cybersecurity strategy that includes all key elements of the desirable characteristics of a national strategy. Such a strategy, we believe, will provide a more effective framework for implementing cybersecurity activities and better ensure that such activities will lead to progress in securing systems and information. This strategy should also better ensure that federal government departments and agencies are held accountable for making significant improvements in cybersecurity challenge areas by, among other things, clarifying how oversight will be carried out by OMB and other federal entities. In the absence of such an integrated strategy, the documents that comprise the government’s current strategic approach are of limited value as a tool for mobilizing actions to mitigate the most serious threats facing the nation. In addition, many of the recommendations previously made by us and agency inspectors general have not yet been fully addressed, leaving much room for more progress in addressing cybersecurity challenges. In many cases, the causes of these challenges are closely related to the key elements that are missing from the government’s cybersecurity strategy. For example, the persistence of shortcomings in agency cybersecurity risk management processes indicates that agencies have not been held accountable for effectively implementing such processes and that oversight mechanisms have not been clear. It is just such oversight and accountability that is poorly defined in cybersecurity strategy documents. In light of this limited oversight and accountability, we also stated in our report that Congress should consider legislation to better define roles and responsibilities for implementing and overseeing federal information security programs and protecting the nation’s critical cyber assets. 
Such legislation could clarify the respective responsibilities of OMB and DHS, as well as those of other key federal departments and agencies. In commenting on a draft of the report, the Executive Office of the President agreed that more needs to be done to develop a coherent and comprehensive strategy on cybersecurity but did not believe producing another strategy document would be beneficial. Specifically, the office stated that remaining flexible and focusing on achieving measurable improvements in cybersecurity would be more beneficial than developing "yet another strategy on top of existing strategies." We agree that flexibility and a focus on achieving measurable improvements in cybersecurity are critically important and that simply preparing another document, if not integrated with previous documents, would not be helpful. The focus of our recommendation is to develop an overarching strategy that integrates the numerous strategy documents, establishes milestones and performance measures, and better ensures that federal departments and agencies are held accountable for making significant improvements in cybersecurity challenge areas. The Executive Office of the President also agreed that Congress should consider enhanced cybersecurity legislation that addresses information sharing and baseline standards for critical infrastructure, among other things. In summary, addressing the ongoing challenges in implementing effective cybersecurity within the government, as well as in collaboration with the private sector and other partners, requires the federal government to define and implement a coherent and comprehensive national strategy that includes key desirable elements and provides accountability for results. Recent efforts, such as the 2012 cross-agency priorities and the executive order on improving cybersecurity for critical infrastructure, could provide parts of a strategic approach. 
For example, the executive order includes actions aimed at addressing challenges in developing standards for critical infrastructure and sharing information, in addition to assigning specific responsibilities, with associated timeframes for completion, to specific individuals, thus providing clarity of responsibility and a means for establishing accountability. However, these efforts need to be integrated into an overarching strategy that includes a clearer process for oversight of agency risk management and a roadmap for improving the cybersecurity challenge areas in order for the government to make significant progress in furthering its strategic goals and lessening persistent weaknesses. Chairmen Rockefeller and Carper, Ranking Members Thune and Coburn, and Members of the Committees, this concludes my statement. I would be happy to answer any questions you may have. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov or Dr. Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. Other key contributors to this statement include John de Ferrari (Assistant Director), Richard B. Hung (Assistant Director), Nicole Jarvis, Lee McCracken, David F. Plocher, and Jeffrey Woodward. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Information Security: Federal Communications Commission Needs to Strengthen Controls over Enhanced Secured Network Project. GAO-13-155. Washington, D.C.: January 25, 2013. Information Security: Actions Needed by Census Bureau to Address Weaknesses. GAO-13-63. Washington, D.C.: January 22, 2013. Information Security: Better Implementation of Controls for Mobile Devices Should Be Encouraged. GAO-12-757. Washington, D.C.: September 18, 2012. 
Mobile Device Location Data: Additional Federal Actions Could Help Protect Consumer Privacy. GAO-12-903. Washington, D.C.: September 11, 2012. Medical Devices: FDA Should Expand Its Consideration of Information Security for Certain Types of Devices. GAO-12-816. Washington, D.C.: August 31, 2012. Cybersecurity: Challenges in Securing the Electricity Grid. GAO-12-926T. Washington, D.C.: July 17, 2012. Electronic Warfare: DOD Actions Needed to Strengthen Management and Oversight. GAO-12-479. Washington, D.C.: July 9, 2012. Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012. Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012. IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. Washington, D.C.: March 23, 2012. Information Security: IRS Needs to Further Enhance Internal Control over Financial Reporting and Taxpayer Data. GAO-12-393. Washington, D.C.: March 16, 2012. Cybersecurity: Challenges in Securing the Modernized Electricity Grid. GAO-12-507T. Washington, D.C.: February 28, 2012. Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011. Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. GAO-12-8. Washington, D.C.: November 29, 2011. Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. Washington, D.C.: October 6, 2011. Information Security: Weaknesses Continue Amid New Federal Efforts to Implement Requirements. GAO-12-137. Washington, D.C.: October 3, 2011. Personal ID Verification: Agencies Should Set a Higher Priority on Using the Capabilities of Standardized Identification Cards. GAO-11-751. Washington, D.C.: September 20, 2011. Information Security: FDIC Has Made Progress, but Further Actions Are Needed to Protect Financial Data. GAO-11-708. 
Washington, D.C.: August 12, 2011. Cybersecurity: Continued Attention Needed to Protect Our Nation’s Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011. Defense Department Cyber Efforts: DOD Faces Challenges in Its Cyber Activities. GAO-11-75. Washington, D.C.: July 25, 2011. Information Security: State Has Taken Steps to Implement a Continuous Monitoring Application, but Key Challenges Remain. GAO-11-149. Washington, D.C.: July 8, 2011. Social Media: Federal Agencies Need Policies and Procedures for Managing and Protecting Information They Access and Disseminate. GAO-11-605. Washington, D.C.: June 28, 2011. Cybersecurity: Continued Attention Needed to Protect Our Nation’s Critical Infrastructure and Federal Information Systems. GAO-11-463T. Washington, D.C.: March 16, 2011. Information Security: IRS Needs to Enhance Internal Control Over Financial Reporting and Taxpayer Data. GAO-11-308. Washington, D.C.: March 15, 2011. Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to Be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011. Information Security: National Nuclear Security Administration Needs to Improve Contingency Planning for Its Classified Supercomputing Operations. GAO-11-67. Washington, D.C.: December 9, 2010. Information Security: Federal Agencies Have Taken Steps to Secure Wireless Networks, but Further Actions Can Mitigate Risk. GAO-11-43. Washington, D.C.: November 30, 2010. Information Security: Federal Deposit Insurance Corporation Needs to Mitigate Control Weaknesses. GAO-11-29. Washington, D.C.: November 30, 2010. Information Security: National Archives and Records Administration Needs to Implement Key Program Elements and Controls. GAO-11-20. Washington, D.C.: October 21, 2010. Cyberspace Policy: Executive Branch Is Making Progress Implementing 2009 Policy Review Recommendations, but Sustained Leadership Is Needed. GAO-11-24. Washington, D.C.: October 6, 2010. 
Information Security: Progress Made on Harmonizing Policies and Guidance for National Security and Non-National Security Systems. GAO-10-916. Washington, D.C.: September 15, 2010. Information Management: Challenges in Federal Agencies' Use of Web 2.0 Technologies. GAO-10-872T. Washington, D.C.: July 22, 2010. Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010. Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010. Information Security: Governmentwide Guidance Needed to Assist Agencies in Implementing Cloud Computing. GAO-10-855T. Washington, D.C.: July 1, 2010. Cybersecurity: Continued Attention Is Needed to Protect Federal Information Systems from Evolving Threats. GAO-10-834T. Washington, D.C.: June 16, 2010. Cybersecurity: Key Challenges Need to Be Addressed to Improve Research and Development. GAO-10-466. Washington, D.C.: June 3, 2010. Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing. GAO-10-513. Washington, D.C.: May 27, 2010. Information Security: Opportunities Exist for the Federal Housing Finance Agency to Improve Control. GAO-10-528. Washington, D.C.: April 30, 2010. Information Security: Concerted Response Needed to Resolve Persistent Weaknesses. GAO-10-536T. Washington, D.C.: March 24, 2010. Information Security: IRS Needs to Continue to Address Significant Weaknesses. GAO-10-355. Washington, D.C.: March 19, 2010. Information Security: Concerted Effort Needed to Consolidate and Secure Internet Connections at Federal Agencies. GAO-10-237. Washington, D.C.: March 12, 2010. Information Security: Agencies Need to Implement Federal Desktop Core Configuration Requirements. GAO-10-202. Washington, D.C.: March 12, 2010. 
Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative. GAO-10-338. Washington, D.C.: March 5, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. Department of Veterans Affairs' Implementation of Information Security Education Assistance Program. GAO-10-170R. Washington, D.C.: December 18, 2009. Cybersecurity: Continued Efforts Are Needed to Protect Information Systems from Evolving Threats. GAO-10-230T. Washington, D.C.: November 17, 2009. Information Security: Concerted Effort Needed to Improve Federal Performance Measures. GAO-10-159T. Washington, D.C.: October 29, 2009. Critical Infrastructure Protection: OMB Leadership Needed to Strengthen Agency Planning Efforts to Protect Federal Cyber Assets. GAO-10-148. Washington, D.C.: October 15, 2009. Information Security: NASA Needs to Remedy Vulnerabilities in Key Networks. GAO-10-4. Washington, D.C.: October 15, 2009. Information Security: Actions Needed to Better Manage, Protect, and Sustain Improvements to Los Alamos National Laboratory's Classified Computer Network. GAO-10-28. Washington, D.C.: October 14, 2009. Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009. Information Security: Concerted Effort Needed to Improve Federal Performance Measures. GAO-09-617. Washington, D.C.: September 14, 2009. Information Security: Agencies Continue to Report Progress, but Need to Mitigate Persistent Weaknesses. GAO-09-546. Washington, D.C.: July 17, 2009. Information Security: Federal Information Security Issues. GAO-09-817R. Washington, D.C.: June 30, 2009. National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation's Posture. GAO-09-432T. Washington, D.C.: March 10, 2009. 
Information Technology: Federal Laws, Regulations, and Mandatory Standards to Securing Private Sector Information Technology Systems and Data in Critical Infrastructure Sectors. GAO-08-1075R. Washington, D.C.: September 16, 2008. Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008. Information Security: Federal Agency Efforts to Encrypt Sensitive Information Are Under Way, but Work Remains. GAO-08-525. Washington, D.C.: June 27, 2008. Privacy: Lessons Learned about Data Breach Notification. GAO-07-657. Washington, D.C.: April 30, 2007. The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s website (http://www.gao.gov). Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. To have GAO e-mail you a list of newly posted products, go to http://www.gao.gov and select “E-mail Updates.” The price of each GAO publication reflects GAO’s actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO’s website, http://www.gao.gov/ordering.htm. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. 
Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. Connect with GAO on Facebook, Flickr, Twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts. Visit GAO on the web at www.gao.gov. Please Print on Recycled Paper.
Federal government agencies and the nation's critical infrastructures have become increasingly dependent on computerized information systems and electronic data to carry out their operations. While this dependence creates significant benefits, it also introduces vulnerabilities to cyber threats. Pervasive cyber attacks against the United States could have a serious impact on national security, the economy, and public health and safety. The number of reported cyber incidents has continued to rise, resulting in data theft, economic loss, and privacy breaches. Federal law and policy assign various entities responsibilities for securing federal information systems and protecting critical infrastructures. GAO has designated federal information security as a high-risk area since 1997 and in 2003 expanded this to include cyber critical infrastructure protection. GAO was asked to testify on its recent report on challenges facing the government in effectively implementing cybersecurity and the extent to which the national cybersecurity strategy includes desirable characteristics of a national strategy. In preparing this statement, GAO relied on the report, as well as related previous work. The federal government continues to face challenges in a number of key areas in effectively implementing cybersecurity; these challenge areas include the following, among others: Designing and implementing risk-based cybersecurity programs at federal agencies. Shortcomings persist in assessing risks, developing and implementing security programs, and monitoring results at federal agencies. This is due in part to the fact that agencies have not fully implemented information security programs, resulting in reduced assurance that controls are in place and operating as intended to protect their information resources. Establishing and identifying standards for critical infrastructures. 
Agencies with responsibilities for critical infrastructure have not yet identified cybersecurity guidance widely used in their respective sectors. Moreover, critical infrastructure sectors vary in the extent to which they are required by law or regulation to comply with specific cybersecurity requirements. Detecting, responding to, and mitigating cyber incidents. Sharing information among federal agencies and key private-sector entities remains a challenge, due to, for example, the lack of a centralized information-sharing system. In addition, the Department of Homeland Security (DHS) has yet to fully develop a capability for predictive analysis of cyber threats. The federal cybersecurity strategy has evolved over the past decade, with the issuance of several strategy documents and other initiatives that address aspects of these challenge areas. However, there is no overarching national cybersecurity strategy that synthesizes these documents or comprehensively describes the current strategy. In addition, the government's existing strategy documents do not always incorporate key desirable characteristics GAO has identified that can enhance the usefulness of national strategies. Specifically, while existing strategy documents have included elements of these characteristics--such as setting goals and subordinate objectives--they have generally lacked other key elements. These include milestones and performance measures to gauge results; costs of implementing the strategy and sources and types of resources needed; and a clear definition of the roles and responsibilities of federal entities. For example, although federal law assigns the Office of Management and Budget (OMB) responsibility for oversight of federal government information security, OMB recently transferred several of these responsibilities to DHS. 
This decision may have had practical benefits, such as leveraging additional resources and expertise, but it remains unclear how OMB and DHS are to share oversight of individual departments and agencies. Additional legislation could clarify these responsibilities. Further, without an integrated strategy that includes key characteristics, the federal government will be hindered in making further progress in addressing cybersecurity challenges. In its report, GAO recommended that an integrated national strategy be developed that includes milestones and performance measures; costs and resources; and a clear definition of roles and responsibilities. It also stated that Congress should consider clarifying federal cybersecurity oversight roles through legislation.
Under the Social Security Act of 1935, as amended, SSA administers two federal disability programs—DI and SSI—intended to provide benefits to individuals with disabilities who are unable to work. The DI insurance program provides monthly cash benefits to individuals who have a Social Security work record; the amount of benefits is based on an individual's prior career earnings. The DI program is funded primarily through a payroll tax, required by the Federal Insurance Contributions Act (FICA), that is levied on most workers. The SSI program is a means-tested entitlement program that provides monthly benefits to aged, blind, or disabled individuals who have very limited income and assets. The SSI program is funded through general revenues. Unlike the DI benefit, the federal SSI benefit is a flat amount (adjusted for other income the individual may have) and is not related to prior earnings. During the 1970s, as the number of disability awards and program costs increased significantly for the DI program, Congress enacted legislation providing various work incentives to encourage beneficiaries to return to work and, potentially, leave the benefit rolls. To further these efforts, in 1980, Congress provided SSA with the authority to conduct demonstration projects to evaluate the effectiveness of policy alternatives that could encourage both DI and SSI beneficiaries to re-enter the workforce. Under this authority, SSA can temporarily waive DI and SSI program rules, including rules regarding program eligibility and benefit administration, in order to test the effect certain program changes would have on beneficiaries' return-to-work rates and the size of the DI and SSI benefit rolls. Because Congress has historically granted SSA DI demonstration authority on a temporary basis, it is subject to periodic review and renewal. Since first providing this authority in 1980, Congress has renewed it in 1986, 1989, 1994, 1999, and 2004. 
However, in 2004, Congress extended SSA's DI demonstration authority only through December 2005. As a result, SSA cannot initiate new DI demonstration projects but can continue those projects that were initiated on or before the December 2005 expiration date. SSA can, however, continue to initiate demonstration projects under its SSI authority. In 2008, SSA requested that Congress reauthorize its DI demonstration authority, and a bill was introduced to do so. SSA's DI demonstration projects—unlike other SSA research activities—are paid for via the DI trust fund. Therefore, SSA is not required to obtain congressional approval for DI demonstration expenditures, although it is required to obtain approval from the Office of Management and Budget for an annual apportionment of the trust funds for these demonstrations. Unlike the DI projects, SSI demonstration projects are funded from SSA's overall congressional research appropriation. Although SSA's DI and SSI demonstration authorities are separate, the agency's disability demonstration projects are sometimes jointly authorized when they involve both DI and SSI beneficiaries and applicants. When a demonstration project is conducted jointly under the DI and SSI demonstration authorities, funding for the project is split between trust fund (i.e., DI) and appropriated (i.e., SSI) sources. SSA's Office of Program Development and Research (OPDR) provides program analysis in support of the DI and SSI programs. As part of its responsibilities, OPDR—sometimes with the assistance of outside research organizations—identifies the requirements for individual disability program demonstration projects, including the basic objectives, scope, and methodological standards for these projects. OPDR project officers are primarily responsible for overseeing the projects to ensure that they meet SSA's technical and programmatic requirements. 
As we have previously reported, demonstration projects examining the impact of social programs aim to provide evidence of the feasibility or effectiveness of a new approach or practice and are inherently complex and difficult to conduct. Measuring outcomes, ensuring the consistency and quality of data collected at various site locations, establishing a causal connection between outcomes and program activities, and separating out the influence of extraneous factors can raise formidable technical and logistical problems. Although the legislation granting SSA its demonstration authority does not require the use of particular methodological approaches, SSA has historically recognized that the law's general requirements oblige the agency to conduct its projects in a rigorous manner that provides a reliable basis for making policy recommendations. According to professional research standards, a rigorous study should include a clearly stated research question and methodology, including plans for data collection and evaluation, as well as appropriate controls to determine if a relationship exists between observed outcomes and the program change under examination (see app. I). As part of our prior work related to SSA's DI demonstration authority, we reviewed two DI demonstration projects that SSA conducted in the late 1980s and early 1990s. At that time, we found that SSA had not used its demonstration authority to extensively evaluate a wide range of DI policy areas dealing with return to work and that the demonstration projects had little impact on SSA's and Congress's consideration of DI policy issues. 
To facilitate close congressional oversight and provide greater assurance that SSA will make effective use of its authority, we recommended that SSA develop a formal agenda for its demonstration projects, establish an expert panel to guide the design and implementation of its demonstration projects, and establish formal processes to ensure full consideration of demonstration project results. We also identified several matters for Congress to consider, including continuation of DI demonstration authority on a temporary basis, establishment of additional reporting requirements for demonstrations, and clearer specification of the methodological and evaluation requirements of demonstrations. Over the last decade, SSA has initiated 14 demonstration projects to test policy and program changes; it has completed 4 and cancelled 5, and 5 were in progress as of August 2008. In total, SSA had spent about $155 million on these demonstration projects as of April 2008, and officials anticipate spending another $220 million in the coming years on those projects currently in progress. However, these projects have yielded limited information for influencing program and policy decisions. We found that SSA did not conduct impact evaluations for two of its completed projects and cancelled five projects prior to conducting formal evaluations; thus, limited information is available. Since 1998, SSA has initiated 14 projects under its demonstration authority to test both DI and SSI program changes—6 related to DI, 6 related to SSI, and 2 examining both programs jointly (see table 1 for an overview of each project). As of April 2008, SSA had spent $80.3 million on its completed projects, $7.1 million on cancelled projects, and $68.2 million on those currently in progress. While SSA initiated 14 projects over the past 10 years, the agency has completed only 4 of them to date. 
These completed projects generally focused on reducing individuals' dependency on the SSI program, primarily testing program waivers and other changes in program administration, as outlined in SSA's SSI demonstration authority. We also found that SSA cancelled five projects during this period, citing significant challenges that would have limited the agency's ability to obtain reliable information from them. SSA had five projects in progress as of August 2008. These projects generally addressed topics outlined in the authorizing legislation for DI demonstrations and included strategies to return individuals to work and reduce the growth of certain subgroups of beneficiaries. For example, the legislation required projects to test various incentives to increase DI beneficiaries' work activity. In addition, the Ticket to Work and Work Incentives Improvement Act of 1999 provided demonstration authority for a benefit reduction, rather than complete benefit termination, when beneficiaries had earnings that exceeded a certain level. To address this provision, SSA initiated the Benefit Offset National Demonstration (BOND) shortly after passage of the statute. Another project in progress, the Mental Health Treatment Study, is focused on identifying strategies for providing mental health treatment and employment supports for certain DI beneficiaries with mental illnesses. As of April 2008, officials estimated that the total costs for the five projects currently in progress would be about $288 million—about $220 million more than the $68 million already expended (see table 2). Despite using its demonstration authority to examine various issues, SSA's demonstration projects have yielded limited information for influencing program and policy decisions. As required under its demonstration authority, SSA's demonstration projects should be conducted in such a way as to permit a thorough evaluation of the alternative methods under consideration. 
However, we found that SSA had not conducted impact evaluations—assessments of a project's effects compared to what would have happened in its absence—for two of its completed projects, the Disability Program Navigator and the Florida Freedom Initiative. Thus, no information about the impacts of the program and policy changes being tested was available for making decisions about disability policy. The Disability Program Navigator project, which SSA conducted with the Department of Labor (DOL), was not evaluated because the evaluation contractor could not meet SSA's data security requirements, which were established after the project was already in progress, and thus could not access the necessary data. SSA developed a plan to evaluate the Florida Freedom Initiative after the agency became concerned about the state's evaluation plans. However, SSA did not conduct an evaluation because staff at the state level conducting the project did not enroll enough participants to meet sample size requirements. Thus, there were not enough data available to conduct a reliable evaluation. Furthermore, SSA intended to evaluate the impacts of policies and programs being tested in five other projects but could not do so because the significant challenges those projects faced led SSA to cancel them in the early stages. Specifically, four of these projects were cancelled prior to implementation, and thus no data were available to conduct the evaluations of the policies and programs being tested. The other cancelled project—the Pediatric Medical Unit demonstration—was partially implemented but not evaluated because the project did not establish the comparison group needed for the analysis. The project also did not enroll enough participants at some implementation sites to meet the sample size requirements needed to generate data for a reliable evaluation. 
However, SSA was able to obtain some preliminary information on how the project's strategy appeared to be working at two site locations and is considering how to use it. Although SSA did conduct evaluations for two of the completed projects—the Homeless Outreach Projects and Evaluation (HOPE) project and the State Partnership Initiative project—we found that these projects also yielded little information about the impacts of the strategies being tested because the reported evaluation results could not reliably demonstrate the projects' effects. For example, an outcome evaluation of the HOPE project showed that disability program applicants assisted by the project received faster decisions from SSA about whether to allow or deny benefits; however, another federal agency initiated a similar project while the HOPE project was under way. As a result, SSA's evaluation results were weakened, in part because researchers could not separate the effects of the SSA project from the effects of the other federal project. While SSA did not obtain reliable impact evaluation results from this project, agency officials told us that they did obtain a great deal of information about the process of conducting this type of demonstration project. For the State Partnership Initiative, we found that SSA did conduct an impact evaluation when the project ended, but the data available at that time were incomplete, and thus the reported impacts may not be a reliable indicator of the project's long-term effects. SSA's contractors recommended that a final evaluation be conducted once all the data were collected to assess whether the preliminary results were valid. However, SSA management chose not to pursue further evaluation because the preliminary results indicated that the project was not successful at increasing earnings enough to allow individuals returning to work to exit the rolls and no longer be dependent on disability benefits. 
Nonetheless, SSA's contractors and agency officials said that lessons learned from implementing the State Partnership Initiative have influenced the agency's subsequent approach to returning beneficiaries to work. For example, SSA used the job descriptions of benefits planners, as well as data systems from this project, to design the agency's national Benefits Planning and Outreach program. SSA has also begun to obtain some information from one of the five projects currently under way. SSA has used preliminary results of the Benefit Offset - 4 State Pilot to aid in the design of the BOND project. Each of the four states conducting this pilot has provided an interim report to SSA detailing lessons learned from the implementation of this project. Because the pilot and BOND both test a benefit offset in conjunction with other DI program changes, SSA officials and the BOND project contractor believe that the states' experiences implementing this pilot will help SSA identify and resolve operational issues before rolling BOND out nationally. In addition, the four states have conducted preliminary impact evaluations for the pilot project and expect to complete final evaluations once the project's implementation and data collection phases are over. SSA also plans to conduct impact evaluations of the other demonstration projects it had in progress as of August 2008. While these projects have the potential to yield reliable results, it is too early to tell whether they will ultimately be useful for informing DI and SSI policy and program changes. These projects address issues outlined in the demonstration authority statutes and the disability programs more broadly, and SSA officials believe they will yield useful information. For example, SSA officials anticipate that the results of the Accelerated Benefits demonstration project could help policymakers determine whether to eliminate the 24-month waiting period for Medicare that DI beneficiaries encounter under current law. 
SSA officials also anticipate that the demonstration projects in progress could yield key information on how to improve outcomes for certain subgroups of beneficiaries. For example, SSA officials said that the Youth Transition Demonstration, which targets young people with disabilities as they transition from school to work, could identify strategies for improving the self-sufficiency of these beneficiaries and thus reduce their dependence on the disability programs. Most of SSA's current demonstration projects are expected to continue until 2010 or later before generating final evaluation results that could inform changes to disability program policy. SSA has taken steps to improve its demonstration projects, in part by applying more rigorous methodologies than it did for the projects it initiated prior to 1998; however, it has not fully implemented GAO's recommendations from 2004 and does not have written policies and procedures in place to ensure that projects are routinely reviewed and effectively managed so that they yield reliable information about their impacts. As a result, some projects faced challenges, such as low participation rates or data collection problems, that were significant enough to hinder the agency's ability to evaluate the projects' impacts as planned. In addition, without comprehensive written policies and procedures governing how SSA manages and operates its demonstration program, the projects' objectives, designs, and evaluation plans may be disrupted during times of organizational change. SSA has improved its demonstration projects by applying more rigorous methodologies than it did prior to 1998, contracting with professional researchers, and appointing new management for the program. Specifically, SSA is applying more rigorous evaluation methodologies to the projects it has initiated since 1998 than it did to the projects initiated in the late 1980s and early 1990s. 
At the time of our prior report, SSA officials acknowledged that the limited rigor of those earlier projects reduced their usefulness and indicated that the agency had placed a new emphasis on ensuring that its projects going forward would be more rigorously designed. Of the 14 projects that SSA has initiated since 1998, 13 were early enough in the planning or design stages at that time to give SSA an opportunity to make such improvements. Since that time, SSA has completed much of the design work for its 14 projects and provided us with detailed design information for 12 of them, enabling us to assess the rigor of these projects' designs for our current review. Our current analysis shows that SSA did use more rigorous methodologies for the projects initiated over the last decade than for its earlier projects. SSA is now using methodologies known as experimental or quasi-experimental designs, which are commonly used by research professionals conducting demonstration projects to estimate the impacts of program or policy changes. On the basis of our assessment, we determined that 11 of the 12 projects' designs were strong or reasonable when assessed against professional research standards (see table 3). We compared each project's design against GAO and recognized academic criteria for conducting evaluation research, which were also consistent with statutory requirements that DI projects be generally sufficient in scope and planned in such a way as to permit a thorough evaluation of the program or policy changes under consideration. We also determined that the projects currently under way could provide some reliable results if implemented and evaluated as designed. Despite this progress, we found that SSA did not always meet additional DI and SSI statutory requirements regarding the general applicability of the projects' results and the use of expert advice, respectively. 
The authorizing statute for DI demonstration projects requires that the results derived from the projects be generally applicable in the operation of the disability program. While one of the six DI projects, the BOND project, has been designed to yield nationally representative information about the impacts of the project, the statute does not require that results be applicable to all DI beneficiaries nationwide. However, the results should apply to a larger group of beneficiaries than just those who participated in the demonstration project, and SSA may be able to apply the results from three other DI projects—the Accelerated Benefits demonstration project, the Benefit Offset - 4 State Pilot, and the Mental Health Treatment Study—more generally because it plans to implement and evaluate the projects in a consistent manner at multiple sites. In addition, one of the two jointly authorized projects—the State Partnership Initiative—did not yield generally applicable results because the project was not implemented consistently across the participating states. The authorizing statute for SSI projects requires the Commissioner of SSA, before entering into a contract, grant, or cooperative agreement for a project, to obtain the advice and recommendations of specialists who are competent to evaluate the proposed project as to the soundness of its design, the possibilities of securing productive results, the adequacy of resources to conduct the proposed research or demonstration, and its relationship to other similar research or demonstrations already completed or in process. However, SSA obtained advice from experts for only two of the six SSI projects. Finally, SSA generally met other design criteria required by statute for the BOND project (see app. II). To further improve the demonstration projects' planning and methodological rigor, SSA has used external research professionals to work with the agency on the design, implementation, or evaluation of 12 of the 14 projects. 
SSA officials have acknowledged the need for additional expertise to design and implement methodologically rigorous demonstration projects. Thus, SSA has awarded, or planned to award, contracts and cooperative agreements to research consultants and universities with such expertise to evaluate 12 of its 14 projects (see app. III). In nine cases, these researchers also worked on the design or implementation of the projects. For example, for the Accelerated Benefits demonstration project, an SSA research contractor also designed how and where the project would be conducted and managed its implementation so that the data needed for the planned evaluation will be available. We also found that SSA and most of these researchers communicated regularly when collaborating on these projects, and the researchers submitted monthly or quarterly progress reports to SSA that included information on expenditures, progress, and areas of concern that needed to be addressed. SSA also appointed new program management in 2007. Since that time, the new management team has conducted an internal review of the 10 demonstration projects that were under way at the time of its appointment. SSA officials told us that all projects underwent a thorough review conducted by the Acting Associate Commissioner for Program Development and Research and others with appropriate expertise. Documents we obtained indicate that the review identified each project's strengths and weaknesses and assessed whether it was likely to yield reliable, useful results. For example, SSA considered whether a project's sample size and site selection were appropriate, whether it had been implemented in accordance with its design, and whether it faced any challenges that would prevent researchers from conducting a rigorous evaluation of its results. SSA concluded that five of its projects would continue, expecting that they were likely to yield reliable impact information; these five projects are currently in progress. 
However, the agency did need to make significant changes to strengthen the design of one of the five projects—the BOND project. At an earlier point in the design phase, SSA had expanded the BOND project's scope to include multiple components in addition to the benefit offset, such as a health benefits package. The new management team subsequently determined that the cost estimates and program complexity associated with several of those components raised questions about the feasibility of implementing the project, and it significantly scaled back the scope of the study. For the other five projects under review, SSA determined that they faced significant limitations or challenges—such as poorly chosen implementation sites and low participation—that made it highly unlikely they would yield reliable results, or that they would have duplicated other ongoing research (see table 4). Thus, SSA cancelled those projects in 2007 and 2008. Although the only information SSA has obtained from these projects is some lessons learned from the Pediatric Medical Unit project, the agency projected that it would have spent another $82 million had it not cancelled them. Based on the information SSA provided us about the challenges facing the projects and the expected future costs of conducting them, its decisions to cancel the five projects appear to have been data-driven and reasonable. Further, SSA consolidated its research expertise by merging the Office of Disability and Income Security Programs (DISP) with SSA's Office of Policy in February 2008, creating the Office of Retirement and Disability Policy. As of June 2008, each office's research unit remained intact and no formal organizational changes had been made to the demonstration program, but agency officials told us that the merger has facilitated communication and strengthened relationships between researchers within the agency. 
For example, experts from the former Office of Policy's research unit routinely review and provide input on the demonstration projects' designs and evaluations. While SSA is taking steps to generally improve its demonstration projects' designs and address specific project limitations, it does not have policies, procedures, and mechanisms to ensure that demonstration projects will yield reliable information about the impacts of the programs they are testing. According to internal control standards in the federal government, federal agencies should have policies, procedures, and mechanisms in place to provide reasonable assurance that a program's objectives are being achieved. However, we found that, as of August 2008, SSA had not fully implemented the recommendations we made in 2004 to help ensure the effectiveness of the demonstration projects. Specifically, SSA continues to lack a formal, comprehensive, long-term agenda for conducting demonstration projects; an expert panel to review and provide regular input on the design and implementation of demonstration projects from the early stages of a project through its final evaluation; and a formal process for fully considering the potential policy implications of its demonstration projects' results and fully apprising Congress of the results and their policy implications. We did find that SSA has developed a limited research agenda for its projects, but it lacks basic details about the projects, including their objectives, schedules, and costs. The agenda was also developed without broadly consulting key internal and external stakeholders to obtain their input. SSA officials told us that they do not plan to update the agenda to reflect that some of the projects have been cancelled. In contrast, other federal agencies that conduct research have published much more detailed research agendas and update them regularly. 
For example, the Department of Education's National Institute on Disability and Rehabilitation Research publishes a 5-year plan that outlines priorities for rehabilitation research, demonstration projects, training, and related activities; explains the basis for such activities; and is published for public comment before being submitted to Congress. In addition, agency officials said that SSA planned to continue using experts only on an ad hoc basis, citing potential conflicts of interest that could make it difficult for experts to serve on a standing panel as we had recommended. SSA established ad hoc panels or consulted with outside experts for 8 of its 14 demonstration projects, including 4 of those that are currently in progress. Many of SSA's project officers and contractors reported that this was a positive experience, and SSA management told us that it plans to continue using experts in the future. However, under this approach SSA may miss the opportunity to obtain advice more broadly on the demonstration program. For example, the panels and experts that SSA used were brought on board after the agency decided to initiate the demonstration projects; therefore, they were not in place early enough to help SSA consider whether a demonstration project or an alternative research approach was, in fact, the best way to meet the agency's needs. Furthermore, SSA did not regularly seek input from the Social Security Advisory Board or the National Council on Disability, which both play key roles in federal disability policy and could be in a position to advise SSA more broadly on the demonstration projects. In addition, our prior work found that SSA had not sufficiently provided information on the status and results of its demonstration projects to Congress. In our current review, we found that SSA regularly submits annual reports about the DI demonstration projects to its congressional oversight committees. 
While SSA meets its statutory requirements by submitting these reports, the information in them is generally limited to descriptions of the projects' objectives and the dates of upcoming milestones. Similarly, the information that SSA reports about its SSI demonstration projects is limited to brief descriptions in the agency's annual congressional budget justifications. Key information that could help Congress monitor the progress of the demonstration projects—including project costs, potential risks and obstacles to their success, or the policy implications of their results—was rarely included in the annual reports or budget justifications. However, SSA officials also told us that they sometimes share additional information with Congress about the demonstration projects. For example, SSA officials told us they met with congressional committees in October and November 2007 to share information about design plans for the BOND project. In addition to not fully addressing our prior recommendations, SSA does not have written policies and procedures governing how it should review and operate its demonstration program. For example, SSA does not have a written policy requiring management to review project officers' demonstration projects on a regular basis. Standards for internal control in the federal government state that managers should compare a program's actual performance against expected targets and analyze significant differences. Although the new program management team reviewed each of the demonstration projects at the time of its appointment, SSA does not have a written policy requiring such a review periodically throughout the design, implementation, and evaluation phases of each project. SSA's lack of a policy to systematically review each project on a periodic basis contributed to problems sometimes going undetected after projects were implemented, with the result that some projects did not yield the data needed for their evaluations. 
For example, because SSA was not actively involved in implementing or monitoring the Florida Freedom Initiative demonstration project, it was not in a position to take steps to ensure that the project proceeded as planned, and staff at the state level failed to enroll enough participants to generate data for the evaluation that SSA planned. Therefore, no evaluation was conducted for this project. In addition, we found that SSA does not have written procedures for its project officers to follow as they design, implement, and evaluate its demonstration projects. Such procedures could be used to ensure that standard research practices, such as conducting pilot phases and including internal and external stakeholders, are applied when planning and implementing the demonstration projects. Specifically, SSA does not require staff to regularly use pilots to test projects' underlying assumptions, operational logistics, or feasibility before they are implemented. As a result, SSA planned or conducted pilots or phased implementations for only 8 of its 14 projects, although GAO criteria for evaluation research emphasize the importance of conducting pilots, as they are a critical test of a project's design. At least four of the projects that did not include pilot phases experienced the type of logistical challenges that pilots are intended to identify. For example, the Homeless Outreach Projects and Evaluation demonstration project experienced start-up delays because of compatibility problems between the contractor's online data collection system and the computer systems at the 41 sites where the project was implemented. If SSA had conducted a pilot phase for this project, it might have detected these issues at a smaller number of sites and developed a plan to resolve them prior to implementing the full project. 
In addition, SSA does not have written procedures directing its project officers or contractors to routinely consult with internal and external stakeholders when planning the demonstration projects. We found that at least 11 of the 14 projects experienced challenges or limitations because SSA had not obtained sufficient input from, or coordinated effectively with, internal and external stakeholders. For example, SSA officials told us that they were aware that the Benefit Offset - 4 State Pilot project and the Benefit Offset National Demonstration project required a change to internal SSA processes for calculating DI benefits but did not coordinate with key internal stakeholders early on to determine how to make this change, and no systematic process was put in place. As a result, SSA had to calculate payments by hand for the Benefit Offset - 4 State Pilot, and the BOND project’s implementation has been delayed while SSA now works with its internal stakeholders to determine how to make the needed changes. Furthermore, SSA officials, contractors, and representatives from the various demonstration projects’ implementation sites told us that there was little input from internal stakeholders and that internal coordination problems existed on at least seven of these projects, including three of those that were cancelled. We also found that lack of coordination or communication with external stakeholders led to challenges in at least seven of these projects, including four of those that were cancelled. For example, coordination problems between the two contractors for the California Rise project resulted in the project’s components being designed in isolation from each other, which complicated the evaluation plans and eventually contributed to the project’s cancellation. 
In addition, SSA did not always include the prospective implementation sites in the planning and design of its projects, although they could have provided insight into the feasibility and logistical requirements of the project. While SSA has periodically provided direction informally to its project officers, some project officers told us that more formal guidance would have helped them to better understand what steps were necessary and expected, and we concluded from our discussions with others that such guidance would have been helpful. SSA recognizes that the program’s lack of written procedures is a limitation and is drafting a guidebook on standard research practices for staff to follow when planning and designing demonstration projects. Although an SSA official told us that this document is a work in progress, it appears that the guidebook will include key procedures for designing a project, such as identifying what data is needed for the evaluation and how it should be obtained. It also directs project officers to assemble a team for project development that includes staff from across the agency who will be able to collaborate and provide the input necessary to address the multiple components of the demonstration project. The draft guidebook also includes provisions for assembling a research panel composed of internal and external experts. This panel would review proposed research projects and identify those that present the most promising opportunities, taking into consideration the extent to which prior research has already addressed the topic. However, in its current form, the guidebook provides little direction for the implementation or evaluation phases of the demonstration projects, and SSA officials had not finalized it as of May 2008. 
Without comprehensive written policies and procedures governing how SSA manages and operates its demonstration programs, the project objectives, designs, and evaluation plans may be disrupted during times of organizational change. Because government operating conditions continually change, agencies should have mechanisms in place to identify and address any special risks arising from such changes, especially those caused by hiring new personnel to occupy key positions in the agency. However, because SSA lacks mechanisms such as a standing advisory panel or written policies and procedures to provide continuity for its demonstration program when organizational changes occur, it cannot guarantee that institutional knowledge about the projects is shared or that the impacts of such changes are considered as the projects progress. SSA has experienced several organizational changes since the first of these projects was initiated in 1998, including the relocation of the demonstration program from the Office of Policy to the Office of Disability and Income Support Programs in 2002, the replacement of program management in 2007, and the merger of the Office of Policy and the Office of Disability and Income Support Programs in 2008. At least six of SSA’s projects experienced schedule delays or cancellations, in part because newly appointed officials made significant changes to some projects or determined that others faced significant limitations or potential challenges and that it was not in the agency’s interest to continue them (see fig. 1). While certain management actions may be reasonable, SSA’s lack of written policies and procedures governing how such steps are taken leaves current and future projects vulnerable to disruption. For example, we found that the Benefit Offset National Demonstration project is still in the design phase after 9 years, during which time it has gone through numerous revisions by different program managers and was moved from one office to another. 
As of August 2008, an interagency working group was determining how to implement this administratively complex project. SSA has put the project’s implementation and evaluation on hold until this issue has been resolved. For over two decades, SSA has had the authority to conduct demonstration projects to test strategies that could address the challenges posed by the low rate of return to the workforce and the growing number of applicants and disabled beneficiaries. However, the agency has missed opportunities to identify ways to modernize DI and SSI programs and policies because it has generally not conducted the demonstration projects effectively. Since 1998 alone, SSA has spent over $150 million on 14 demonstration projects; yet these projects have generated limited information about the impacts of the strategies that were being tested. Although many of these projects were generally well designed, SSA’s lack of written policies, procedures, and mechanisms for managing and operating these projects is one of the key reasons the projects SSA has completed and cancelled to date were generally not implemented and evaluated in a way that yielded reliable, data-driven impact information. As a result, Congress, SSA, and other organizations that play a critical role in federal disability policy continue to lack key information about important issues, such as the impact of providing health care or employment supports to DI and SSI beneficiaries as a means to help beneficiaries achieve self-sufficiency and leave the rolls. SSA’s five demonstration projects currently in progress have the potential to identify solutions to some of SSA’s challenges. However, if SSA does not address the limitations in the way it manages and operates demonstration projects, these projects may encounter the same challenges that past projects have faced, and SSA could again have little to show for its efforts. 
Given that SSA estimates it will spend approximately $220 million over the next several years to complete these projects, it is important that steps be taken to make the projects less vulnerable to the challenges and organizational changes they could encounter in the future. SSA’s actions to review its demonstration projects and to begin drafting guidance to help staff better plan and design its projects are encouraging first steps. As SSA officials work toward finalizing this guidance, it is also necessary for the agency to address its lack of written policies and procedures for managing and operating its projects during their implementation and evaluation phases. SSA should also take action to fully implement the recommendations we made in 2004. Implementing those recommendations by fully developing its research agenda, establishing an expert panel to advise it about the projects on a regular basis, and improving its communications with Congress could help improve the effectiveness and transparency of its demonstration program going forward. To improve SSA’s management of its demonstration projects, we recommend that the Commissioner of Social Security direct the Deputy Commissioner for the Office of Retirement and Disability Policy to establish written policies, procedures, and mechanisms for managing and operating its demonstration projects that are consistent with standard research practices and internal control standards in the federal government, including those for coordinating with internal and external stakeholders and sharing information with Congress. We obtained written comments on a draft of this report from SSA, which are reproduced in appendix IV. We incorporated technical comments we received throughout the report, as appropriate. 
In response to our draft report, SSA generally agreed with our recommendation and acknowledged the need to develop a guidebook to assist its staff in the design, implementation, and evaluation phases of its demonstration projects. SSA further discussed its existing processes and written procedures for managing and reviewing its programs, including the demonstration project program. While these efforts are noteworthy, we continue to believe SSA needs to establish written procedures that incorporate professional research standards and internal control mechanisms for ensuring that the demonstration projects yield reliable information about their impacts. SSA considers its current guidebook a work in progress. Further, SSA stated that the agency has taken steps in recent years to address our prior recommendations. While we acknowledge SSA’s efforts, the agency needs to take additional steps to fully implement them. For example, SSA continues to lack a standing expert panel to review and provide regular input on the demonstration projects overall, even though it has employed subject matter experts for some of its demonstration projects. Although SSA officials have raised concerns about the difficulty of establishing an expert panel because research contractors serving on the panel would be precluded from working on individual projects, we continue to believe that such a panel could be established. As previously recommended, this panel should also include SSA’s key research personnel and outside disability experts in addition to researchers. We are sending copies of this report to the Commissioner of SSA and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-7215. 
Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To determine how SSA has used its current demonstration authority, we reviewed legislation authorizing the Social Security Administration (SSA) to conduct Disability Insurance (DI) and Supplemental Security Income (SSI) demonstration projects, prior GAO reports and reports by SSA’s Office of the Inspector General (OIG), and reports to Congress on the demonstration authority. We reviewed documents from SSA and from publicly available sources, including the Federal Register and reports by other research organizations. We interviewed current and former SSA officials in the Office of Disability and Income Security Programs, specifically, the Office of Program Development and Research (OPDR) and the Office of Research, Evaluation and Statistics, who had responsibility for, or involvement in, the demonstration projects. We also interviewed research contractors that worked on the demonstration projects and individuals from organizations that have a key role in federal disability policy. In addition, we interviewed staff from sites where SSA implemented 9 of the 14 demonstration projects. We selected sites that included ongoing, cancelled, and completed projects, and represented diverse geographic regions throughout the United States. To better understand SSA’s DI and SSI demonstration projects, we reviewed SSA documents describing the purpose, design, and status for all demonstration projects that were in progress, completed, or had been cancelled prior to completion. These documents included requests for proposals, project plans and schedules, interim or final project reports, and reports to Congress from 1998 to 2008. 
We used these documents to identify key characteristics of the projects, including the policy issues addressed, use of contractors, the authority used to conduct each project, project timelines, and information resulting from each project. We also examined the issues SSA tested or was in the process of testing in its demonstration program. We reviewed the authorizing statutes for the DI and SSI demonstration programs, as well as requirements for specific demonstrations included in the Ticket to Work and Work Incentives Improvement Act of 1999, to determine the extent to which the projects in SSA’s demonstration program address statutory requirements. For projects cancelled during this time period, we collected cancellation memos and other documentation to determine SSA’s reasons for the cancellations. To describe the costs associated with the program, we collected expenditure data from SSA for each project, including funds spent to date and total anticipated funding for the projects that are currently in progress. We assessed the reliability of the budget data by (1) manually checking the required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. When we found discrepancies, we brought them to the attention of SSA officials and worked with the officials to resolve them before conducting our analyses. Based on these efforts, we determined that the data were sufficiently reliable for the purposes of this report. To assess the extent to which SSA’s demonstration projects were designed in accordance with professional research standards and statutory criteria, we reviewed the most current information that SSA provided about each project, either an evaluation design or final evaluation report. 
We assessed these design and evaluation methodologies against professional research standards that are consistent with the methodological requirements of the authorizing statutes and with GAO and recognized academic criteria for conducting evaluation research. Key components of the professional research standards include methodological rigor of the project’s design and evaluation and their appropriateness given the purpose of the research (e.g., use of an experimental or quasi-experimental design for an impact evaluation); appropriate handling of any problems encountered when implementing the evaluation’s design, such as participant attrition or insufficient sample sizes; appropriate handling of any problems encountered with the data, such as missing values or variables; appropriate variables to ensure internal and external validity, given the evaluation’s design; appropriate data analysis and statistical models, such as frequencies or multivariate analysis, given the evaluation’s design; and overall strength of the evaluation design and analysis. To assess the appropriateness of each study’s methodology for answering the research questions, we developed two data collection instruments based on these professional research standards—one for evaluation designs and one for the final evaluation reports. We then examined the strengths and weaknesses of the evaluation designs and final reports, taking into consideration the project’s objectives, resource constraints, methodological approach, technical adequacy of plans for data collection and analysis, and, when available, the presentation of the findings. A social scientist read and coded each evaluation design or final report. A second social scientist reviewed each completed data collection instrument and the relevant documentation to verify the accuracy of every coded item. 
For each DI demonstration project, we also reviewed the reports to ascertain whether they met statutory requirements that the project’s results be broadly applicable to relevant segments of the DI beneficiary population, not just the project participants. For each SSI demonstration project, we also interviewed agency officials to determine whether SSA met its statutory obligation to obtain the advice and recommendations of specialists who are competent to evaluate the projects as to the soundness of their design, the possibilities of securing productive results, the adequacy of resources to conduct them, and their relationship to other similar research or demonstrations already in progress. We also identified key provisions of the demonstration authority statutes to assess SSA’s compliance with congressional reporting requirements. To address SSA’s planning and management of its demonstration projects, we interviewed SSA management and staff about the agency’s policies, guidance, and processes on developing and implementing demonstration projects, and collected supporting documentation where available. We assessed the adequacy of SSA’s internal controls using the criteria in GAO’s Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, dated November 1999. These standards, issued pursuant to the requirements of the Federal Managers’ Financial Integrity Act of 1982 (FMFIA), provide the overall framework for establishing and maintaining internal control in the federal government. Also pursuant to FMFIA, the Office of Management and Budget issued Circular A-123, revised December 21, 2004, to provide the specific requirements for assessing and reporting on internal controls. The internal control standards and the definition of internal control in Circular A-123 are based on GAO’s Standards for Internal Control in the Federal Government. 
In addition to meeting professional research standards and the mandated methodological requirements for the DI demonstration projects, the Benefit Offset National Demonstration (BOND) project is required to address a number of specific issues. We found that BOND met all of these requirements except the one to test the project’s effect on induced entry to and reduced exit from the DI rolls (table 5). In addition, while not included as part of the BOND demonstration, SSA plans to conduct research on variations in the amount of the offset as a proportion of earned income to determine the appropriate offset disregard, in accordance with the DI authorizing statute’s requirements. Projects conducted under DI demonstration authority: MDRC (prime contractor), Mathematica Policy Research (subcontractor). SSA awarded MDRC a contract in 2006 for the project’s design, implementation, and evaluation. MDRC’s design for the project and evaluation expanded on preliminary design criteria SSA developed between 2004 and 2005, and published in a Request for Proposal. MDRC finalized the design in collaboration with Mathematica Policy Research and SSA. MDRC and Mathematica began enrolling participants in the project in October 2007. Implementation will continue through 2011. MDRC is conducting the impact evaluation and Mathematica is conducting the process evaluation. SSA expects the evaluation to be completed in 2011. SSA awarded Abt Associates a contract in 2004 for the project’s design and will award Abt a second contract for its implementation and evaluation contingent on successful completion of the design. Abt’s design for the project and evaluation is based on preliminary design criteria SSA developed between 1999 and 2004 and published in a Request for Proposal. Abt is finalizing the design and planning for implementation in collaboration with SSA. 
SSA awarded Mathematica Policy Research a contract in 2006 to design and implement the health services part of the project and to design and conduct the evaluation. Mathematica’s design for the health services component and the evaluation is based on preliminary design criteria SSA developed between 2004 and 2006 and published in a Request for Proposal. Mathematica worked on the design in 2006 and 2007, and SSA reviewed and provided input on it. Mathematica expected to implement the project in 2007 and complete its evaluation in 2011, but SSA cancelled the project in 2007 before implementation began. Disability Research Institute (Rutgers University): SSA awarded the Disability Research Institute a contract in 2000 for Rutgers University to design, implement, and evaluate an Early Intervention pilot project. Rutgers proposed a design for the project and evaluation and submitted an evaluation design report for SSA’s approval in 2002. Rutgers planned to implement the project in 2003 and begin its evaluation in 2004, but the project remained under design until the contract expired in 2005. SSA modified Abt’s contract for the BOND project in 2005 to incorporate Early Intervention into BOND’s design, implementation, and evaluation. Abt proposed options to incorporate the Early Intervention project, but SSA later decided to cancel it and eliminated it from Abt’s contract in 2007. SSA awarded a contract to Westat in 2005 to implement and evaluate a project SSA designed between 2003 and 2005. Westat began working with the project sites in 2005 to prepare for implementation and began enrolling participants in the project in 2006. Implementation is scheduled to continue through 2010. Westat began the evaluation in 2008 and will complete it in 2011. DOL awarded a contract to the University of Iowa College of Law, Law Health Policy and Disability Center to provide technical assistance for the project’s implementation and to evaluate the project. 
SSA provided partial funding for the contract through an Interagency Agreement with DOL in 2002, 2003, and 2004. Implementation began in 2003. The University of Iowa did not evaluate the impact of the project because it could not meet SSA’s data security requirements in order to obtain data needed for the evaluation. SSA officials told us that DOL plans to evaluate the project under a new contract with Mathematica Policy Research, but SSA is not funding that evaluation. SSA awarded a contract to George Washington University in 2004 to conduct preliminary research related to the Early Identification and Intervention project. SSA designed the demonstration project between 2004 and 2006 and issued a Request for Proposal for its implementation in 2007, but did not award a contract to implement or evaluate the project. SSA funded a contract for Mathematica to design an evaluation for the project through an Interagency Agreement with the Department of Health and Human Services in 2004. Due to low enrollment, data needed for the evaluation was not collected by the State of Florida, which designed and implemented the project between 2003 and 2007, and the evaluation was not conducted. SSA awarded a contract to Westat in 2004 to evaluate a project SSA designed between 2003 and 2004. SSA implemented the project in 2004. Westat completed its evaluation in 2007. SSA planned to issue a Request for Proposal to hire a professional research organization to design, implement, and evaluate the project but cancelled it before it awarded the contract. Association of University Centers on Disabilities (AUCD): SSA awarded a contract to AUCD in 2006 to design, implement, and evaluate the project; however, AUCD’s design for the project began in 2004 under an extension of another contract. AUCD partially implemented the project in 2006 and 2007. SSA cancelled the project in 2008 before AUCD completed the implementation or evaluation. 
Projects jointly authorized under DI and SSI authorities: Virginia Commonwealth University (VCU) (prime contractor), Mathematica Policy Research (subcontractor). SSA awarded multiple contracts to VCU starting in 1998 to provide technical assistance to the states that were implementing the project and to conduct an evaluation. VCU completed its evaluation in 2006. SSA awarded three contracts executed by VCU’s subcontractor, Mathematica: one in 1998 to provide technical assistance to the states that were implementing the project, another in 1999 to design the evaluation, and one in 2003 to conduct the evaluation. Mathematica completed its evaluation in 2005. Mathematica Policy Research (prime contractor), MDRC (subcontractor): SSA originally designed and implemented the project in 2003, but due to methodological limitations, SSA awarded a contract to Mathematica in 2005 to redesign, implement, and evaluate the project. Mathematica began implementing the new design in 2006. Implementation will continue until 2013. Mathematica will complete its evaluation in 2014. In addition to the individual named above, key contributions to this report were made by Michael Collins, Assistant Director; Jason Holsclaw and Anne Welch, Analysts-in-Charge; Dana Hopings; Annamarie Lopata; and Jean McSween. Additional support was provided by Kenneth Bombara; Daniel Concepcion; Jennifer Cook; Cindy Gilbert; Sharon Hermes; Joanie Lofgren; Joel Marus; Mimi Nguyen; Patricia Owens; Daniel Schwimer; Kris Trueblood; Kathy White; Charles Willson; Elizabeth Wood; and Jill Yost. Federal Disability Programs: More Strategic Coordination Could Help Overcome Challenges to Needed Transformation. GAO-08-635. Washington, D.C.: May 20, 2008. Social Security Disability: Better Planning, Management, and Evaluation Could Help Address Backlogs. GAO-08-40. Washington, D.C.: December 7, 2007. High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. 
Federal Disability Assistance: Wide Array of Programs Needs to be Examined in Light of 21st Century Challenges. GAO-05-626. Washington, D.C.: June 2, 2005. Social Security Disability: Improved Processes for Planning and Conducting Demonstrations May Help SSA More Effectively Use Its Demonstration Authority. GAO-05-19. Washington, D.C.: November 4, 2004. SSA’s Rehabilitation Programs. GAO/HEHS-95-253R. Washington, D.C.: September 7, 1995. Impact of Vocational Rehabilitation Services on the Social Security Disability Insurance (DI) Program. GAO/HRD-T-88-16. Washington, D.C.: May 26, 1988. Social Security: Little Success Achieved in Rehabilitating Disabled Beneficiaries. GAO/HRD-88-11. Washington, D.C.: December 7, 1987. Social Security: Observations on Demonstration Interviews with Disability Claimants. GAO/HRD-88-22BR. Washington, D.C.: December 3, 1987. Social Security: Demonstration Projects Concerning Interviews with Disability Claimants. GAO/HRD-87-35. Washington, D.C.: February 19, 1987.
Since 1980, Congress has required the Social Security Administration (SSA) to conduct demonstration projects to test the effectiveness of possible changes to its Social Security Disability Insurance (DI) and Supplemental Security Income (SSI) programs that could decrease individuals' dependence on benefits or improve program administration. However, in 2004, GAO reported that SSA had not used its demonstration authority effectively. This follow-up report assesses (1) how SSA has used its demonstration authority to test DI and SSI program changes and what information these efforts have yielded and (2) what steps SSA has taken to improve the planning and management of its demonstration projects. To do this, GAO reviewed documents related to SSA's demonstration project management and the steps it took to implement the recommendations in the 2004 report, as well as the projects' designs, evaluations, and costs. GAO also interviewed officials from SSA, its contractors and project sites, and disability experts. Over the last decade, SSA has initiated 14 demonstration projects under its authority to test possible DI and SSI policy and program changes; however, these projects have yielded limited information for influencing program and policy decisions. Of the 14 projects, SSA has completed 4, cancelled 5, and had 5 projects in progress as of June 2008. In total, SSA spent about $155 million on its projects as of April 2008, and officials anticipate spending another $220 million in the coming years on those projects currently under way. Yet, these projects have yielded limited information on the impacts of the program and policy changes they were testing. SSA did not conduct impact evaluations for two of its completed projects, and intended to evaluate five other projects, but could not do so because significant challenges led SSA to cancel them. SSA officials believe the five projects currently under way will yield useful information, but it is too early to tell. 
SSA has taken steps to improve its demonstration projects but continues to lack management controls to ensure that the projects yield reliable information for making disability policy decisions. SSA has used methodological designs that GAO determined were strong or reasonable when assessed against professional research standards for 11 of its 14 projects. SSA has also used external research professionals to work with the agency on the design, implementation, or evaluation of 12 of the projects, and appointed new program management to oversee its demonstration program. However, as of August 2008, SSA had not fully implemented the recommendations GAO made in 2004 and did not have written policies and procedures governing how it should review and operate its demonstration project program. Specifically, SSA does not have written policies and procedures for its managers and project officers to follow as they design, implement, and evaluate its demonstration projects. Absent such protocols, SSA did not always apply standard research practices, such as conducting pilot phases or obtaining sufficient stakeholder input, which led to data limitations and project cancellations.
To satisfy our objectives, we discussed the IGs’ plans, policies, procedures, and material loss review (MLR) audit guidelines with IG officials and bank regulators. We assessed the MLR audit guidelines for their completeness, detail, and relevance to the IGs’ audit objectives. We compared the MLR audit guidelines to audit guidelines that we developed and used in our earlier reports on the causes of bank failures and the adequacy of bank supervision. In addition, we verified the information contained in the two MLR reports completed between July 1, 1993, and June 30, 1994, the first year that section 38(k) was in effect. A detailed description of our objectives, scope, and methodology is provided in appendix I. The IGs for FDIC, the Federal Reserve, and Treasury provided written comments on a draft of this report, which are discussed on pages 21 and 22 and are reprinted in appendixes III, IV, and V. We did our work between April and October 1994 in Washington, D.C.; Irvine, CA; San Francisco; and Denver in accordance with generally accepted government auditing standards. The FDIC, the Federal Reserve, and the Office of the Comptroller of the Currency (OCC) and the Office of Thrift Supervision (OTS)—which are part of the Department of the Treasury—share responsibility for regulating and supervising banks and thrifts in the United States. FDIC regulates state-chartered banks that are not members of the Federal Reserve system, while the Federal Reserve regulates state-chartered banks that are members of the system. OCC regulates nationally chartered banks, while OTS regulates thrifts. The regulators carry out their oversight responsibilities through, among other things, conducting annual examinations and issuing enforcement actions for unsafe and unsound banking practices. Congress amended FDIA in 1991 after the failures of about 1,000 banks between 1986 and 1990 had resulted in billions of dollars in losses to the Bank Insurance Fund (BIF). 
The amendments were designed largely to strengthen bank supervision and to help avoid a taxpayer bailout of the BIF similar to the nearly $105 billion in taxpayer funds that Congress provided between 1989 and 1993 to the Resolution Trust Corporation to protect the depositors of failed thrifts. The amendments require the banking regulators to take specified supervisory actions when they identify unsafe or unsound practices or conditions. For example, the regulators can close banks whose capital levels fall below predetermined levels. Congress also added section 38(k) to FDIA to (1) ensure that the regulators learn from any weaknesses in the supervision of banks whose failures cause material losses and (2) make improvements as needed in the supervision of depository institutions. The IGs for the Federal Reserve, FDIC, and the Treasury—which is responsible for auditing OCC and OTS—are officials responsible for identifying fraud, waste, and abuse and recommending improvements in agency operations. Each IG oversees a staff of auditors and investigators to assist in carrying out its mission. The staff engages in a range of activities, including criminal investigations, financial audits, and audits of the economy and efficiency of agency programs and operations. Section 38(k) of FDIA requires the IGs to review the failures of depository institutions when the estimated loss to a deposit insurance fund becomes “material”: i.e., when the loss exceeds $25 million and a specified percentage of the institution’s assets. (See table 1.) The MLR reports must be completed within 6 months of the date that it becomes apparent that the loss on a bank or thrift failure will meet the criteria established by the section. Before July 1, 1993, when the section’s requirements went into effect, the IGs had each done pilot studies of previous bank or thrift failures to gain experience in this type of audit. 
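The two-part materiality test and the 6-month completion clock described above can be illustrated with a short sketch. This is only an illustration, not part of the report's methodology: the asset-percentage threshold varies and is set out in table 1 of the report, so the 2 percent figure below is a hypothetical placeholder, as are the function names.

```python
from datetime import date

def is_material_loss(estimated_loss, institution_assets,
                     asset_pct_threshold=0.02):
    """Two-part test under section 38(k): the estimated loss to the
    insurance fund must exceed $25 million AND a specified percentage
    of the institution's assets. The 2% default is a hypothetical
    placeholder; the actual percentages appear in table 1 of the report."""
    return (estimated_loss > 25_000_000
            and estimated_loss > asset_pct_threshold * institution_assets)

def mlr_deadline(notification_date):
    """The MLR must be completed within 6 months of the date of the
    letter documenting that a material loss review must be initiated."""
    month = notification_date.month + 6
    year = notification_date.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # Clamp the day so short months (e.g., February) remain valid dates.
    return date(year, month, min(notification_date.day, 28))

print(is_material_loss(30_000_000, 1_000_000_000))  # True
print(mlr_deadline(date(1994, 8, 18)))              # 1995-02-18
```

For example, a $30 million estimated loss at a bank with $1 billion in assets would exceed both the $25 million floor and the placeholder percentage threshold; a $30 million loss at a much larger institution might clear the dollar floor but not the percentage test, and so would not trigger an MLR.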
Between July 1, 1993, and February 28, 1995, four banks that met the section’s requirements failed. The Federal Reserve IG issued MLR reports on JBT in Lakewood, CO, and Pioneer Bank in Fullerton, CA; the FDIC IG issued MLR reports on TBSD and The Bank of San Pedro, CA. The Treasury IG had not initiated any MLRs as of February 28, 1995, because there had not been any failures of nationally chartered banks or thrifts that met the section’s requirements. Our review indicated that all of the IGs made substantial efforts in preparation for meeting their MLR responsibilities under section 38(k). In a coordinated effort, the IGs entered into a SOU that outlined their approach to conducting the MLRs which, among other things, specified when IGs should initiate a MLR. The IGs also initiated pilot studies of depository institution failures before the effective date of section 38(k) (July 1, 1993) to develop and refine audit procedures and to familiarize their staffs with this type of review. Finally, the Federal Reserve and FDIC IGs hired additional staff with banking and financial audit expertise to meet anticipated demands for conducting MLRs. All three IGs enrolled their staff in relevant training courses. The FDIC, Federal Reserve, and Treasury IGs entered into a SOU in preparation for conducting MLRs. The SOU is intended to ensure that (1) statutory requirements for doing a MLR are met as effectively as possible, (2) the IGs’ work is consistent relative to MLRs, (3) mutual cooperation and efficient use of resources are maximized, and (4) privileged and confidential information contained in failed bank records is protected from unauthorized disclosure. The SOU was finalized on August 18, 1994. Among other provisions, the SOU details how FDIC’s Division of Finance (DOF) is to notify each IG office that a bank failure is expected to result in a material loss, thereby documenting that a MLR must be initiated. 
The FDIC IG is to be the primary liaison between the FDIC DOF and the Federal Reserve and Treasury IGs. The FDIC DOF is to notify the FDIC IG by letter when it “books” a material loss to BIF on a bank failure. If the bank was regulated by FDIC, the date of the letter starts the 6-month clock for the FDIC IG to complete its MLR. If the bank was regulated by the Federal Reserve or OCC, the FDIC IG is to notify the responsible IG by letter of the material loss. The date of this letter starts the 6-month clock for the Federal Reserve or Treasury IG office to complete its MLR. Each of the three IG offices we contacted conducted pilot studies on banks or thrifts that failed before July 1, 1993, the effective date of section 38(k). The officials we contacted said they did the pilot studies to develop policies and procedures to do MLRs after section 38(k) went into effect. The officials also said that they wanted to train their new staff in how to do this type of review and to establish contacts with officials in the bank regulatory agencies. The Treasury IG office did pilot studies on two California institutions, the Mission Viejo National Bank and the County Bank of Santa Barbara; the FDIC IG office did pilot studies on Coolidge Bank and Trust, located in Boston, and Union Savings Bank, located in Patchogue, NY; and the Federal Reserve IG office did a pilot study on the Independence Bank of Plano, located in Texas. The FDIC and Federal Reserve IGs also hired additional staff in 1992 and 1993 to assist in performing MLRs and to fully staff their agency oversight functions. The FDIC IG hired 12 additional staff members for a total of 37 staff members to conduct MLRs and other program audits. In addition to persons with auditing experience, the new staff included four banking specialists. These more experienced staff were hired to provide training to junior staff on banking examination procedures, including loan reviews to assess a bank’s asset quality. 
According to FDIC IG officials, all of the staff had enrolled in the FDIC’s examiner training program to learn more about the bank supervisory process. The Federal Reserve IG hired 5 additional staff members in 1993 to give it a total of 11 staff for completing MLRs and other audit work on bank supervision. These five staff persons have expertise in areas such as bank loan analysis, consumer compliance regulations, and auditing computer systems. The Federal Reserve IG had also sent these individuals to banking classes conducted by the American Institute of Certified Public Accountants and the Federal Reserve’s examiner classes. In addition, two IG officials enrolled in the American Bankers Association banking course at Stonier College in Delaware. At the time that these hirings occurred, numerous costly bank failures were projected to occur between 1993 and 1995. A FDIC IG official said that additional staff were needed to meet the anticipated workload associated with these potential MLRs. However, the number of bank failures declined substantially in 1993 and 1994 as a result of low interest rates and an improving economy. The number of bank failures fell from 122 in 1992 to 42 in 1993 and 13 in 1994. Only four banks failed between July 1, 1993, and February 28, 1995, with losses exceeding the statutory threshold, thereby prompting the IGs to initiate MLRs. FDIC and Federal Reserve IG officials we contacted said that MLRs represent only a part of their overall efforts to assess bank supervision. The officials plan to use the staff hired in 1992 and 1993 to do future MLRs and other audit work on the economy and efficiency of agency supervisory operations. Treasury IG officials said that the organization did not receive additional resources to hire more staff for conducting MLRs. Although the Treasury IG office plans to divert staff to work on MLRs as needed, other mandated work could limit their ability to do so. 
For example, the IG is required to do audit work on the Treasury Department’s compliance with the Chief Financial Officers Act of 1990 (CFO Act). The Treasury IG also has developed a comprehensive training module on how to conduct MLRs for its current staff. The module includes separate student and teacher instructions so Treasury IG staff with banking experience can train staff with limited banking experience. The modules are also designed to be self-taught and can be used without assistance. Some of the issues covered in the training module include an introduction to banking; a section on how to analyze and evaluate causes of bank failures; and an assessment of enforcement actions, including the effectiveness and timeliness of regulator enforcement actions. We reviewed the MLR guidelines that the IGs had developed, assessing their completeness and relevance for satisfying the MLR objectives, and compared the guidelines to our audit guidelines for investigating costly bank failures. On the basis of this analysis, we believe the IGs’ audit guidelines, if effectively implemented, represent a comprehensive approach to identifying the causes of bank failures and assessing the adequacy of their supervision. Although many provisions of the guidelines are similar, the Federal Reserve and FDIC IG audit guidelines differ from the Treasury IG guidelines in that they generally call for doing extensive loan portfolio reviews in every case when loan losses are determined to be the primary cause of failure. By contrast, the Treasury IG is to perform such loan reviews on a case-by-case basis. Our review of the IGs’ MLR audit guidelines found that the guidelines represent a comprehensive approach for assessing the causes of bank failures. Under established guidelines, senior IG officials are to maintain contact with the bank regulators to identify troubled banks whose failures could cause material losses.
The guidelines direct the IG staff to obtain and review basic documents about these troubled banks—such as examination reports dating back several years; enforcement actions; and historical financial data, such as asset growth over time. When a bank fails and causes a material loss, the IG staff are to interview responsible bank examiners and other regulatory officials and meet with former bank officials and FDIC closing personnel. Through reviewing these documents and interviewing knowledgeable officials, the IGs are to identify and document the major reasons for the banks’ failures. These reasons may include rapid growth; poor loan underwriting and documentation; loan concentrations, such as in real estate; and insider abuses. The Federal Reserve and FDIC IGs’ MLR audit guidelines differ from the Treasury IG audit guidelines in that they generally call for the staff to review failed bank loan portfolios when loan losses are determined to be the primary cause of failure. FDIC IG officials we contacted said that they need to review loan portfolios to arrive at an independent judgment as to why the banks failed. The officials said that they do not rely solely on documents generated by the bank regulators—such as examination reports and supporting workpapers—to determine the cause of failure. Although the Federal Reserve IG follows a similar procedure to the FDIC IG for selecting a sample of loans to review, Federal Reserve IG officials said that they perform loan reviews primarily to assess the quality of bank supervision. These Federal Reserve IG loan review procedures are discussed later in this section. Under the loan review audit guidelines, FDIC IG staff are to select a sample of the loans on the books of banks whose failures result in material losses. The sample is to include classified (troubled) loans and nonclassified loans as well as a mixture of commercial, real-estate, and consumer loans. 
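A small sketch can illustrate the sampling approach just described: a mix of classified (troubled) and nonclassified loans across commercial, real-estate, and consumer categories. The portfolio data and the per-cell sample size here are invented for illustration; nothing in the report specifies how the cells are sized.

```python
import random

random.seed(0)  # deterministic demo data

# Hypothetical failed-bank loan portfolio; the fields mirror the report's
# description of the sample mix (loan category plus classified status).
portfolio = [
    {"id": i,
     "category": random.choice(["commercial", "real-estate", "consumer"]),
     "classified": random.random() < 0.3}
    for i in range(500)
]

def stratified_sample(loans, per_cell=5):
    """Pick up to `per_cell` loans from each (category, classified) cell,
    so the sample mixes troubled and nonclassified loans of each type."""
    sample = []
    for category in ("commercial", "real-estate", "consumer"):
        for classified in (True, False):
            cell = [loan for loan in loans
                    if loan["category"] == category
                    and loan["classified"] == classified]
            sample.extend(cell[:per_cell])
    return sample

sample = stratified_sample(portfolio)
```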
Once a bank fails and causes a material loss, the staff are to visit the bank and review the sampled loans. The guidelines direct the staff to comment on, among other things, the quality of the bank’s loan underwriting standards. The IG staff are to use lending standards that the regulators have issued to bank examiners to assist in making these assessments. According to an FDIC IG official, the staff review the loan files to identify the management strategy or lending weaknesses that ultimately caused the bank to fail. By reviewing information in the loan files dating back several years, for example, he said the staff could determine whether bank management had adopted an aggressive growth strategy without adequate regard for maintaining credit standards. Unlike the FDIC IG guidelines, the MLR audit guidelines developed by the Treasury IG do not call for loan reviews even if loan losses were the primary cause of failure. As a result of the time and resources necessary to complete a MLR, the guidelines state that the Treasury IG staff should generally rely on OCC examination reports and workpapers and discussions with examiners to assess the causes of a bank’s failure. However, the guidelines do direct the IG staff to initiate loan reviews similar to those done by the FDIC and Federal Reserve IG staff in certain situations. The Treasury IG staff are to do a loan review if they determine that OCC’s records do not adequately address or develop the problem(s) that resulted in a bank’s failure. For example, it may be necessary to do a loan review or examine the bank’s records if it appears that insider abuse caused the bank to fail and the OCC examiners did not adequately develop the related issues. Our reviews of the IG MLR guidelines showed that they call for the IG staff to assess the timeliness and effectiveness of bank supervisory activities. 
Based on reviews of examination reports and supporting workpapers, as well as discussions with bank examiners, the IG staff are to assess the adequacy of the supervision of failed banks. For example, the IG staff are to determine whether the regulators complied with their policies and procedures in supervising the banks. Among other requirements, the IG staff are directed to determine whether the bank regulators selected an adequate sample of loans to evaluate at each bank examination and made determinations about the bank’s financial condition. The IG staff are also to review the supporting workpapers for each examination to determine whether the regulators had adequate support for their findings on the quality of each bank’s loan portfolio. The guidelines also direct the IG staff to determine whether the bank regulators had taken timely and effective enforcement actions—such as Memorandums of Understanding, Cease and Desist Orders (C&D), and Civil Money Penalties—against banks that engage in unsafe or unsound practices. For example, the Treasury IG guidelines direct the staff to focus their analysis on problems that the bank regulators identified during the course of examinations, particularly those that resulted in the bank’s failure. The IG staff are to determine what enforcement actions, if any, OCC took to get the bank to correct these problems and, if OCC did not take particular enforcement actions, to determine why not. Moreover, the guidelines call for the IG staff to evaluate OCC’s oversight of banks that are subject to enforcement actions to ensure that bank managers comply with the provisions of such actions. Once this analysis has been completed, the IG staff are to reach a conclusion about the timeliness and forcefulness of the OCC’s enforcement actions. The FDIC IG and Federal Reserve IG MLR guidelines contain similar provisions.
Federal Reserve IG officials we contacted said they primarily use the loan review process discussed earlier to assess the adequacy of the Federal Reserve’s examinations of bank lending activities. In the recent MLR audit of Pioneer Bank, the officials said they reviewed a sample of 40 large commercial and commercial real estate loans in the bank’s portfolio. The staff reviewed these loans in a manner similar to that done by bank examiners. For example, the staff determined, from a review of information in the files, whether they believed each loan should be classified as “substandard,” “doubtful,” or “loss.” The staff used the regulatory examination standards that were in place at the time the loans were originated to make these classifications. Next, the staff compared their loan review findings to the findings of the Federal Reserve examiners who actually examined the bank in the years before the bank’s failure. The IG officials said they tried to determine the reasons that their loan classifications differed from those of the Federal Reserve examiners and assess whether the examiners had adequate justification for their classifications. The IG staff concluded that the Federal Reserve examiners overlooked substantial weaknesses in the bank’s lending practices over the years. Although the Federal Reserve IG officials said this type of analysis is complicated and time consuming, they believe it is often necessary for assessing the overall quality of the bank’s supervision. However, FDIC IG officials said that when they conduct a loan review they use it for the purposes of determining the causes of bank failures rather than determining the adequacy of bank supervision. The officials said that they generally do not use loan reviews to assess bank supervision because it is difficult to replicate the conditions that existed when FDIC examined banks in the past. 
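The comparison the Federal Reserve IG officials describe, re-classifying each sampled loan and checking the result against the examiners' original call, reduces to a simple dictionary comparison. The loan identifiers and classifications below are hypothetical ("pass" stands in for a loan the examiners left unclassified).

```python
# Hypothetical IG re-classifications versus the examiners' original calls.
ig_calls = {"loan-01": "substandard", "loan-02": "loss", "loan-03": "pass"}
examiner_calls = {"loan-01": "pass", "loan-02": "doubtful", "loan-03": "pass"}

# Each disagreement becomes a candidate finding: the IG staff then ask
# whether the examiners had adequate justification for their call.
disagreements = {
    loan: {"examiner": examiner_calls[loan], "ig": ig_call}
    for loan, ig_call in ig_calls.items()
    if examiner_calls[loan] != ig_call
}
print(sorted(disagreements))  # ['loan-01', 'loan-02']
```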
During the first year that section 38(k) went into effect, the Federal Reserve and FDIC IGs each used the audit guidelines discussed above to prepare a MLR report. We believe that these reports fully describe and support the causes of the banks’ failures. The IGs also assessed the supervisory efforts of the bank regulators and recommended specific steps the regulators could take to improve their oversight efforts. However, the FDIC IG could have more fully evaluated the effectiveness of FDIC’s supervisory enforcement actions in the TBSD report. On December 27, 1993, the Federal Reserve IG issued a report on JBT, which failed on July 2, 1993. The report concluded that the bank failed as a result of a massive securities fraud perpetrated by its investment adviser; the fraud resulted in a $43 million loss for the bank. The IG staff decided not to do a loan review for the JBT investigation because trading in government securities, rather than loan losses, caused the bank’s failure. Instead, the IG staff focused its investigation on reviewing JBT’s securities trading activities and the Federal Reserve’s oversight of this trading. On April 29, 1994, the FDIC IG issued a report on TBSD, which failed on October 29, 1993. The report concluded that the bank failed as a result of poor loan underwriting, excessive real estate lending, high expenses, and poor management. As part of the MLR, FDIC IG staff reviewed a sample of 60 of TBSD’s loans, including 41 real-estate loans. The IG staff identified many of the deficiencies in the bank’s lending practices through the loan analysis. We reviewed the workpapers the IGs developed to support the JBT and TBSD reports to (1) ensure that the IGs complied with the MLR guidelines and (2) verify the basis for the reports’ conclusions about the causes of the banks’ failures. We also interviewed officials from the IGs’ offices, as well as examination officials from the Federal Reserve and FDIC, respectively.
On the basis of our review, we believe that the reports fully describe and support the causes for each bank’s failure. See appendix II for more information about each report. Our review of the JBT and TBSD reports and their supporting workpapers also found that the Federal Reserve and FDIC IGs generally complied with their guidelines on assessing the quality of bank supervision. As examples, the IGs obtained copies of bank examination reports dating back several years, collected economic data about the regions in which the banks were located, and interviewed bank regulators. In addition, IG audit teams traveled to the banks’ locations to review bank records and interview bank officials. The IGs also identified certain deficiencies in Federal Reserve and FDIC supervisory practices. For example, the Federal Reserve IG, in the JBT report, identified specific steps that the Federal Reserve could take to improve its oversight of bank securities trading activities. Moreover, the FDIC IG, in the TBSD report, recommended that FDIC evaluate on a case-by-case basis the need to collect better data about the quality of bank assets before approving the merger of weak banks. The FDIC IG further recommended that FDIC develop examination guidance to ensure that banks place reasonable limits on the financing of speculative real-estate projects. The IGs also obtained and reviewed copies of the enforcement actions taken against JBT and TBSD and summarized those actions in the MLR reports. However, we found that the FDIC IG did not fully evaluate whether FDIC ensured that TBSD complied with outstanding enforcement actions as provided in the MLR audit guidelines. We performed such an analysis of FDIC’s follow-up on the enforcement actions it took against TBSD. From our review, we determined that TBSD continued its aggressive real-estate lending activities even though FDIC had initiated an enforcement action intended to limit the bank’s exposure.
We also found that FDIC did not ensure that its enforcement actions were effective in getting bank management to better control its real-estate lending. These additional insights could have strengthened the FDIC IG’s recommendations by adding supervisory follow-up on the effectiveness of actions taken. In 1985, FDIC issued a C&D against TBSD that, among other provisions, required the bank to improve its lending standards. On May 9, 1988, FDIC lifted the C&D, but the bank continued to have problems, such as high loan losses and high overhead expenses. According to the TBSD report workpapers, in September 1988, a FDIC examiner recommended that FDIC sign a Memorandum of Understanding with TBSD that would require the bank to correct its lending and operational problems. However, in April 1989, FDIC agreed to a resolution by TBSD’s Board of Directors, in lieu of a Memorandum of Understanding, that required changes in the bank’s operations. For example, the resolution called on the bank to assess its loan exposure to the commercial real-estate construction industry and the financial consequences for the bank in the event of a downturn in that industry. The resolution further directed TBSD management to consider capping its commercial real-estate loans as a percentage of the bank’s total loans, assets, and capital. Despite the board resolution, bank management continued to pursue an aggressive commercial real-estate lending strategy, and FDIC did not take forceful actions to correct these problems for 2 years. The FDIC TBSD report showed that the bank’s construction and commercial real-estate loans increased by nearly 75 percent from about $47 million to $82 million between year-end 1988 and year-end 1991. Many of these real-estate loans contributed to the bank’s failure in 1993. California state banking regulators examined TBSD in 1989 and 1990 and gave its overall operations relatively high ratings: i.e., an overall CAMEL rating of “2” in 1990.
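As a quick arithmetic check on the loan growth figure just cited (dollar amounts in millions, taken from the report):

```python
# TBSD construction and commercial real-estate loans, year-end balances
# in millions of dollars, as reported.
loans_1988, loans_1991 = 47.0, 82.0

growth = (loans_1991 - loans_1988) / loans_1988
print(f"Growth, 1988-91: {growth:.0%}")  # about 74%, i.e. "nearly 75 percent"
```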
FDIC officials did not begin to discover the extent of TBSD’s loan loss problems until their examinations of the bank in late 1990 and in 1991. On the basis of these exam findings, FDIC signed a Memorandum of Understanding with TBSD in April 1991 that required the bank to improve its operations. TBSD also disregarded the board resolution’s provisions that it consider capping total commercial real-estate loans as a percentage of its assets and capital. Specifically, commercial real-estate loans grew from 35 percent of the bank’s total assets to 42 percent between year-end 1988 and year-end 1991. In the same period, commercial real-estate loans increased from 423 percent of TBSD’s total capital plus reserves to 597 percent. The insights we gained from this analysis could have been beneficial to the FDIC IG in assessing FDIC’s oversight of TBSD and in making its recommendations for improving bank supervision. For this report, we focused our assessment on the plans, policies, and audit guidelines that the IGs have developed for complying with the MLR mandate. Although the current MLR process produces important benefits in understanding the circumstances surrounding individual bank failures, the benefits so far may have had a limited impact in improving bank supervision overall. We do not make any recommendations for improving bank supervision in this report since only two MLR reports were issued in the first year that the mandate went into effect and only two more reports had been issued as of February 1995. Further, certain costs associated with producing MLR reports should be considered; these costs include IG financial and personnel expenditures, some temporary disruptions to IG office operations, and duplication of effort among investigators. In addition, our continued annual reviews of the MLR process may not add value beyond this initial assessment.
We conclude this section by providing a discussion of the reasons for and against various options that could be considered to address the MLR requirement. IG officials we contacted said that the JBT and TBSD reports had produced important benefits. These IG officials said that MLRs initiated to date had generated significant information about the causes of individual bank failures and the quality of these banks’ supervision and had provided an opportunity to train the IG staff in the bank supervisory process. A senior FDIC IG official also said that MLRs provided important information about other areas of bank management and supervision that may need to be evaluated. For example, he told us that as a result of the MLR investigation of the Bank of San Pedro, the FDIC IG may review banks’ use of “money desks” to fund their lending operations. In addition, an IG official said that the MLR process provides the office with a strong justification for assessing other aspects of bank supervision. For example, before the MLR requirement, the office had not yet established an overall program for assessing bank supervision. However, the official said that the MLR process provided the IG with a formal basis for assessing the regulators’ supervisory efforts and allowed the IG to establish working relationships with supervisory officials. Although the JBT and TBSD reports provided valuable information about the circumstances surrounding these banks’ failures, we are not making any recommendations for improving bank supervision on the basis of these reports. We do not believe that the two cases done during the first year or the total of four cases that had been completed as of February 1995 represent a sufficient base of evidence to arrive at conclusions about the overall quality of bank supervision. To make recommendations, we would need to review a larger sample of MLR reports.
This larger sample would allow us to identify any common problems or trends in bank regulation that need to be corrected. Some IG officials we contacted said that the MLRs completed to date provide little basis for identifying supervisory trends. For example, a FDIC IG official said that there has not been an adequate number of MLR reports issued to draw overall conclusions about the adequacy of bank supervision. In addition, a Federal Reserve IG official said that it is difficult to convince agency supervisory officials to accept recommendations contained in a MLR report since the recommendations would be based on only one bank’s failure. In its MLR report on the failure of the Pioneer Bank, the Federal Reserve IG chose not to make any recommendations for improving overall bank supervision even though the report identified certain supervisory weaknesses. It should be pointed out that MLRs are just one part of the IGs’ overall efforts to evaluate the quality of bank supervision nationwide. For example, in September 1994, the FDIC IG issued a report on FDIC’s efforts to implement provisions in the Federal Deposit Insurance Corporation Improvement Act (FDICIA) that require the prompt closure of capital-deficient banks. At the time of our review, the Federal Reserve IG was doing audits of the Federal Reserve’s examinations of commercial real-estate loans and its bank examination program. In addition, the Treasury IG was doing studies on OTS’ implementation of various sections of FDICIA and the effectiveness of OCC’s examinations of national banks. The benefits of the MLR reports completed to date have been achieved at certain costs to the IG offices. IG officials said that a significant amount of financial and personnel resources is needed to do MLRs. In the four MLRs initiated as of February 28, 1995, by the FDIC and Federal Reserve IGs, between 5 and 11 staff visited the banks’ premises within the first several weeks of their failures.
These staff conducted initial interviews with regulatory and bank personnel, reviewed bank examination records, and conducted loan reviews. For example, in one recent FDIC IG MLR, six staff spent 2 weeks reviewing loan files on the bank’s premises. A FDIC IG official said that the number of staff needed to perform MLRs should decline in the future as the organization gains experience in this type of work. IG officials also said that the resources necessary to complete a MLR report within the 6-month deadline can have temporary but disruptive effects on their normal operations. Treasury IG officials said that approximately 30 percent of their staff are already dedicated to assessing executive agency financial systems as required by the CFO Act. The Treasury IG officials estimate that 50 percent of their staff resources would be dedicated to CFO work by 1996. Therefore, an increasing MLR workload could hinder the Treasury IG’s ability to devote sufficient staff resources to meet its CFO Act and other audit obligations. Similarly, Federal Reserve and FDIC IG officials said that the resources necessary to complete the JBT and TBSD reports within the 6-month deadline caused certain operational challenges. For example, these officials said they had to pull staff from other ongoing studies to assist in the material loss investigations. We believe that these disruptive effects on the IGs’ operations could be magnified should there be a substantial increase in the number of costly bank failures, particularly in the case of the Treasury IG, which did not receive additional staffing to do MLRs. Although the Federal Reserve and FDIC IGs have increased their staffing in recent years, they could also face substantial pressures to complete MLR reports within 6 months should numerous banks fail simultaneously. For example, a FDIC IG official estimated that the organization could handle a maximum of about 14 MLRs per year. 
Another potential limitation of the current MLR process is that it does not always allow time for the IGs to review reports prepared by FDIC’s Division of Asset Services (DAS) investigators, who also investigate the causes of bank failures. Like MLR reports, these reports provide information about individual bank failures. However, Federal Reserve and FDIC IG officials questioned whether it would be beneficial to review these DAS reports and mentioned that these reports were often not available until months after banks failed. DAS is responsible for recovering a portion of FDIC’s outlays to resolve bank failures by selling each failed bank’s assets to private sector bidders. DAS also sends investigators to failed banks to determine whether FDIC could pursue civil claims against any bank officials culpable for the losses to help offset the costs of the failure. It is the policy of the DAS investigators to issue a report within 90 days of a bank failure—although the process can take longer—that documents their findings. This report is called a Post Closing Report (PCR). DAS has issued PCRs on JBT and TBSD. In our discussions, a Treasury IG official said that PCRs provide information that could be useful in doing MLR reports. The official also said it may make sense for the IGs to wait until DAS has issued PCRs before initiating MLRs. If the IGs initiated MLRs after PCRs, this could allow the IGs to avoid duplicating the work of the DAS investigators, and it would allow them to plan the scope of their MLR audit work on the basis of information contained in the PCRs. However, Federal Reserve and FDIC IG officials said that there is no significant relationship between MLRs and the DAS investigations. The officials said that DAS investigations are more narrowly focused than MLRs and, therefore, have limited use. For example, Federal Reserve IG officials pointed out that PCRs, unlike MLR reports, do not assess the quality of bank supervision.
IG officials also said that they consult with FDIC DAS investigators during the course of MLRs to obtain information. Finally, the FDIC and Federal Reserve officials said that PCRs were often not available until months after a bank failure. For example, the JBT PCR was completed nearly 4 months after the failure, and the TBSD PCR was completed about 7 months after the failure. As discussed earlier, MLR reports must be completed within 6 months of a bank’s failure. Although the PCR’s primary purpose is to assess whether FDIC should pursue civil actions against former bank officials, the reports contain some information that is similar to that found in MLR reports. For example, we reviewed PCRs that were issued for JBT and TBSD. Like the MLR reports, these PCRs provide historical information about each bank and the results of regulator exam findings. The PCRs also established the causes of the banks’ failures and documented the provisions in any enforcement actions taken against the banks. As discussed earlier in this report, the IGs have generally positioned themselves effectively to meet their responsibilities under the MLR requirement. In addition, if bank failures continue at a relatively low rate as projected over the next several years, MLR reports will not provide either the IGs or us with an adequate basis for assessing the overall quality of bank supervision and making needed recommendations for improvement. Therefore, our annual reviews of the MLR process may no longer add value to either the MLR or supervisory processes. Several options are available concerning the current MLR process that we discussed with IG officials. Specifically, the current MLR process could be maintained, repealed, or amended so that the IGs have more discretion on the number and timing of MLRs to perform each year. Table 2 presents several reasons for and against each of these options. 
Congress added section 38(k) to FDIA so that the regulators would learn from any weaknesses in the supervision of costly bank failures and possibly avoid such weaknesses in the future. We believe that MLR reports can provide important information about individual bank failures and that the IGs have generally positioned themselves effectively to meet their responsibilities. However, the current MLR requirements may not be the most cost-effective means of achieving improved bank supervision. The Federal Reserve, FDIC, and Treasury IGs have made substantial efforts in preparation for performing MLRs as required by section 38(k). The IGs have also developed detailed and comprehensive MLR guidelines that, if effectively implemented, are adequate for meeting the IGs’ responsibilities under section 38(k). The Federal Reserve and FDIC IGs have each used the guidelines to prepare MLR reports that fully described the causes of the JBT and TBSD failures. However, the FDIC IG could have gained greater insights on bank supervision if it had expanded its analysis of the effectiveness of the enforcement actions that FDIC took against TBSD. Although the MLR process can produce important benefits in understanding the circumstances surrounding individual bank failures, these benefits have been limited and are achieved at certain costs. IG officials we contacted said that the two MLR reports completed during the first year that section 38(k) went into effect did not provide an adequate base of evidence to assess the overall quality of bank supervision. The limited benefits may have been outweighed by the costs associated with producing the MLR reports, which include IG personnel and financial expenditures; temporary disruptions in IG office operations; and potential duplication of effort among the IGs and FDIC DAS. 
However, if the IGs had more flexibility to determine the number and timing of MLRs to perform each year, they could (1) better allocate their resources, particularly in years when there are numerous bank failures; (2) potentially take advantage of PCRs issued by DAS; and (3) perform broader analyses of the overall quality of bank supervision. A more flexible approach could still maintain the original intent of section 38(k), which was to hold the bank regulators accountable for their actions. Thus, Congress may wish to consider whether the currently required approach remains the best available. Similarly, we believe that requiring us to perform annual reviews of MLRs may no longer add sufficient value to the MLR or bank supervisory processes to warrant continuation. We do not make any recommendations for improving overall bank supervision in this report because we agree with the IG officials that the limited number of reports produced so far does not provide an adequate base for identifying improvements. We recommend that the Inspector General of FDIC, in future MLR reports, take steps to more fully assess the effectiveness of FDIC’s enforcement actions. Congress may wish to consider whether the current MLR requirement, which requires the IGs to report on bank and thrift failures costing the deposit insurance funds in excess of $25 million, is a cost-effective means of achieving the requirement’s intended benefit—to help improve bank supervision. If it determines that the requirement is not cost effective, Congress can choose to either repeal or amend the requirement. Of these options, amending the current MLR requirement may be more desirable because it would allow the IGs to continue their bank supervision work and also provide them greater flexibility in managing their resources. Congress may also wish to consider repealing our mandate to review MLRs on an annual basis. 
The IGs for the Federal Deposit Insurance Corporation, the Board of Governors of the Federal Reserve System, and the Department of the Treasury provided written comments on our draft report, which are reprinted in appendixes III, IV, and V. The three IGs agreed with the report’s overall conclusions that the IGs have effectively positioned themselves to carry out their responsibilities and have developed comprehensive and detailed audit guidelines. In response to our recommendation, the FDIC IG agreed to take steps to more fully evaluate the effectiveness of FDIC’s supervisory enforcement actions in future MLRs, even though he did not necessarily agree that our analysis of TBSD’s compliance with FDIC enforcement actions provided additional insights into the effectiveness of FDIC’s supervision of TBSD. The IGs also agreed that Congress should consider amending section 38(k) of FDIA so that the IGs have more discretion on the number, timing, and scope of MLRs to initiate each year. The Federal Reserve IG stated that, although MLR reports may not be the most cost-effective means of achieving improved bank supervision, they allow the staff to focus their analysis on the implementation of bank supervision policies and procedures over time relative to a particular bank. He also said that the Federal Reserve IG office may be able to make broader recommendations with respect to bank supervision as additional MLRs are completed and that additional flexibility with regard to the MLR requirement would allow the organization to better manage its resources while preserving the intent of the legislation. The IGs also provided comments that were generally technical in nature and are incorporated in this report where appropriate. We are sending copies of this report to the Inspectors General for the Federal Deposit Insurance Corporation, the Board of Governors of the Federal Reserve System, the Department of the Treasury, and other interested parties. 
We will also make copies available to others upon request. This report was prepared under the direction of Mark J. Gillen, Assistant Director, Financial Institutions and Markets Issues. Other major contributors to this review are listed in appendix VI. If you have any questions about this report, please call me at (202) 512-8678. In accordance with section 38(k) of the Federal Deposit Insurance Act as amended, our objectives were to (1) assess the adequacy of the preparations, procedures, and audit guidelines that the Inspectors General (IG) have established for performing material loss reviews (MLR) to ensure compliance with their responsibilities under the section; (2) verify the information contained in the MLR reports upon which the IGs based their conclusions; (3) recommend improvements, if necessary, in bank supervision based on a review of the MLR reports issued between July 1, 1993, and June 30, 1994; and (4) assess the economy and efficiency of the current MLR process. To accomplish these objectives, we interviewed staff from the Federal Reserve, Federal Deposit Insurance Corporation (FDIC), and Treasury IG offices on the plans, policies, and procedures they had established to perform MLRs, including their audit guidelines, staffing, and training programs for employees assigned to perform MLRs. We also conducted a roundtable discussion with representatives from each of the IG offices, who shared their views on some of the MLR issues and concerns. Additionally, we met with bank supervision officials and bank examiners from the Federal Reserve and FDIC to obtain their views on the MLR process. We also reviewed the legislative history of the Federal Deposit Insurance Corporation Improvement Act of 1991, pilot studies completed by the IGs, and our previous reports on bank failures and bank supervision. To assess the adequacy of the IGs’ MLR audit guidelines, we reviewed them for their completeness and relevance to the MLR objectives. 
We also compared the MLR audit guidelines to audit guidelines that we had developed for investigating costly bank failures. We developed these guidelines to (1) understand why so many depository institutions failed in the late 1980s and early 1990s, causing substantial Bank Insurance Fund (BIF) losses, and (2) recommend improvements in depository institution supervision. These guidelines produced report findings that were praised as complete and accurate even by bank regulators whose examination practices were sometimes criticized in the reports. The guidelines involve obtaining and reviewing copies of historical financial data, which is available from the bank regulators, showing information such as the growth in the bank’s loan portfolio over time; regulatory examinations and their supporting workpapers that had been done on a particular bank 5 to 10 years before its failure; enforcement actions that the regulators had taken against the bank for unsafe and unsound practices, such as Memorandums of Understanding or Cease and Desist Orders; correspondence between the bank and the regulator primarily responsible for its supervision; and the Post Closing Reports that both identify the causes of bank failures and determine whether FDIC should pursue civil claims against bank officials to help compensate the BIF for any losses incurred in resolving the failures. Moreover, we reviewed the two MLR reports issued by the Federal Reserve IG and FDIC IG during the first year that section 38(k) went into effect: the reports on Jefferson Bank and Trust in Colorado and The Bank of San Diego in California, respectively. We substantiated the accuracy of the MLR reports’ findings and recommendations on the causes of the banks’ failures by generally following our audit guidelines discussed above. We reviewed the reports’ supporting workpapers and interviewed Federal Reserve and FDIC examination officials. 
We also reviewed the two MLR reports to identify potential recommendations that we could make to improve the overall quality of bank supervision. Although we reviewed the MLR reports on The Bank of San Pedro and Pioneer Bank, we did not verify the information contained in these reports because they were issued in the second year that section 38(k) went into effect. We did our work between April and October 1994 in Washington, D.C.; Irvine, CA; San Francisco; and Denver in accordance with generally accepted government auditing standards. In the first year that section 38(k) went into effect—July 1, 1993, to June 30, 1994—two banks failed and caused material losses. The Federal Reserve Inspector General (IG) issued a material loss review (MLR) report on the Jefferson Bank and Trust (JBT), and the Federal Deposit Insurance Corporation (FDIC) IG issued a MLR report on The Bank of San Diego (TBSD). We read these reports, reviewed their supporting workpapers, and interviewed Federal Reserve IG and FDIC IG officials and agency officials responsible for the supervision of these banks. This appendix summarizes the MLR reports’ findings and recommendations. On December 27, 1993, the Federal Reserve IG issued a MLR report on JBT of Lakewood, CO, which failed on July 2, 1993. The report concluded that JBT failed as the result of a massive securities fraud perpetrated by its investment adviser. The investment adviser diverted approximately $43 million worth of JBT’s government securities for his own benefit and provided fictitious records to the bank so that it was not aware of the securities’ diversion. In December 1991, JBT liquidated its account with the investment adviser. However, JBT was subsequently sued by the Iowa Trust, another client of the investment adviser, which claimed that a portion of its securities had been diverted to pay JBT. A U.S. 
District Court ruled in favor of Iowa Trust, and JBT was forced to turn over approximately $43 million in government securities. Colorado closed JBT on July 2, 1993, because the bank was no longer solvent. The investment adviser pled guilty to defrauding the bank and other investors and was sentenced to a federal prison term. The Federal Reserve IG’s report on JBT also recommended steps that the Federal Reserve could take to improve its oversight of bank securities trading. For example, the report recommended that the Federal Reserve ensure compliance with a policy the IG contends limits the percentage of assets, such as government securities, that a bank can keep with a securities dealer. This policy, which is one of the recommendations for a bank’s selection of a securities dealer included in the Board of Governors’ Commercial Bank Examination Manual, sets guidelines for limiting the aggregate value of securities a bank should keep with a selling dealer. The IG concluded that if JBT had followed this recommendation with respect to the government securities diverted by its investment adviser, the bank would have sustained a loss of approximately $1.6 million instead of its loss of approximately $43 million. The Board disagreed with the IG’s recommendation, contending that the policy does not apply to pure safekeeping arrangements, but only to those involving a credit risk arising from transactions between a bank and a securities dealer. Specifically, the Board maintained that the policy is an attempt to “limit banks’ exposures to questionable securities transactions involving credit risks—not safekeeping risks.” Thus, according to the Board, the policy did not apply to the arrangement between JBT and its broker-dealer because they had purely a safekeeping relationship, rather than a credit relationship. On April 29, 1994, the FDIC IG issued a MLR report on TBSD, which failed on October 29, 1993. 
In December 1992, TBSD, with the approval of FDIC, merged with its two affiliates, Coast Bank and American Valley Bank, to form the consolidated TBSD. The report concluded that TBSD failed as a result of weak loan underwriting; concentrations in high-risk, real-estate loans; high overhead expenses; and inadequate oversight by bank management. The report stated that, in the 1980s, TBSD adopted a strategy of making high-risk loans to real-estate developers in southern California. By 1991, high-risk, real-estate loans comprised more than 50 percent of the consolidated bank’s loan portfolio. The IG concluded that many developers defaulted on their loans in the early 1990s when the real-estate market declined in California. TBSD had inadequate capital and loan loss reserves to cover these losses, and California subsequently closed the bank. The report concluded that FDIC’s supervision of TBSD was in compliance with applicable laws and regulations and that it properly identified and addressed the conditions that caused the bank to fail. The report recommended that FDIC issue regulations to implement provisions in FDICIA that are designed to improve bank lending practices. The report also concluded that the FDIC’s decision in 1992 to approve the merger between TBSD and its affiliates was reasonable. FDIC approved the merger so that the banks could reduce their expenses and so that managers responsible for their condition could be removed. However, the IG found that it may have been appropriate for FDIC to have obtained more current information about the banks’ asset quality problems before it approved the merger. These asset quality problems proved to be more substantial than originally believed in December 1992 and resulted in the bank’s failure the following October. FDIC generally concurred with the IGs’ conclusions and recommendations. Bruce K. Engle, Evaluator
Pursuant to a legislative requirement, GAO: (1) assessed the preparations, procedures, and audit guidelines that certain Inspectors General (IG) have established for material loss reviews (MLR); (2) verified the information contained in the MLR reports; (3) recommended improvements in bank supervision based on MLR reports issued between July 1, 1993, and June 30, 1994; and (4) assessed the economy and efficiency of the MLR process. GAO found that the IG reviewed have satisfied their MLR responsibilities by: (1) establishing a statement of understanding that coordinates their performance of MLR; (2) initiating and completing several pilot studies; (3) hiring staff with bank and audit experience; and (4) developing relevant training programs and comprehensive audit guidelines. In addition, GAO found that: (1) if MLR guidelines are implemented correctly, they will be adequate to determine the causes of bank failures and the quality of bank supervision; (2) the costs associated with producing MLR reports can be considerable and may cause temporary operational disruptions to IG offices; and (3) MLR requirements do not always give IG sufficient time to review reports prepared by other Federal Deposit Insurance Corporation (FDIC) officials who investigated causes of bank failures.
Once a U.S. agency determines that a ship is obsolete and no longer useful for the purposes intended, that agency must find a way to properly dispose of it. Ships that are no longer needed are screened for other uses, including transfer to another country under proper legal authority, use by another federal agency, and donation to a state or private recipient for appropriate public use. Ships may also be sunk as part of naval training exercises. Ships not used for any of these purposes are considered available for scrapping. According to a July 1997 MARAD study, ship scrapping is a labor-intensive industry with extremely high risks with respect to environmental and worker safety issues. Ships typically contain environmentally hazardous materials such as asbestos, polychlorinated biphenyls (PCB), lead, mercury, and cadmium. A ship is normally dismantled from the top down and from one end to the other with torches that cut away large parts of the ship. Pieces of the ship are lifted by crane to the ground where they are cut into the shapes and sizes required by the foundry or smelter to which the scrap is to be shipped. Remediation of hazardous materials takes place prior to, as well as during, the dismantling process. If done improperly, ship scrapping can pollute the land and water surrounding the scrapping site and jeopardize the health and safety of the people involved in the scrapping process. Ship scrapping is subject to federal, state, and local government rules and regulations on the protection of the environment and worker safety. These rules and regulations implement pertinent laws in these areas. In the environmental area, these laws include the Toxic Substances Control Act, the Resource Conservation and Recovery Act, the Clean Air Act, and the Federal Water Pollution Control Act. In the worker safety area, the primary law is the Occupational Safety and Health Act. Various federal and state regulatory agencies work to enforce these laws. (See app. 
I for more information about these laws.) Historically, government-owned surplus ships have been scrapped both domestically and overseas. As shown in table 1, MARAD has relied primarily on overseas scrapping, while the Navy has relied primarily on the domestic industry to scrap its ships. From 1983 through 1994, MARAD sold almost all of its ships for overseas scrapping. Since 1982, the Navy has not directly sold any ships for overseas scrapping. Federal agencies report that there are about 200 ships awaiting disposal or scrapping and that they are stored at various locations throughout the United States. As shown in table 2, the Navy and MARAD have the majority of ships to be scrapped, but the Coast Guard and the National Oceanic and Atmospheric Administration also have some. The Navy reports that, as of August 1, 1998, it had 127 surplus ships available to be sold for scrap. Seventy-two of these ships are expected to be sold through the Defense Logistics Agency’s Defense Reutilization and Marketing Service (DRMS). The remaining 55 ships are expected to be transferred to MARAD for sale. MARAD, which is the U.S. government’s disposal agent for surplus merchant-type ships of 1,500 tons or more, reports that it had 63 ships available for scrapping. By law, MARAD is required to dispose of all obsolete ships by September 30, 2001. The combined tonnage of Navy and MARAD surplus ships amounts to about 1 million tons—about 600,000 tons for the Navy and 400,000 tons for MARAD. Navy and MARAD officials have estimated that it will cost them at least $58 million (in fiscal year 1997 dollars) for storage, maintenance, and security of surplus ships between fiscal years 1999 and 2003 if they are not scrapped. Some ships are in such poor condition that they may need dry-docking for repairs to keep them afloat until they can be scrapped. MARAD estimates that its dry-docking and repair costs could be as high as $800,000 per ship. 
A number of factors have caused the current backlog of federal surplus ships awaiting scrapping. They include (1) reductions in the Navy’s force structure following the collapse of the former Soviet Union and the Warsaw Pact; (2) unavailability of overseas scrapping; (3) difficulties experienced by some domestic scrappers in complying with environmental, worker safety, and other contract performance provisions; and (4) a shortage of qualified domestic bidders. Navy force structure reductions following the collapse of the former Soviet Union and the Warsaw Pact have resulted in an increased number of ships to be scrapped. Since 1990, the Navy has reduced its active fleet from 570 ships to 333 ships. The Navy’s inactive fleet has increased by 82 percent since 1990 and the number of ships to be scrapped increased from about 25 in 1991 to 127 as of August 1, 1998. Overseas scrapping by MARAD was suspended in 1994 in response to an April 1993 Environmental Protection Agency (EPA) letter advising the agency that the export for disposal of PCB materials with concentrations of 50 parts per million or greater was prohibited. In accordance with the Toxic Substances Control Act, EPA regulates all aspects of the manufacture, processing, distribution in commerce, use, and disposal of PCBs. In 1980, EPA banned the export of PCBs for disposal. In 1989, the Navy became aware of the presence of PCBs in solid materials on board some of its older ships and sought EPA’s advice on how to properly handle and dispose of these materials. Subsequently, EPA confirmed that surplus ships could not be exported for scrapping if they contained solid materials with concentrations of PCBs at 50 parts per million or greater. 
In 1997, the Navy and MARAD each negotiated an agreement with EPA to allow for the export of ships for scrapping provided (1) all liquid PCBs are removed prior to export, (2) items containing solid PCBs that are readily removable and do not affect the structural integrity of the ship are also removed, and (3) countries to which the ships may be exported for scrapping are notified so that they have the opportunity to refuse to accept the ships if they so choose. The Navy and MARAD sought these agreements principally because they recognized a need to reduce their backlogs of surplus ships and the limitations of domestic scrapping efforts. Despite the agreement with EPA, Navy officials decided in December 1997 to temporarily suspend any export of ships for scrapping due to (1) continuing concerns regarding environmental pollution and worker safety in foreign ship scrapping countries and (2) potential impacts on the domestic ship scrapping industry. In January 1998, MARAD also suspended the export of ships. As of August 1998, the voluntary suspension on exports was still in effect. Specific environmental concerns revolve around the export of PCBs and other hazardous materials that could be dumped along the shorelines of developing nations and around the health and safety of foreign workers. For example, domestic industry representatives have stated that foreign ship scrapping operations would not be in compliance with the strict U.S. safety and environmental regulations. U.S. government officials have also stated that many of the major overseas ship scrapping countries have less stringent laws and regulations regarding environmental and worker safety issues than exist in the United States. Domestic industry concerns are related to the history of foreign scrappers bidding significantly higher prices to scrap ships overseas. 
This is due, in part, to a greater demand and higher selling price for scrap metal in foreign countries and lower costs of overseas operations because of the less restrictive environmental and worker safety regulations and lower labor rates. Between 1991 and 1996, the Navy repossessed 20 of the 62 ships it had sold to domestic firms for scrapping due to environmental pollution and safety compliance problems and other contractor performance issues. For example, the former aircraft carrier U.S.S. Oriskany and five other ships located at a contractor’s facility in the former Mare Island Naval Shipyard at Vallejo, California, were repossessed by the Navy because the contractor did not obtain the necessary environmental permits and its partnership dissolved. Some of these repossessions were costly. For example, according to a Navy official, the Navy had to spend about $2 million to tow 14 ships back to federal storage facilities in Philadelphia from North Carolina and Rhode Island when a ship scrapping contract was terminated due to contractor noncompliance with environmental and safety regulations. Also, the Navy and DRMS incurred additional costs for maintaining, storing, and reselling these ships. The domestic ship scrapping industry has historically been small. During the 1970s, when hundreds of ships were scrapped domestically, the industry comprised about 30 firms. However, given the small number of ships available for domestic scrapping since then, many of the firms exited the industry. Currently, there are four private ship scrappers in the United States actively scrapping federal surplus ships. In addition, for national security reasons, one naval shipyard is scrapping nuclear submarines. The typical U.S. private sector ship scrapping site is located in an urban industrial area coincident with other industrial and maritime-related facilities. 
These facilities are generally small, fewer than 10 acres, and most of the firms, until recently, worked on only one ship at a time. According to a July 1997 MARAD study, ship scrapping companies tend to be thinly capitalized. The study concluded that the industry is a risky, highly speculative business. Following the Navy’s experience with high rates of ship repossessions between 1991 and 1996, both the Navy and MARAD considered fewer firms to be technically and financially acceptable. For example, in response to MARAD’s 1996 solicitation for scrapping eight ships, the agency received only five positive bids, and only one of these was considered technically acceptable by the agency. MARAD awarded the bidder only two ships, in part, because of the bidder’s limited scrapping capacity. Similarly, Navy/DRMS solicitations in 1996 and 1997, for a total of 11 ships, resulted in only two technically acceptable proposals for each solicitation and the award of only two ships. Both the MARAD and Navy awards were made to the same firm. Recent testimony to Congress and statements made by domestic industry officials raise doubts about the willingness of new firms to enter the industry and of current firms to substantially expand their operations under current conditions. Some domestic industry representatives stated that the profits from ship scrapping have not been commensurate with the financial risks and environmental liabilities associated with it, and one representative stated that his firm was no longer willing to assume such risks. However, other industry representatives believed that they could make a profit scrapping ships, as long as they could get enough ships to justify large-scale and continuous production. As discussed later, the agencies have (1) taken action to sell ships in lots and (2) recognized that steps are needed to minimize environmental and worker safety risks associated with ship scrapping to make ship scrapping more financially attractive. 
In 1996, the Navy and MARAD identified and began implementing a number of initiatives to address domestic ship scrapping performance problems. Also, in 1998, an interagency panel endorsed the 1996 initiatives but recommended that a number of steps be taken to further improve the ship scrapping process, both domestically and internationally. It is too early to assess the impact of the 1996 initiatives, and the agencies are still reviewing the extent to which they will implement the panel’s recommendations. However, no specific time frames for completing the review have been established. Also, no procedures have been established for implementing the recommendations that are accepted. In 1996, the Navy and DRMS realized that the then-existing ship scrapping practices had contributed to the domestic contractor performance problems previously discussed. For example, prior to January 1996, DRMS (1) accepted all technical proposals with the invitation for bid, (2) relied on the high bid without seeking an independent review of the company’s business or financial background, and (3) performed only minimal contract oversight and on-site progress reviews. In an effort to correct these problems, the Navy and DRMS began taking several actions to improve their scrapping practices, as well as to make other improvements in the ship scrapping program. While sufficient experience with the actions taken is not yet available because only two Navy ships have been scrapped since 1996, the actions appear to be reasonable approaches to help address past contractor performance problems. Approaches adopted since 1996 to improve the ship scrapping practices include the following:

- Developing a two-step bid process requiring contractors to submit a technical proposal for approval before they can be considered viable candidates to place a financial bid for the surplus ships. The technical proposals are to consist of an environmental compliance plan, an operations plan, a business plan, and a safety and health plan. A technical evaluation team is to evaluate each plan, and those contractors found to have acceptable technical proposals will be asked to submit a financial bid.
- Implementing quarterly progress reviews at each scrapping site to assess the contractor’s progress and compliance with contract provisions, including environmental and safety requirements.
- Awarding contracts designed to (1) provide daily on-site surveillance of ship scrappers, (2) conduct environmental/safety site assessments, and (3) evaluate ship scrapping operations.
- Developing a contractor rating system for use in deciding how closely to provide contract surveillance.

Actions taken to improve the general management of the ship scrapping program and to address contractor concerns about the profitability of ship scrapping included the following:

- advertising and selling ships by lot and allowing contractors to remove the ships from government storage as they are ready to be scrapped;
- holding periodic industry workshops to inform contractors of what is expected of them in the scrapping of federal surplus ships and to obtain feedback from the contractors on their concerns and desires;
- evaluating the potential for removing more of the hazardous materials before the ships are advertised for sale; and
- notifying state and local regulators where the ship scrapping will be performed after contracts are awarded.

The Navy and DRMS have also adopted, and are considering, other options for disposing of ships. For example, they obtained legislative authority to negotiate contracts for ship scrapping to obtain the most advantageous contract for the government rather than awarding the contract based solely on the highest bid. MARAD also developed and adopted a number of new approaches similar to those of the Navy/DRMS. 
For example, MARAD has begun using contracting procedures that include the requirement for a technical proposal from bidders on how they would scrap ships. MARAD, like DRMS, is now considering only those bidders with acceptable technical proposals as suitable for contract award. The Department of Defense, in December 1997, took the lead in establishing an Interagency Panel on Ship Scrapping. This panel was tasked to review Navy and MARAD programs to scrap ships and to recommend ways to ensure that federal ships are scrapped in the most effective and efficient manner while protecting the environment and worker safety. While the 1996 initiatives and 1998 interagency panel recommendations, if implemented, offer the potential to address previously experienced problems, some domestic and foreign scrapping issues remain unresolved. They relate to whether the government should promote the expansion of the domestic industry and whether ships should be scrapped overseas. The actions most often discussed for addressing these issues have much different potential results. For example, federal agencies could generate higher revenues by scrapping ships overseas, but such scrapping may involve greater environmental and worker safety risks as well as adversely affect the domestic scrapping industry. Similarly, relying solely on the domestic industry for ship scrapping would avoid overseas scrapping concerns but would require a more prolonged approach to reducing the backlog or greater financial incentives to achieve domestic industry expansion. The panel made numerous recommendations to the various agencies participating in the panel on issues related to both domestic and overseas ship scrapping. While we did not do a detailed assessment of the panel’s recommendations, they do appear to address some of the previously experienced problems. 
However, the panel’s report does not resolve issues on the government’s role in promoting domestic industry expansion and the use of foreign ship scrapping. The agencies to whom the recommendations are made are responsible for deciding what actions, if any, to take. As of August 6, 1998, the agencies were still reviewing the extent to which they will implement the panel’s recommendations. Further, the process for deciding whether to accept and ultimately implement the recommendations is informal. For example, the agencies have not established specific time frames for completing their review of the recommendations. Also, once the recommendation review process is complete, lead responsibilities, tracking systems, and milestones for implementing the individual recommendations will be needed. The panel’s April 20, 1998, report concluded that the Navy and MARAD had recognized the problems identified with past contracting and monitoring practices and taken steps to address many of them. The report also stated that more could be done to (1) improve the ship scrapping contracting process, (2) encourage the development of a viable domestic industry to handle a significant portion of the backlog, and (3) make the use of foreign scrapping to augment the domestic industry a more acceptable option. More specifically, the panel recommended that the Navy/DRMS and MARAD establish consistent ship scrapping contracting procedures. For example, the Navy/DRMS and MARAD should develop standardized performance bonds to make them equally attractive to bidders. To encourage development of the domestic industry, the panel concluded that the industry needed to improve its knowledge and understanding of the ship scrapping contracting process. 
To accomplish this, the panel recommended that EPA and the Occupational Safety and Health Administration, in coordination with the Navy/DRMS and MARAD, continue to educate the industry through seminars and workshops and develop an environmental and worker safety compliance manual for industry use. The panel asserted that the industry needed additional knowledge on the techniques for scrapping large ships and the range, types, and locations of hazardous materials to ensure that ships are scrapped in an environmentally sound, safe, and economical manner. To accomplish this, the panel endorsed the Navy's plan to establish a pilot project that would quantify the scope and major costs associated with ship scrapping. The panel indicated that the U.S. government could do more to promote better environmental and worker safety controls in foreign ship scrapping countries. To that end, the panel recommended, among other things, that (1) the Navy, MARAD, and EPA expand the notification to foreign countries of the materials commonly found on specific types of ships so that the countries could object to the import of a ship with unacceptable environmental risks and (2) the Navy, MARAD, EPA, the Departments of State and Labor, and the Agency for International Development evaluate how meaningful technical assistance could be provided to interested importing countries, including whether current statutory authorities and funding are adequate for this purpose.
Another recommendation was for DRMS and MARAD to (1) examine the use of enforceable contract terms that promote environmental protection and worker safety measures overseas, including requirements that foreign bidders submit technical plans to demonstrate how they intend to comply with applicable local rules and regulations; (2) obtain information from the State Department on the qualifications and past performance of foreign scrappers; and (3) require a performance bond as an incentive for foreign scrappers to comply with contractual requirements. The panel recognized, however, that environmental and worker safety issues would have to be balanced against the economic realities of the countries doing the scrapping. The panel also recommended to the Under Secretary of Defense for Acquisition and Technology that it or a similar panel be reconvened 1 year after the report's issuance to evaluate the results of implementing the recommendations and to consider whether any additional modifications should be made. The interagency panel's specific recommendations generally represent steps directed toward correcting previously experienced problems. The effectiveness of these initiatives, if adopted, will not be known until some implementation experience has been gained. Two key issues relating to whether the government should involve itself in promoting the expansion of a domestic industry and whether to utilize the foreign ship scrapping industry are only generally addressed. Further, the process for deciding whether to accept and ultimately implement the panel's recommendations is informal. For example, the agencies have not established specific time frames for completing their review of the recommendations. Also, no procedures have been established for implementing the recommendations that are accepted.
We recommend that the Secretaries of Defense and Transportation take the lead and work with other agencies involved in ship scrapping such as the EPA and the Departments of State and Commerce to establish a specific time frame for completing the review of the interagency panel’s recommendations. Further, we recommend that, once the review is complete, each agency establish milestones for implementing those recommendations that are adopted and that the Secretaries of Defense and Transportation designate lead responsibilities within their respective organizations for addressing individual panel recommendations. The Department of Defense provided comments on a draft of this report, which are presented in appendix III. The Department concurred with both of our recommendations. It also provided some technical comments, which we have incorporated as appropriate. We also requested comments from the Department of Transportation and EPA. Neither agency had provided comments prior to report issuance. We conducted our review between November 1997 and September 1998 in accordance with generally accepted government auditing standards. The scope and methodology for our review are discussed in appendix II. We are sending copies of this report to the Chairman of the Senate Committee on Governmental Affairs, the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, the Senate Committee on Armed Services, and the House Committee on National Security. We are also sending copies of this report to the Secretaries of Defense and the Navy; the Secretaries of Commerce, Transportation, Labor, and State; the Administrators of MARAD, EPA, the National Oceanic and Atmospheric Administration, and the Occupational Safety and Health Administration; and the Directors of the Defense Logistics Agency and the Office of Management and Budget. We will make copies available to others upon request. 
If you have any questions about this report, you may contact me at (202) 512-8412. Major contributors to this report are listed in appendix IV. The Toxic Substances Control Act provides the Environmental Protection Agency (EPA) with the authority to regulate substances that pose a risk to human health or the environment. Asbestos and polychlorinated biphenyls (PCB) are among the more common substances regulated. Ship scrapping contractors are required to comply with the applicable regulations promulgated by EPA under this legislation, including regulations for the proper removal, storage, transportation, and disposal of materials containing asbestos and PCBs at concentrations of 50 parts per million or greater. The Resource Conservation and Recovery Act of 1976, as amended, is a comprehensive authority for all aspects of managing hazardous wastes. The act and the Hazardous and Solid Waste Amendments of 1984 protect human health and the environment from the potential hazards of waste disposal, promote energy and natural resource conservation, reduce the amount and toxicity of waste generated, and ensure that wastes are managed in an environmentally sound manner. The act places "cradle to grave" responsibility for hazardous waste on those personnel or units handling the waste. Waste oil, paints, and solvents are among the types of substances regulated under the act. The act is generally administered by the states under delegation of authority from EPA. The Federal Clean Air Act forms the basis for the national air pollution control effort. Basic elements of the act include establishing national ambient air quality standards for air pollutants and regulating hazardous air pollutants such as lead. EPA and the states administer the act. The Federal Water Pollution Control Act of 1972 bans facilities from discharging pollutants such as metals and acids into lakes, rivers, streams, and coastal waters.
Regulation is accomplished by means of discharge permits issued by the states and EPA. The Occupational Safety and Health Act of 1970 was enacted to ensure safe and healthful working conditions for workers. Federal standards developed under the act cover shipyard work and the ship scrapping industry. The Occupational Safety and Health Administration's regions, along with state and local regulatory agencies, are responsible for enforcing these worker safety standards. To identify the factors contributing to the backlog of federal ships available for scrapping, we performed relevant work at the principal agencies identified to possess and dispose of federal surplus ships for scrapping—the Departments of Defense, Navy, and Army; the Defense Logistics Agency and its Defense Reutilization and Marketing Service (DRMS); the Department of Transportation, including the Maritime Administration (MARAD) and the Coast Guard; the Department of Commerce's National Oceanic and Atmospheric Administration; and the General Services Administration. This work included discussing and obtaining information on the size and scope of the domestic ship scrapping industry, the historical data and current backlog of ships to be scrapped and factors contributing to the backlog, studies analyzing the domestic industry and its capabilities, visits to selected surplus ship storage locations, and identification of recent performance problems. We also made visits and inquiries to selected current and former ship scrapping contractors to obtain their comments and views on issues such as the state of the domestic ship scrapping industry and its capacity to handle the federal backlog of surplus ships. To review the federal agencies' efforts to address the backlog, we examined the federal ship marketing and sales functions at each agency selling federal surplus ships and discussed with program personnel the various options for disposing of the ships.
At each agency, we identified its legislative authorities to dispose of and sell ships for scrapping; reviewed its policies, procedures, and practices for selling surplus ships; evaluated the most recent contracts used in the sale of these ships; and identified the actions taken to address ship scrapping problems and improve the agency's program. We also visited and requested information from selected ship scrapping contractors concerning the agencies' efforts to address the past performance problems. Further, we attended meetings of the federal joint ship disposal conference and other workshops held by Navy and DRMS personnel. In addition, we visited the regulatory agencies, EPA and the Occupational Safety and Health Administration, and met with agency program and legal representatives to discuss and obtain information on the standards used to regulate environmental and worker safety matters and the enforcement of their respective regulations within the ship scrapping industry. Furthermore, we reviewed the Department of Defense-led interagency panel's April 20, 1998, report on ship scrapping, focusing primarily on its conclusions and recommendations. We also reviewed the agreements between EPA and the Navy and MARAD for the export of ships for scrapping and various studies that include information on the overseas ship scrapping industry. We also held discussions with the agencies' program managers responsible for ship sales to identify the scope of the foreign market, the potential for reducing the backlog of surplus ships and the associated maintenance and storage costs, and the advantages and disadvantages of overseas scrapping. Furthermore, we asked for feedback from members of the domestic industry on the potential impact of foreign scrapping on the domestic industry. We visited the State Department to discuss and obtain information on its involvement in the export of ships for overseas scrapping.
At EPA, we also discussed and obtained information on the agency's proposed rulemaking on PCBs and the agreements the agency had made with other agencies for the export of ships for scrapping. Margaret L. Armen, Senior Attorney
Pursuant to a congressional request, GAO reviewed the status of federal ship scrapping programs, focusing on: (1) the factors contributing to the backlog of about 200 surplus ships waiting to be scrapped; and (2) federal agencies' efforts to address the backlog. GAO noted that: (1) key factors contributing to the current backlog of surplus ships awaiting scrapping are the Navy's downsizing following the collapse of the former Soviet Union, the unavailability of overseas scrapping, and a shortage of qualified domestic scrappers; (2) as a result, the backlog of Navy ships to be scrapped has increased since 1991 from 25 to 127; (3) overseas scrapping has been suspended because of legal constraints on the export of polychlorinated biphenyls for disposal; (4) a 1997 agreement to resume overseas scrapping has been temporarily suspended largely because of concerns about environmental and worker safety problems in foreign countries and the impact of foreign scrapping on the domestic industry; (5) progress in reducing the backlog using domestic scrappers has been limited; (6) one reason has been domestic contractor performance difficulties; (7) a second reason has been a shortage of qualified domestic bidders; (8) between the beginning of 1996 and the end of 1997, the Navy and the Maritime Administration (MARAD) requested scrapping bids on 19 ships, but only 4 were actually sold--all to the same domestic bidder--because of the limited number of qualified bidders; (9) since then, MARAD has sold an additional 11 ships for scrapping; (10) federal agencies have identified and begun implementing a number of initiatives to address some of the specific performance issues associated with domestic scrapping; (11) since a key performance issue was contractor noncompliance with environmental and worker safety requirements, several of the initiatives provide for increased screening of contractors prior to award and increased oversight of the performing contractor after award; (12) 
other initiatives are intended to help attract more qualified domestic bidders; (13) it is too early to assess the impact of these initiatives because few ships have been scrapped since their implementation; (14) additional recommendations for addressing both domestic and overseas scrapping issues were made in April 1998 by an interagency panel; (15) the panel's recommendations expand on the actions to address contracting and oversight problems; (16) however, they only generally address key issues relating to government actions to expand the domestic industry and the scrapping of federal ships in foreign countries; (17) the process for deciding whether to accept and ultimately implement the panel's recommendations is informal; and (18) also, no procedures have been established for implementing the recommendations that are accepted.
The Joint Strike Fighter Program is structured to use a common production line to produce three versions of a single aircraft. These aircraft will be tailored to meet conventional flight requirements for the U.S. Air Force, short take-off and vertical landing characteristics for the U.S. Marine Corps, and carrier operation suitability needs for the U.S. Navy. The program will also provide aircraft to the British Royal Navy and Air Force. Table 1 shows the services’ planned use for the Joint Strike Fighter. A key objective of the Joint Strike Fighter acquisition strategy is affordability—reducing the development, production, and ownership costs of the program relative to prior fighter aircraft programs. To achieve its affordability objective, the Joint Strike Fighter program has incorporated various acquisition initiatives into the program’s acquisition strategy and various technological advances into the fighter. Among the acquisition initiatives planned was to develop critical technologies to a level where they represent low technical risk before the engineering and manufacturing contract is awarded. The expectation was that incorporating these initiatives into the acquisition strategy would avoid cost growth, schedule slippage, and performance shortfalls that have been experienced in other weapon acquisition programs. To date, the Joint Strike Fighter Program has awarded contracts totaling over $2 billion to Boeing and Lockheed Martin for the current concept demonstration phase. During this phase, DOD required each contractor to design and build two aircraft to demonstrate the following: commonality/modularity to validate the contractors’ ability to produce three aircraft versions on the same production line; the aircraft’s ability to do a short take-off and vertical landing, hover, and transition to forward flight; and satisfactory low airspeed, carrier approach flying and handling qualities. 
Each contractor was required to submit a Preferred Weapon System Concept, which outlines its final design concept for developing a Joint Strike Fighter aircraft that is affordable and meets performance requirements. The Preferred Weapon System Concept includes results from the flight and ground demonstrations and is being used by DOD to select the winning aircraft design and to award the engineering and manufacturing development contract. During engineering and manufacturing development, the Joint Strike Fighter will be fully developed, engineered, designed, fabricated, tested, and evaluated to demonstrate that the production aircraft will meet stated requirements. Critical junctures in engineering and manufacturing development are the preliminary and critical design reviews; the testing of aircraft; and commitments to production hardware, including the purchase of long-lead production items. It is at the critical design review that decisions are made to finalize the aircraft design and begin building test aircraft. About two-thirds of engineering and manufacturing development funding will be spent after this review. Figure 1 shows planned Joint Strike Fighter aircraft designs by contractor. In our previous work on best business practices, commercial firms have told us that a key part of product development is getting the technology into the right size, weight, and configuration needed for the intended product—in this case, the final Joint Strike Fighter design. Once this has been demonstrated, the technology is at an acceptable level for product development. Technology readiness levels (TRL) can be used to assess the maturity of technology and can reveal whether a gap exists between a technology's maturity and the maturity demanded for successful inclusion in the intended product.
Defining this gap for the Joint Strike Fighter technologies is important for determining whether they can be expected to demonstrate required capabilities before being integrated into the aircraft design. Readiness levels are measured along a scale of one to nine, starting with paper studies of the basic concept, proceeding with laboratory demonstrations, and ending with a technology that has proven itself on the intended product. (See app. I for a detailed description of TRLs.) The Air Force Research Laboratory considers TRL 7 an acceptable risk for starting the engineering and manufacturing development phase. The readiness level definitions state that for a technology to be rated at TRL 7, it must be demonstrated using prototype hardware (such as a complete radar subsystem) that is the same size, weight, and configuration as that called for in the final aircraft design, and that prototype must be demonstrated to work in an environment similar to the planned operational system. We have previously reviewed the impact of incorporating technologies into new product and weapon system designs. The results showed that programs met product objectives when technologies had been matured to higher readiness levels and, conversely, that cost and schedule problems arose when programs began with technologies at low readiness levels. For example, the Joint Direct Attack Munition (JDAM) used modified variants of proven components for guidance and global positioning. It also used mature, existing components from other proven manufacturing processes for its own system for controlling tail fin movements. The munition was touted for its performance in Kosovo and was purchased for less than half of its expected unit cost. However, the Comanche helicopter program began with critical technologies such as the engine, rotor, and integrated avionics at TRL 5 or below.
That program has seen 101-percent cost growth and 120-percent schedule slippage as a result of these low maturity levels and other factors. In commenting on our report concerning better management of technology development, DOD agreed that TRLs are important and necessary in assisting decision makers in deciding when and where to insert new technologies into weapon system programs and that it is desirable to mature technologies to TRL 7 prior to entering the engineering and manufacturing development phase of a weapon system program. Since that time, DOD has adopted technology readiness levels as a means of assessing the technological maturity of new major programs. In a July 5, 2001, memorandum, the Deputy Under Secretary of Defense (Science and Technology) stated that new DOD regulations require that the military services' science and technology executives conduct a technology readiness level assessment for critical technologies identified in major weapon system programs prior to the start of engineering and manufacturing development and production. The memorandum notes that technology readiness levels are the preferred approach for all new major programs unless the Deputy Under Secretary approves an equivalent assessment method. The Joint Strike Fighter Program, like many other DOD programs, has used risk management plans and engineering judgment as a way of assessing technological maturity. The Principal Deputy Under Secretary of Defense (Acquisition and Technology) has determined that these means will continue to be used by DOD and the Joint Strike Fighter contractors to assess the program's technological risk. Risk management plans and judgment are necessary for managing any major development effort like the Joint Strike Fighter.
However, without an underpinning such as technology readiness levels that allows transparency into program decisions, these methods allow significant technical unknowns to be judged acceptable risks because a plan exists for resolving the unknowns in the future. Experience on previous programs has shown that such methods have rarely assessed technical unknowns as a high or unacceptable risk; consequently, they have failed to guide programs to meet promised outcomes. Technology readiness levels are based on actual demonstrations of how well technologies perform. Their strength lies in the fact that they characterize knowledge that exists rather than plans to gain knowledge in the future; they are, thus, less susceptible to optimism. In May 2000 we reported that all eight technologies identified by the Joint Strike Fighter program office as critical to the program were expected to be at maturity levels below that considered acceptable for low risk when entering engineering and manufacturing development (TRL 7). The eight critical technologies are prognostics and health management, integrated flight propulsion control, subsystems, integrated support system, integrated core processor, radar, manufacturing, and mission systems integration. (See app. II for a description of these technologies.) During our review last year, we worked with the two competing contractors and the program office to arrive at the applicable TRLs for the critical technologies. Specifically, on separate visits to the contractors, with program office personnel present, we asked the contractors' relevant technology managers to score the technologies they considered critical to enable their Joint Strike Fighter design to meet DOD requirements for the aircraft. At that time, we also asked them to describe their plans to mature the technologies by the planned start of the engineering and manufacturing development phase, then scheduled for April 2001.
Upon reviewing these scores with the program office, and in order to gain an overall Joint Strike Fighter Program perspective on technical maturity, the Joint Strike Fighter office agreed to provide us with TRL scores for the eight technologies it considered critical for meeting program cost and performance requirements. Figure 2 reflects the program office scores at the time of our last review. Because of the current Joint Strike Fighter competition, the specific technologies mentioned previously are not linked to scores, so as not to divulge competition-sensitive information. As the figure shows, all eight technologies were projected to be below the level of maturity (TRL 7) considered acceptable for low risk when entering the engineering and manufacturing development phase, and six of the technologies were projected to be below the level of maturity (TRL 6) that is considered low risk for entering the demonstration phase, which the Joint Strike Fighter Program began in 1996. During our current review, we again visited the two competing contractors to discuss the status of the eight technologies. We learned that they have essentially accomplished, or plan to accomplish by October 2001, the technology development and demonstrations that they had planned to accomplish as of April 2001. Thus, figure 2 represents the current assessment of technical maturity. While two of these areas are very close to appropriate maturity levels, the Joint Strike Fighter's critical technologies are not projected to be matured to levels that we believe would indicate a low-risk program at the planned start of the engineering and manufacturing development phase.
Key component technologies remain at higher risk levels for engineering and manufacturing development because (1) they have not been developed to approximately the same size, weight, and configuration called for in the final aircraft design and/or (2) they have not been demonstrated to work in an environment similar to the planned operational system. The Joint Strike Fighter Program has made good progress in some technology areas. For example, contractor and program officials told us that because of concerns about propulsion technology, both contractors focused considerable attention on that area. Both contractors flew aircraft that demonstrated the capability for short take-off and vertical landing and accumulated at least 20 hours of flight time on those aircraft, which should satisfy the requirement in the Fiscal Year 2001 National Defense Authorization Act. In some other areas, technology maturation has not been uniform across all critical components of a technology. For example, the radar has a number of critical components that must work together as a system. Both contractors have made considerable progress on one or more of those components, but the other critical components have not been matured to an acceptable level of risk. For this technology to achieve TRL 7, all components would have to be (1) demonstrated in the size and weight required to meet aircraft capabilities, (2) integrated together as they would be in the final aircraft design, and (3) flown in an environment similar to that to which the Joint Strike Fighter will be subjected. To demonstrate some critical technologies, both contractors flew key electronic and other components in flying avionics test beds (commercial aircraft reconfigured as flying laboratories). While these tests occurred in a relevant environment (e.g., in flight), the tested hardware was not always the same size and weight required for the Joint Strike Fighter aircraft.
Conversely, some components were built to the required size and weight but were demonstrated only in ground-testing environments. Because not all critical technology areas have been matured to appropriate levels, the program remains at risk of not achieving its cost and performance goals upon entering product development. Moving into engineering and manufacturing development creates an expectation that the Joint Strike Fighter can be delivered for a stated time and dollar investment and with a given set of capabilities. The decisions the Department of Defense makes now and over the next 2 years will largely determine whether those expectations can be met. A key component of the Joint Strike Fighter Program's acquisition strategy is to enter the engineering and manufacturing development phase with low technical risk. The program will not have achieved that point by October 2001 because technologies that the Joint Strike Fighter Program Office identified as critical to meeting the program's cost and requirements objectives will not have been matured to an acceptable risk level. By entering the engineering and manufacturing development phase with immature critical technologies, the program will need to continue to develop those technologies at the same time it will be concentrating on production issues and the integration of subsystems into a Joint Strike Fighter. This approach would not be consistent with best practices. In fact, it would more closely follow DOD's traditional practices in weapon system programs, which have often resulted in cost increases, schedule delays, and compromised performance. To eliminate one of the major sources of cost and schedule risk, we recommend that the Secretary of Defense delay the start of engineering and manufacturing development until critical technologies are matured to acceptable levels.
Alternatively, if the Secretary of Defense decides to accept these risks and move the program into engineering and manufacturing development as scheduled, we recommend that the Secretary dedicate the resources needed to ensure that the maturity of the critical technologies is demonstrated by the critical design review or defer including immature technologies in the approved design. In written comments on a draft of this report, the Director of Strategic and Tactical Systems, within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, partially concurred with our recommendation. DOD contended that an independent technology readiness assessment it carried out on the program showed that technology has extensively matured and the program is now ready to enter into systems development and demonstration. DOD also stated that the Joint Strike Fighter Program Office has implemented a risk management program that will continue to monitor and address technology risks, as well as other risks, throughout the program’s life. The full text of DOD’s comments is included in appendix III. We disagree with the Department’s assessment of technological maturity. The TRL assessment conducted as part of our review showed that technologies critical to the Joint Strike Fighter Program are not projected to be matured to levels that we believe would indicate a low risk program at the planned start of the engineering and manufacturing development phase. Many of the technologies have not been demonstrated in their appropriate size and weight, nor have they been demonstrated to function in the environment in which they will be used. For example, many of the technologies are still in the laboratory and will require considerable maturation before they can be incorporated into the final design. 
By entering the engineering and manufacturing development phase with immature critical technologies, the program will need to continue to develop those technologies at the same time it will be concentrating on engineering, designing, and fabricating the product. As it has with many other DOD programs, this approach increases the likelihood of schedule delays and program cost increases. This is primarily why DOD’s new acquisition regulations emphasize separating technology development from product development. In fact, experience has shown that resolving technology problems in product development can result in at least a tenfold cost increase. Moreover, DOD incorrectly states that the tools it used to assess its technology and the TRLs used for our review are equivalent methodologies for assessing technological maturity. The Willoughby templates used by DOD are a risk management tool. They can be an excellent way to manage program risks, but in practice they have not been used to identify risk. Identifying risk is the first step to managing it. By contrast, by focusing specifically on assessing technology maturity against objective standards, TRLs have proven successful at identifying risks. A more appropriate approach for DOD would be to use technology readiness levels in conjunction with a management tool such as the Willoughby templates, since this can result in more informed decision making and fewer unanticipated problems in an acquisition program. In fact, the Joint Strike Fighter program provides DOD with an excellent opportunity to apply these concepts in tandem. 
To assess whether the Joint Strike Fighter’s critical technologies are projected to mature to low technical risk at the start of the engineering and manufacturing development phase, we used the technology readiness level tool and information provided by Joint Strike Fighter program officials and contractor officials at the Boeing Company, Seattle, Washington; Lockheed Martin Aeronautics Company, Fort Worth, Texas; and Pratt & Whitney, East Hartford, Connecticut. During our previous review, we had obtained detailed briefings from Boeing and Lockheed Martin officials on their plans to mature critical technologies prior to the date for awarding the engineering and manufacturing development contract, then scheduled for April 2001. We had also obtained program office and contractor assessments of the expected technology readiness levels for the critical technologies at April 2001. During our current review, we obtained detailed briefings from program office personnel on the status of critical technologies. We also obtained detailed briefings from Boeing, Lockheed Martin, and Pratt & Whitney officials on the contractors’ progress in maturing critical technologies and any further maturation plans through October 2001. We compared the latest information from the program office and the contractors to the information obtained during our prior review to determine whether the critical technologies had been matured to higher technology readiness levels and, if so, what levels they had achieved. We conducted our review from April through September 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the congressional defense committees; the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable James G. Roche, Secretary of the Air Force; the Honorable Gordon R. England, Secretary of the Navy; General James L. Jones, Commandant of the Marine Corps; and the Honorable Mitchell E. 
Daniels, Jr., Director, Office of Management and Budget. We will also make copies available to other interested parties on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Key contributors to this report were Robert Pelletier and Brian Mullins.

Appendix I: Technology Readiness Levels

1. Basic principles observed and reported. Description: Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology’s basic properties.

2. Technology concept and/or application formulated. Description: Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies.

3. Analytical and experimental critical function and/or characteristic proof of concept. Description: Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

4. Component and/or breadboard validation in laboratory environment. Description: Basic technological components are integrated to establish that the pieces will work together. This is relatively “low fidelity” compared to the eventual system. Examples include integration of “ad hoc” hardware in a laboratory.

5. Component and/or breadboard validation in relevant environment. Description: Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include “high fidelity” laboratory integration of components.

6. System/subsystem model or prototype demonstration in a relevant environment. Description: Representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology’s demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment.

7. System prototype demonstration in an operational environment. Description: Prototype near or at planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in an operational environment, such as in an aircraft, vehicle, or space. Examples include testing the prototype in a test bed aircraft.

8. Actual system completed and “flight qualified” through test and demonstration. Description: Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

9. Actual system “flight proven” through successful mission operations. Description: Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last “bug fixing” aspects of true system development. Examples include using the system under operational mission conditions.

Appendix II: Critical Technologies and Their Descriptions

Involves the ability to detect and isolate the cause of aircraft problems and then predict when maintenance activity will have to occur on systems with pending failures. Life-cycle cost savings are dependent on prognostics and health management through improved sortie generation rate, reduced logistics and manpower requirements, and more efficient inventory control.

Includes integration of propulsion, vehicle management system, and other subsystems as they affect aircraft stability, control, and flying qualities (especially short take-off and vertical landing). Aircraft improvements are to reduce pilot workload and increase flight safety.

Includes areas of electrical power, electrical wiring, environmental control systems, fire protection, fuel systems, hydraulics, landing gear systems, mechanisms, and secondary power. Important for reducing aircraft weight, decreasing maintenance cost, and improving reliability.

Involves designing an integrated support concept that includes an aircraft with supportable stealth characteristics and improved logistics and maintenance functions. Life-cycle cost savings are expected from improved low observable maintenance techniques and streamlined logistics and inventory systems.

Includes the ability to use commercial-based processors in an open architecture design to provide processing capability for radar, information management, communications, etc. Use of commercial processors reduces development and production costs and an open architecture design reduces future development and upgrade costs.

Includes advanced integration of communication, navigation, and identification functions and electronic warfare functions through improved apertures, antennas, modules, radomes, etc. Important for reducing avionics cost and weight, and decreasing maintenance cost through improved reliability.

Involves lean, automated, highly efficient aircraft fabrication and assembly techniques. Manufacturing costs should be less through improved flow time, lower manpower requirements, and reduced tooling cost.

Involves decreasing pilot workload by providing information for targeting, situational awareness, and survivability through fusion of radar, electronic warfare, and communication, navigation, and identification data. Improvements are achieved through a highly integrated concept of shared and managed resources, which reduces production costs, aircraft weight, and volume requirements, in addition to improved reliability.
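The nine-level TRL scale described above can be sketched as a simple lookup, with a rough risk band attached to each level. The level-to-risk mapping below is an illustrative assumption on our part, keyed only to the report's treatment of TRL 7 as indicating low risk for entering product development; the one-line definitions paraphrase the fuller descriptions.

```python
# Paraphrased one-line summaries of the nine technology readiness levels.
TRL_DEFINITIONS = {
    1: "Basic principles observed; paper studies of basic properties",
    2: "Practical application invented; analysis still speculative",
    3: "Analytical and laboratory studies validate separate elements",
    4: "Basic components integrated in a low-fidelity laboratory setup",
    5: "Breadboard components tested together in a simulated environment",
    6: "System/subsystem prototype demonstrated in a relevant environment",
    7: "System prototype demonstrated in an operational environment",
    8: "Technology proven in its final form under expected conditions",
    9: "Technology applied in its final form under mission conditions",
}

def risk_band(trl):
    """Illustrative mapping of TRL to program risk at the start of
    engineering and manufacturing development (assumed thresholds)."""
    if trl >= 7:
        return "low"
    if trl >= 5:
        return "moderate"
    return "high"

print(risk_band(6))  # -> moderate
```

Under this sketch, a technology still in laboratory testing (TRL 4 or below) would sit in the high-risk band, which is the situation the report describes for several of the Joint Strike Fighter's critical technologies.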
The Joint Strike Fighter Program (JSFP), the military's most expensive aircraft program, is intended to produce affordable, next-generation aircraft to replace aging aircraft in military inventories. Although JSFP has made good progress in some technology areas, the program may not meet its affordability objective because critical technologies are not projected to be matured to levels GAO believes would indicate a low risk program at the planned start of engineering and manufacturing development in October 2001.
We took several steps to update information on the status of TARP funds, including disbursements, dividend payments, repurchases, and warrant liquidations, from October 3, 2008, through September 25, 2009 (unless otherwise noted), and the status of Treasury’s actions taken in response to recommendations from our TARP reports, including its progress in developing a comprehensive system of internal control. We reviewed documents provided by OFS and conducted interviews with officials from OFS, including the Chief Financial Officer, Deputy Chief Financial Officer, Cash Management Officer, Director of Internal Controls, and their representatives. For the Capital Purchase Program (CPP), we reviewed documents from OFS that described the amounts, types, and terms of Treasury’s purchases of senior preferred stocks, subordinated debt, and warrants under CPP. We also reviewed documentation and interviewed officials from OFS who were responsible for approving financial institutions’ participation in CPP and overseeing the repurchase process for CPP preferred stock and warrants. Additionally, we contacted officials from the four federal banking regulators—the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System and Federal Reserve Banks (Federal Reserve), and the Office of Thrift Supervision (OTS)—to obtain information on their processes for reviewing CPP applications, the status of pending applications, their processes for reviewing preferred stock and warrant repurchase requests, and their examination processes for reviewing recipients’ lending activities and compliance with TARP requirements. To update the status of the Capital Assistance Program (CAP), we reviewed relevant documents and interviewed OFS officials about the program. We also met with Federal Reserve officials to discuss the stress test methodology and results for the 19 largest U.S. 
bank holding companies and reviewed related documents relevant to CAP. To update our work on the Targeted Investment Program (TIP) and the Asset Guarantee Program (AGP), we reviewed the Securities Purchase Agreements that Citigroup Inc. (Citigroup) and Bank of America Corporation (Bank of America) entered into with Treasury and the Master Agreement signed by Citigroup, Treasury, FDIC, and the Federal Reserve Bank of New York (FRBNY). In addition, we interviewed OFS officials, including the acting Chief Investment Officer and the General Counsel, to obtain information on the current status of TIP and AGP in terms of new applicants for the programs, compliance with their requirements, and possible exit strategies for unwinding the TIP investments. For the Systemically Significant Failing Institutions (SSFI) program, we reviewed relevant documents from Treasury, the Federal Reserve, and American International Group, Inc. (AIG), including securities purchase agreements, periodic reports provided to Treasury, and other relevant documentation. We also met with officials from each organization and relevant state insurance regulators. To meet the report’s objectives with respect to the Consumer and Business Lending Initiative, we reviewed announcements and other publicly available information on the Term Asset-Backed Securities Loan Facility (TALF) that were available on the FRBNY’s Web site, in OFS internal reports, and in program design documents from Treasury and FRBNY. We also interviewed officials from OFS, FRBNY, and the Federal Reserve, as well as TALF investors, a securitization attorney, three underwriters, two major credit rating agencies, an academic, a policy analyst, and TALF issuers for commercial mortgage-backed securities (CMBS) and asset-backed securities (ABS) backed by credit cards, auto loans, student loans, and Small Business Administration (SBA) loans. 
To meet the report’s objectives with respect to the Public-Private Investment Program (PPIP), we reviewed PPIP-related announcements, OFS internal reports, and program operation and design documents published by Treasury and FDIC. We also interviewed officials from Treasury and FDIC, as well as a policy analyst and an economist. To determine the status of TARP assistance provided through the Automotive Industry Financing Program (AIFP), we reviewed documents related to the restructuring of General Motors Company (GM) and Chrysler Group LLC (Chrysler), including the automakers’ bankruptcy filings, credit agreements between the automakers and the federal government, and TARP disbursements to the automakers. We also interviewed Treasury officials, including officials from Treasury’s auto team, and representatives from GM and Chrysler. To determine the program status of the Home Affordable Modification Program (HAMP) and the status of our previous recommendations related to the program, we reviewed Treasury’s guidelines for each HAMP component, published reports on servicer performance, and Treasury’s written response to our July recommendations. In addition, we interviewed Treasury officials and officials at Fannie Mae and Freddie Mac—financial agents of Treasury for HAMP—about the status of program implementation, including a comprehensive system of internal control. We also reviewed documentation of Treasury’s recent communications with servicers, such as draft servicer guidelines and letters sent to participating servicers. Finally, we spoke with representatives of consumer groups, housing counselors, and servicer associations to obtain their views on the implementation of HAMP to date. 
To determine Treasury’s progress in developing an overall communications strategy for TARP, we interviewed individuals from OFS and Treasury’s Office of Public Affairs and Office of Legislative Affairs to determine what steps Treasury had taken to improve and coordinate communications with the public and Congress. To assess Treasury’s progress in hiring permanent staff for OFS, we met with officials from the Human Resources Office and OFS to discuss hiring efforts and reviewed various documents that OFS provided to us. In the interviews, officials discussed their processes for recruiting individuals with the skill sets and competencies needed to administer TARP, including steps taken to find permanent replacements to fill key leadership positions. To examine changes in the composition of staff since the office was established, we reviewed past GAO reports on TARP and various documents that OFS provided to us, including OFS’s updated organizational chart. To gauge OFS’s mix of permanent and temporary staff and the number of vacancies, we reviewed the totals for each type of staff over time and within each OFS office. To assess OFS’s use of contractors and financial agents to support TARP administration and operations, we obtained information from Treasury on contracting activity as of September 18, 2009—including task orders and modifications—for the OFS-support financial agency agreements, contracts, blanket purchase agreements, and interagency agreements (IAA). We analyzed this information to identify each contract’s and agreement’s purpose, period of performance, and potential value. To assess OFS’s processes for (1) management and oversight of contractors’ and financial agents’ performance, and (2) managing conflicts of interest of contractors and financial agents supporting TARP administration and operations, we reviewed applicable documents that had become available from OFS since our June 2009 report. 
We also communicated with Treasury compliance officials and reviewed applicable documentation concerning OFS’s progress in (1) completing reviews of vendor conflicts- of-interest mitigation plans to conform with applicable TARP requirements and (2) issuing guidance on OFS requirements and procedures for documenting and resolving conflicts of interest. As we noted in our initial report under the mandate, we identified a preliminary set of indicators on the state of credit and financial markets that might be suggestive of the performance and effectiveness of TARP. We consulted Treasury officials and other experts and analyzed available data sources and the academic literature. We selected a set of indicators that offered perspectives on different facets of credit and financial markets, including perceptions of risk, cost of credit, and flows of credit to businesses and consumers. We assessed the reliability of the data upon which the indicators were based and found that, despite certain limitations, they were sufficiently reliable for our purposes. To update the indicators in this report, we primarily used data from Thomson Reuters Datastream, a financial statistics database. As these data are widely used, we conducted only a limited review of the information but ensured that the trends we found were consistent with other research. We also relied on data from Inside Mortgage Finance, Treasury, the Federal Reserve, the Chicago Board Options Exchange, the Securities Industry and Financial Markets Association, and Global Insight. We have relied on data from these sources for past reports and determined that, considered together, they are sufficiently reliable for the purpose of presenting and analyzing trends in financial markets. 
The data from Treasury’s survey of lending by the largest CPP recipients (as of July 31, 2009, the latest available survey) are based on internal reporting from participating institutions, and the definitions of loan categories may vary across banks. Because these data are unique, we are not able to benchmark the origination levels against historical lending or seasonal patterns at the institutions. Based on discussions with Treasury and our review, we found that the data were sufficiently reliable for the purpose of documenting trends in lending. Lastly, we collected data on loan balances from the Consolidated Financial Statements for Bank Holding Companies Y-9C Report Forms, the primary analytical tool that regulators use to monitor financial institutions. We verified that the input process did not result in data entry errors. Because the Y-9C is the primary source for balance sheet data and can be corroborated to some extent by audited financial statements, we conducted only a limited review of this data. We conducted this performance audit from July 2009 to October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Before the act was passed, TARP was expected to be a program to purchase mortgage-backed securities (MBS) and whole loans from financial institutions to stabilize the financial system. Within 2 weeks of enactment, however, following similar action by several foreign governments and central banks, Treasury—through the newly established OFS—announced that it would make $250 billion available to U.S. 
financial institutions through purchases of preferred stock to provide additional capital that would help enable the institutions to continue lending. This effort was coordinated with a number of foreign governments as part of a global effort to stabilize financial markets. In the United States, the Federal Reserve and FDIC also announced concurrent coordinated actions that were intended to increase confidence in the U.S. financial system. Treasury’s decision to change its strategy raised questions about TARP’s transparency, and the fact that the funds were disbursed before a comprehensive system of internal control had been established raised issues of accountability. Figure 1 provides an overview of key dates for TARP implementation. In the last year, GAO has made 35 recommendations to Treasury and one to the Federal Reserve on a number of issues surrounding the implementation of TARP and the need to improve its operations and transparency. Some of our recommendations applied to TARP in general, while others were specific to individual programs, such as CPP and HAMP. Our recommendations to Treasury generally fell into three broad categories: (1) transparency, reporting, and accountability; (2) management infrastructure; and (3) communication. Other TARP oversight entities, such as the Special Inspector General for TARP (SIGTARP) and the Congressional Oversight Panel, have also made numerous recommendations aimed at improving the implementation and oversight of TARP.

Transparency, reporting, and accountability. We made a series of recommendations aimed at improving the transparency and accountability of TARP and its programs. Initially, we made a series of recommendations aimed at improving the transparency of CPP. As a result, OFS now requires all CPP participants to participate in some form of monthly lending survey. We recommended that OFS report publicly the monies, such as dividends, paid to Treasury by TARP participants, something OFS started doing in June 2009. 
Similarly, Treasury took steps to implement our recommendations aimed at making the warrant repurchase process more transparent. Finally, we made a number of recommendations addressing the basis and design of HAMP’s Home Price Decline Protection program and the need to routinely review and update the key assumptions that underlie Treasury’s projection of the number of borrowers likely to be assisted. Treasury has started to address many of these recommendations.

Management infrastructure. To ensure that OFS established a robust management structure, comprehensive system of internal control, and assessment process, we made a series of recommendations aimed at addressing challenges associated with establishing a federal program in a short period of time, including challenges associated with staffing, contractor oversight, and internal controls. For example: We recommended that Treasury expedite its hiring efforts to help ensure that OFS had the needed personnel throughout the implementation phase of the program and that key OFS leadership positions were filled during and after the transition to the new administration. In certain areas, challenges remain, and most recently we recommended that Treasury staff vacant positions in the Homeownership Preservation Office—including filling the position of Chief Homeownership Preservation Officer with a permanent placement—and evaluate staffing levels and competencies. We recommended that OFS take a number of actions to ensure an appropriate oversight infrastructure to manage contractors and address conflicts of interest, including ensuring that sufficient personnel were assigned and properly trained to oversee the performance of all contractors. We recommended that OFS issue regulations on conflicts of interest involving Treasury’s agents, contractors, and their employees and related entities as expeditiously as possible. 
We also recommended issuing guidance requiring that key communications and decisions concerning potential or actual vendor-related conflicts of interest be documented. More broadly, we recommended that Treasury continue to develop a comprehensive system of internal control over TARP, including policies, procedures, and guidance that were robust enough to protect taxpayers’ interests and ensure that the program objectives were being met. For example, we recommended improvements in documenting certain internal control procedures and in updating the guidance available to the public on determining warrant exercise prices so that it was consistent with OFS’s actual practices. Finally, we recommended that OFS expedite the development of a comprehensive system of internal control over HAMP. We also recommended that Treasury develop a process to monitor the status of programs and identify any potential risk that announced programs would not have adequate funding. Most recently, we also recommended that OFS develop a means of systematically assessing servicers’ capacity to meet HAMP requirements during program admission.

Communication. In light of the backlash from Congress and others regarding Treasury’s initial shift in the program from purchases of mortgages and mortgage-backed securities to capitalization of financial institutions, we made a series of recommendations over the past year aimed at improving OFS’s communication with Congress and the public. While the theme has been constant, the recommendations have attempted to help ensure that Treasury develops a comprehensive communication strategy and clearly articulated vision for the program that goes beyond just providing information. 
We have recommended, for instance, that Treasury develop a communication strategy that includes building an understanding of and support for the various components of the program. Treasury continues to take steps to address these recommendations, including hiring a communications officer, integrating communications into TARP operations, scheduling regular and ongoing contact with congressional committees and members, and attempting to leverage technology. We made one recommendation that was not directed to Treasury. To help improve the transparency of CAP—in particular the stress test results—we recommended that the Director of Supervision and Regulation of the Federal Reserve consider periodically disclosing to the public the aggregate performance of the 19 bank holding companies against the more adverse scenario forecast for the duration of the 2-year forecast period and decide whether the scenario needs to be revised. At a minimum, the Federal Reserve should provide the aggregate performance data to OFS program staff for the 19 institutions participating in CAP or CPP. We are addressing these issues in ongoing work. TARP is one of many programs and activities the federal government has put in place over the past year to respond to the financial crisis. As of September 25, 2009, it had disbursed almost $364 billion to participating institutions. Participating institutions have in turn made billions of dollars in dividend, interest, and principal payments on loans, and some have started to repurchase their preferred shares and warrants. With the exception of CPP, which has hundreds of participants of various types and sizes, most of the other investment-based programs have provided substantial amounts of assistance to individual institutions. For example, AIG has received assistance under SSFI, and GM and Chrysler have received support through AIFP. 
Amid concerns about the direction of the program and lack of transparency, the new administration has attempted to provide a more strategic direction for using the remaining funds and created a number of programs aimed at stabilizing the securitization markets, preserving homeownership, and most recently at providing assistance to community banks and small businesses. Some programs, such as TALF—which is operated by FRBNY, with Treasury providing a backstop against losses—appear to be achieving the intended results, albeit at a reduced scale. Others, such as HAMP and PPIP, face ongoing implementation or operational challenges. Finally, over the past year OFS has also started to take steps to formalize its communication strategy and improve the way it communicates with Congress and the public.

In the past year, Treasury has implemented a range of TARP programs to stabilize the financial system. As of September 25, 2009, OFS had disbursed almost $364 billion for TARP loans and equity investments (table 1). Disbursements represent amounts actually paid to make troubled asset purchases or loans. Participating institutions have also paid Treasury billions of dollars in repurchases of preferred shares and warrants, dividend payments, and loan repayments. In general, Treasury’s authority to purchase, commit to purchase, or commit to guarantee troubled assets will expire on December 31, 2009. However, the Secretary of the Treasury, upon submission of a written certification to Congress, may extend these authorities to no later than October 3, 2010—2 years from the date of enactment. Based on the total prices of outstanding troubled asset purchases and outstanding commitments to purchase and the total face amount of outstanding guarantees as of September 25, 2009, almost $329 billion remains available under the almost $700 billion limit on Treasury’s authority to purchase or insure troubled assets; however, while Treasury has updated its projected use of funds for AGP and AIFP, it has not modified any of its estimates for the others despite changes in current market conditions, program participation rates, and repurchases since March 2009. For example, when Treasury updated its estimates in March 2009, it estimated that CPP participants’ repurchases would total about $25 billion, but almost three times that amount has been repurchased as of September 25, 2009. Moreover, questions remain about the projected use of funds associated with consumer and business lending initiatives and PPIP. While Treasury officials acknowledge they are currently reviewing potential changes to the projections for the future, they continue to believe that these estimates are appropriate program funding allocations given current market conditions. Without more meaningful estimates about projected uses of the remaining funds, Treasury’s ability to plan and effectively execute the next phase of the program will be limited. As shown in table 1, repurchases of preferred stock and repayments of loan principal have reduced the outstanding balance of the program. Specifically, 41 institutions, including 10 of the largest bank holding companies participating in TARP, had repurchased all or a portion of their preferred stock from Treasury for a total of about $70.7 billion as of September 25, 2009 (table 2). 
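The remaining purchase authority discussed above is straightforward arithmetic: the statutory limit less everything counted against it (outstanding purchases, outstanding commitments to purchase, and the face amount of outstanding guarantees). The sketch below uses the report's rounded figures; the breakdown among the three components is hypothetical, chosen only so the example sums to roughly the amount counted against the limit.

```python
# All figures in billions of dollars. The almost-$700 billion limit and the
# almost-$329 billion remaining come from the report; the breakdown below
# among purchases, commitments, and guarantees is illustrative only.
PURCHASE_AUTHORITY_LIMIT = 700

def remaining_authority(outstanding_purchases, outstanding_commitments,
                        guarantee_face_amount, limit=PURCHASE_AUTHORITY_LIMIT):
    """Remaining authority = statutory limit minus all amounts counted against it."""
    return limit - (outstanding_purchases
                    + outstanding_commitments
                    + guarantee_face_amount)

# Hypothetical breakdown summing to the roughly $371 billion counted
# against the limit as of September 25, 2009:
print(remaining_authority(300, 66, 5))  # -> 329
```

Note that, consistent with the act, warrant repurchase proceeds go to the general fund and do not restore authority under this calculation, whereas preferred stock repurchases reduce the outstanding balance counted against the limit.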
While the decision to allow an institution to repurchase its preferred shares rests with its primary federal regulator, we continue to believe that Treasury, as administrator of CPP, has a responsibility to help ensure that institutions are being treated consistently and that the regulators are applying generally consistent criteria when reviewing TARP participants' requests to repurchase their preferred shares. A number of participants have also started to repurchase their warrants and preferred stock obtained through the exercise of warrants. However, unlike preferred stock, these amounts are deposited in the general fund of the U.S. Treasury and are not to be used to reduce the outstanding troubled assets counted against the almost $700 billion limit, as required by the act (see table 2). Specifically, as of September 25, 2009, 20 financial institutions had repurchased their warrants, and 3 had repurchased their warrant preferred stock from Treasury, at an aggregate cost of about $2.9 billion. While the first warrants repurchased were valued using a valuation process agreed to by Treasury, some in the industry have suggested that an auction process may represent the best method for Treasury to realize the market value of the warrants and provide a more transparent process. Treasury announced in June 2009 that it would auction certain warrants but has yet to establish guidelines for how the auction process will work. Treasury and others have noted that an auction method may not necessarily yield the best price for the federal government. As of September 25, 2009, Treasury had not yet auctioned any securities. As of September 25, 2009, Treasury had received approximately $9.2 billion in dividend payments on shares of preferred stock acquired through CPP, TIP, AIFP, and AGP (table 3). Treasury's agreements under these programs entitled it to receive dividend payments on varying terms and at varying rates.
The dividend payments to Treasury are contingent on each institution declaring dividends. However, AIG—the sole participant in SSFI—had not declared dividends and therefore had not made any of its three scheduled dividend payments as of September 25, 2009. Treasury borrows funds to finance the gap between the federal government’s revenues and outlays and is subject to a statutory debt limit. Because Treasury must borrow the funds disbursed, TARP and other actions taken to stabilize the financial markets increase the federal debt and result in related borrowing costs in the form of interest. Because Treasury manages its cash position and debt issuances from a governmentwide perspective, it is generally not possible to match TARP disbursements with specific debt securities issued by Treasury and the related borrowing costs. Moreover, Treasury typically does not calculate the federal government’s borrowing cost related to specific disbursements, including the net disbursements for TARP. However, Treasury provided us with an unaudited estimate of approximately $2.3 billion in borrowing costs based on certain assumptions relating to the net disbursements for TARP from TARP’s inception through September 30, 2009. Using different assumptions would result in different estimated borrowing costs. Treasury’s estimation of the federal government’s borrowing costs for TARP does not represent the ultimate costs of TARP and does not consider, among other things, the intra-governmental interest that TARP incurs. Over the past year, CPP—the largest and most widely used program under Treasury’s TARP authority for stabilizing the financial markets—made investments in large publicly held financial institutions quickly but faced challenges and delays in developing standard terms for investing in smaller, nonpublic institutions. As of September 25, 2009, Treasury had provided capital to 685 financial institutions through CPP. 
Treasury has extended the CPP application deadline for small banks and increased the amount of investment they can receive to encourage participation, but the number of CPP disbursements has decreased dramatically. Investments that are being made are going to relatively small banks. For example, the average investment size for the 9 institutions funded in August 2009 was $14.4 million, compared with the average investment size of $121 million for the 147 institutions funded in January 2009. According to Treasury, over 430 institutions have withdrawn their applications to CPP after receiving approval for funding. Also, federal banking regulators’ data show that over 1,800 applications have been withdrawn since the start of the program. The number of approved institutions withdrawing increased earlier this year, in part because of uncertainties about program requirements (e.g., changes to executive compensation). Anecdotally, some of the reasons cited have included increased confidence in the financial condition of banks and, for smaller institutions, the relatively high cost of closing CPP transactions. We are continuing to review the process used to assess applications for CPP funding to determine the extent to which Treasury consistently applied established criteria and adequately documented the regulators’ recommendations and its final decisions. The results of this review will be discussed in a subsequent report. Over the last year, and consistent with our recommendation that Treasury bolster its ability to determine whether institutions’ activities are generally consistent with the act’s purposes, Treasury and federal banking regulators have made progress in monitoring the activities of CPP participants. Specifically: Using its new monthly lending surveys, in February 2009 Treasury began to publish detailed information for the 20 largest CPP institutions. In April 2009, it added questions addressing their small business lending activity to the survey. 
In June 2009, Treasury began to publish basic information from all CPP participating institutions. The monthly surveys are an important step toward greater transparency and accountability for institutions of all sizes. In August 2009, Treasury and the bank regulators began publishing a quarterly analysis of regulatory financial data for CPP and non-CPP institutions that focuses on three broad categories: on- and off-balance sheet items, performance ratios, and asset quality measures. Moreover, the largest CPP institutions that have repurchased their preferred shares and warrants have agreed to voluntarily provide lending information through 2009. These data will enable Treasury and others to monitor the institutions' lending activities following the repurchase of their shares. In the past year, the federal banking regulators have taken steps to help ensure compliance with the CPP agreements and other TARP requirements, as we recommended. All of the federal banking regulators have issued or are finalizing examiner guidance and procedures for assessing institutions' compliance with CPP and other TARP requirements. For example, in March 2009 OCC issued a supervisory memorandum to all examination staff that provided specific forms, checklists, and guidance for assessing compliance with CPP and TARP requirements. The regulators plan to examine institutions' compliance with TARP requirements on executive compensation, dividend payments, and stock repurchases as part of routine examinations. Three of the four banking regulators had conducted 351 examinations as of September 2009 that included checking for compliance with CPP and TARP requirements. According to these regulators, the institutions examined were generally in compliance with the requirements. Treasury also has hired three asset management firms to provide market advice about its portfolio of investments in financial institutions participating in various TARP programs.
Consistent with our recommendation that Treasury increase its oversight of compliance with terms of the CPP agreements, including limits on dividends and stock repurchases, these managers are responsible for helping OFS monitor compliance with these terms. However, Treasury has yet to finalize the specific guidance and performance measures for the asset managers’ oversight responsibilities or identify the process for monitoring the asset managers’ performance. We plan to continue monitoring this area. As of September 25, 2009, no funds had been expended under CAP. The Federal Reserve’s stress tests of the 19 largest bank holding companies in May 2009 identified 10 bank holding companies that needed to raise approximately $75 billion in additional capital. According to FinSOB, this result was better than the markets anticipated and helped boost the markets’ confidence in the largest banks. By September 25, 2009, these 10 institutions had raised about $79 billion in capital, and 9 institutions had successfully raised the full amount required by the stress test. While the program is open to other institutions that did not participate in the stress test, the extent to which these other institutions will choose to participate in the program appears limited. Treasury extended the CAP application deadline from May 25, 2009, to November 9, 2009. As of September 25, 2009, Treasury had not received any CAP applications. However, regulators said that they had begun to receive CAP applications. Early in the implementation of TARP, Treasury announced that it was providing what it refers to as “exceptional assistance” to three institutions deemed to be critically important to financial markets and subsequently created three programs—TIP, SSFI, and AGP—to provide that assistance. TIP investments in Bank of America and Citigroup, Inc. in January 2009 and December 2008, respectively, followed the institutions’ participation in CPP. 
In addition, OFS provided assistance to AIG under SSFI. Treasury officials said that they did not expect to have to use these programs again if economic conditions and market stability continued to improve. The Consumer and Business Lending Initiative announced in February as part of the Financial Stability Plan consists of two programs: the Federal Reserve's TALF, operated primarily by FRBNY, and a program to directly purchase securities backed by SBA-guaranteed small business loans that has yet to materialize. A separate program, PPIP, which is being implemented in cooperation with the Federal Reserve and FDIC, is intended to invest in funds that provide a market for the legacy loans and securities that currently burden the financial system. TALF participants and market observers also told us that TALF financing terms had become less favorable as credit markets stabilized, making TALF less appealing. Small Business Lending. Treasury has yet to begin purchasing securities backed by SBA-guaranteed small business loans as part of the Consumer and Business Lending Initiative. Initially, Treasury anticipated purchasing securities backed by SBA section 7(a) guaranteed loans by the end of March 2009 and securities backed by SBA section 504 loan guarantees by the end of May 2009; however, no purchases had been made as of September 21, 2009. Several factors have been cited to explain this delay. First, a Treasury official told us that some participants in the SBA loan markets said they did not want to sell SBA-guaranteed securities to Treasury if doing so would require them to provide warrants to Treasury and to comply with executive compensation restrictions. Second, some market participants said that this program might not be as helpful to the SBA loan market as initiatives by SBA, because SBA efforts included reductions in fees and increases in guarantees for the 7(a) program that had been helpful.
One major market participant also noted that high participation in Treasury's direct purchase program was unlikely, in part because of the requirements of the act noted above. Public-Private Investment Program. Treasury announced PPIP in March 2009 to help add liquidity to the market for legacy assets (both securities and loans), to allow banks and other financial institutions to free up capital, and to stimulate the extension of new credit. Treasury continues to take steps to implement the Legacy Securities Program, and FDIC has continued to develop the Legacy Loans Program by conducting a pilot sale of receivership assets to test the funding mechanism contemplated for this program. Some market participants and observers we spoke with in the summer of 2009 told us that while the problem of toxic assets remained, there have been delays in launching PPIP. These individuals, Treasury, and FDIC cited rising investor confidence following the stress test results and successful capital-raising by financial institutions as one of the main reasons. In addition, banks have had increasing incentives to hold troubled assets in the short term, rather than selling them and taking losses now, in the hope that such assets will perform better in the future. Treasury officials also noted the difficulty of measuring the impact of the program announcement on markets. Nevertheless, Treasury officials noted the financial market's positive reaction when the program was announced and said that they continue to believe that the program is important to further bolstering financial markets. Treasury officials stated that as of October 5, 2009, five of the nine pre-qualified funds had raised at least the minimum $500 million to qualify to invest in legacy securities. The first two legacy securities PPIP funds closed on September 30, 2009. AIFP has provided assistance to Chrysler, GM, auto suppliers, and auto finance companies in an effort to assist the failing domestic automotive industry.
Over the past year, Chrysler and GM underwent bankruptcy reorganization and streamlined their operations by closing factories and reducing the number of dealerships. However, whether the reorganized Chrysler and GM will achieve long-term financial viability remains unclear. In addition to funding provided under AIFP, the federal government has launched other programs to help the automotive industry. In particular, the Department of Transportation's Car Allowance Rebate System program ("Cash for Clunkers") provided nearly $3 billion in rebates to consumers who purchased more fuel-efficient vehicles, and the Department of Energy's Advanced Technology Vehicles Manufacturing Incentive Program has provided loans for the development of motors and components that use advanced technologies. Automotive and financial experts we spoke with as part of our ongoing monitoring of AIFP agree that the federal government-provided funding likely increased Chrysler's and GM's odds of attaining financial success but said that other factors would affect the outcome, including consumer preferences, the strength of the economy, and the success of the companies in continuing to increase their profitability. We are continuing to evaluate Treasury's exit strategy for AIFP and the impact of the assistance on pensions and plan to report on these issues in future reports. HAMP faces a significant challenge that centers on uncertainty over the number of homeowners it will ultimately help. Residential mortgage defaults and foreclosures are at historic highs, and Treasury officials and others have identified reducing the number of unnecessary foreclosures as critical to the current economic recovery. In our July 2009 report, we noted that Treasury's estimate that it would likely help 3 to 4 million homeowners under the HAMP loan modification program may have been overstated.
Further, we and others have raised concerns about the capacity and consistency of servicers participating in HAMP in offering loan modifications to qualified homeowners facing potential foreclosure. Treasury has taken some actions to encourage servicers to increase the number of modifications made, including sending a letter to participating HAMP servicers and meeting with them to discuss challenges to making modifications. However, the ultimate result of Treasury’s actions to increase the number of HAMP loan modifications and the corresponding impact on stabilizing the housing market remains to be seen. Treasury faces a number of other challenges in implementing HAMP, including ensuring that decisions to deny or approve a loan modification are transparent to borrowers and establishing an effective system of operational controls to oversee the compliance of participating servicers with HAMP guidelines. In July 2009, we made several recommendations to Treasury concerning HAMP. Among other things, we recommended actions to monitor particular program requirements, re-evaluate and review certain program components and assumptions, and finalize a comprehensive system of internal control over HAMP. Treasury noted that it would take various actions in response to our recommendations, such as exploring options to monitor counseling requirements and working to refine its internal controls over the program. We plan to continue to monitor Treasury’s responses to our recommendations as part of our ongoing work on HAMP. In our July report, we also noted that Treasury lacked a way to assess, during the admission process, the capacity of servicers to meet program requirements. Recently, Treasury reported significant variations across participating servicers in the number of trial modifications started as a percent of estimated eligible loans (those delinquent by at least 60 days). 
To encourage servicers to increase the number of modifications they were making, Treasury and the Department of Housing and Urban Development sent a letter to participating HAMP servicers in July 2009 asking them to expand their capacity to make modifications. Treasury also subsequently held a meeting with servicers to discuss challenges to making modifications and strategies to improve the program’s effectiveness. Since Treasury’s unexpected shift soon after the act was passed toward making capital investments in financial institutions rather than purchasing the mortgages and mortgage-related assets on their books, Treasury has struggled to improve the transparency of the program and effectively communicate a strategic vision for TARP. Over the last year, Treasury has posted information on its Web site; announced decisions in press releases, press conferences, and speeches; and testified at congressional hearings. But these efforts, although intended to help ensure that TARP programs and decisions are transparent, have not always been effective in communicating Treasury’s rationale for certain decisions or in addressing confusion and concerns about the program. As discussed previously, we made a series of recommendations aimed at improving the transparency of TARP, including establishing more effective communication with Congress and the public and developing a clearly articulated strategy for the program, among other things. Over the last several months, Treasury has taken steps to improve its communication efforts, including releasing the Financial Stability Plan in February 2009; launching its FinancialStability.gov Web site in March 2009; and, in August 2009, adding a usability survey on its FinancialStability.gov Web site to gauge user satisfaction and gather input on the quality of users’ experience navigating the site. 
Moreover, OFS has formed a working group to help ensure that Treasury’s communication strategy addresses both internal and external communications and that appropriate staff are being hired to support the strategy. Treasury officials told us that key components of the strategy included (1) coordinating communication among OFS and Treasury’s Office of Public Affairs and Office of Legislative Affairs to help ensure that congressional and other external stakeholders received timely information, (2) continuously improving the financial stability Web site, and (3) conducting outreach across the country on the homeownership preservation programs. To support these efforts, Treasury is planning to hire a communications director for OFS once it completes a position description of duties and responsibilities. Treasury has already hired a communications director and four staff members to support its efforts to communicate with the public and Congress on the homeownership programs. These ongoing efforts should help address the concerns about Treasury’s communication on TARP issues that we noted in earlier reports. As we recommended in our December 2008 report, Treasury has expeditiously hired OFS staff to administer TARP duties. Over the last year, the total number of OFS staff has quadrupled, rising from 48 in November 2008 to 196 as of September 15, 2009 (see fig. 2). Moreover, OFS has relied increasingly on permanent staff rather than detailees. For example, OFS increased the number of permanent staff from 5 in November 2008 to 184 as of September 15, 2009, while the number of detailees fell from 43 in November 2008 to 12 as of September 15, 2009. While Treasury has made progress in establishing OFS and filling many positions, it continues to face hiring challenges. Treasury officials said that the direct-hire authority authorized by TARP had been helpful in bringing staff on board expeditiously. 
OFS has increased its estimate of the number of full-time staff that it needs based on changes to TARP and currently estimates that it will need 283 full-time equivalents for fiscal year 2010 to operate at full capacity. Most of the increase in the estimate of full capacity is attributable to anticipated needs in the Homeownership Preservation, Investment, and Compliance offices and staff for the Director of Internal Review. In addition, the Assistant Secretary for Financial Stability has continued to develop OFS’s organizational structure. For example, the Assistant Secretary is considering establishing a Director of OFS Internal Review who will help oversee internal control and compliance procedures and liaise with oversight entities. OFS has experienced challenges finding permanent staff for some of its key senior positions, specifically the Chief Homeownership Preservation Officer and the Chief Investment Officer. The Chief Homeownership Preservation Officer position has been filled by two successive interim appointments, and the Director of Operations is currently serving as the acting chief. Similarly, the Director of Investments has been serving as the acting Chief Investment Officer since the interim chief left in June 2009. In our July report, we emphasized that the lack of a permanent head of the Homeownership Preservation Office (along with the number of vacancies in the office itself) could impact Treasury’s ability to effectively monitor HAMP and recommended that these staffing needs be given high priority. Treasury has hired an executive search firm to recruit candidates for these leadership positions, potentially facilitating the process of identifying qualified applicants but also adding additional time to the hiring process. The Assistant Secretary is reassessing the duties of the Chief Operating Officer and the need for the position, which is currently vacant, to bring them in line with TARP’s current needs before filling the position. 
After nearly a year, the number of private contracts and financial agency agreements Treasury uses as part of OFS’s management infrastructure has grown from 11 to 52. Treasury has primarily used two mechanisms for engaging private sector firms. First, as of September 18, 2009, Treasury has exercised its statutory authority to retain seven financial agents to provide services such as managing TARP’s public assets. Second, Treasury has entered into contracts and blanket purchase agreements under the Federal Acquisition Regulation (FAR) for a variety of legal and accounting services, investment consulting, and other services and supplies. In some cases, interagency agreements (IAA) are also used in support of OFS’s administration and operations for TARP to engage vendors that have existing contracts with other Treasury offices or bureaus or other federal agencies. As of September 18, 2009, Treasury had 39 contracts and blanket purchase agreements and six IAAs. Legal services contracts and financial agency agreements accounted for 57 percent of the service providers directly supporting OFS’s administration of TARP. For contracts and agreements in place through August 31, 2009, Treasury reports incurring a total of $110.2 million in expenses. The potential value of all 52 TARP support agreements and contracts—some completed and some scheduled to run until June 2014—totals about $601.6 million. The share of work by small businesses—including minority- and women- owned businesses—under TARP contracts and financial agency agreements has grown substantially since November 2008, when only one of Treasury’s prime contracts was with a small business and only one minority small business firm was a subcontractor with a large business contractor. From the outset, Treasury has encouraged small businesses to pursue procurement opportunities for TARP contracts and financial agency agreements. 
As shown in table 4, eight of Treasury’s prime contracts and financial agency agreements are with small and/or minority- and women-owned businesses. The majority of these businesses are subcontractors to TARP prime contractors. Treasury’s reliance on private sector resources to assist OFS with implementing TARP underscores the importance of addressing conflicts- of-interest issues. As required by the act, in January 2009 Treasury issued an interim regulation on TARP conflicts of interest, which was effective immediately. With this action, Treasury put in place a set of clear requirements to address actual or potential conflicts that may arise during the selection of retained entities seeking a contract or financial agency agreement with the Treasury, particularly those involved in the acquisition, valuation, management, and disposition of troubled assets. Since January 2009, OFS’s Chief Risk and Compliance Office has been actively renegotiating several contracts that predated the TARP conflicts- of-interest rulemaking to enhance specificity and conformity with the regulations. To date, conflicts-of-interest provisions and approved mitigation plans have been renegotiated for three of the six contracts, as shown in table 5. According to Treasury, the complex nature of these contracts and business relations with other firms means that significant time is required to develop mitigation plans that appropriately meet the provisions of the regulations, and as a result these plans are in various stages of renegotiation. Since March 2009, and consistent with our recommendation, Treasury has strengthened guidance and procedures requiring that key communications and decisions concerning potential or actual vendor-related conflicts of interest be documented. In an effort to improve the monitoring of contracts and formally document conflict-of-interest processes, Treasury has taken several steps. 
For example, it has developed and implemented conflicts-of-interest procedures and distributed guidance documents to Treasury contracting staff and TARP contractors and financial agents that include detailed workflow charts depicting the standardized processes for the review and disposition of conflict-of-interest inquiries. Also, Treasury has finished implementing an improved internal reporting database for documenting and tracking all conflict-of-interest inquiries and requests for conflicts-of-interest waivers. Treasury’s guidance was sent to contractors and financial agents in early July, along with a request that all inquiries related to conflicts of interest be submitted via email to the “TARP.COI” mailbox created in April 2009 for contractors and financial agents to document communications to Treasury. Although Treasury has an appropriate management infrastructure in place, it must remain vigilant in managing and monitoring conflicts of interest that may arise with the use of private sector resources. As required by the act, Treasury must annually prepare and submit to Congress and the public audited fiscal year financial statements for TARP that are prepared in accordance with generally accepted accounting principles. Moreover, the act requires Treasury to establish and maintain an effective system of internal control over TARP that provides reasonable assurance of achieving three objectives: (1) reliable financial reporting, including financial statements and other reports for internal and external use; (2) compliance with applicable laws and regulations; and (3) effective and efficient operations, including the use of TARP resources. Accordingly, OFS continues to develop a comprehensive system of internal control for TARP. The fiscal year ending September 30, 2009, will be the first period for which Treasury prepares financial statements for TARP. 
The act requires that Treasury assess and report annually on the effectiveness of TARP’s internal controls over financial reporting. The act also requires GAO to audit TARP’s financial statements annually in accordance with generally accepted auditing standards. We are currently performing an audit of TARP’s financial statements and the related internal controls. Our objectives are to render opinions on (1) the financial statements as of and for the period ending September 30, 2009; and (2) internal controls over both financial reporting and compliance with applicable laws and regulations as of September 30, 2009. We will also be reporting on the results of our tests of TARP’s compliance with selected provisions of laws and regulations related to financial reporting. The results of our financial statement audit will be published in a separate report. Although isolating and estimating the effect of TARP programs remains a challenging endeavor, the indicators that we have been monitoring over the last year suggest that there have been broad improvements in credit markets since the announcement of CPP, the first TARP program. Specifically, we found that: the cost of credit and perceptions of risk declined significantly in interbank, corporate debt, and mortgage markets; the decline in perceptions of risk (as measured by premiums over Treasury securities) in the interbank market could be attributed in part to several federal programs aimed at stabilizing markets that were announced on October 14, 2008, including CPP; and institutions that received CPP funds in the first quarter of 2009 saw more improvement in their capital positions than banks outside the program. Acting on GAO’s recommendation that Treasury collect information about the impact of its investments on participants’ lending activities, Treasury implemented a monthly survey. 
Our analysis of the surveys, which cover October 2008 through July 2009, shows that collectively the 21 largest participants reported extending almost $2.3 trillion in new loans since receiving $160 billion in CPP capital from the Treasury. Although lending standards remained tight, new lending by these institutions increased from $240 billion a month during the fourth quarter of 2008 to roughly $287 billion a month in the second quarter of 2009. Because loan origination data is not available for most banking institutions—including CPP recipients outside of the largest institutions—the ability to perform more rigorous analysis to determine the extent to which the increased lending could be attributed to TARP is limited. Consistent with the intent of TALF, asset-backed security issuance has recently shown signs of a slight recovery. While foreclosures continued to increase, it is too soon to judge the effects of the HAMP program. Treasury recently released a report that discusses the next phase of its stabilization and rehabilitation efforts. Treasury's report also begins to establish a framework that could provide a basis for deciding whether any further actions will be necessary to assist in financial stabilization after its authority to purchase or insure additional troubled assets expires on December 31, 2009 (unless it is extended through October 3, 2010). As it decides the future of TARP, Treasury will need to document and communicate its reasoning to Congress and the American people in order for its decisions to be viewed as credible. Continuing to develop its quantitative indicators of market conditions to benchmark TARP programs and its measures of program effectiveness would support Treasury in this process. In our reports since December 2008, we have highlighted the intended effects of several broad-based TARP programs, including CPP, CAP, TALF, PPIP, and HAMP.
Chief among these intended effects was to stabilize and return confidence to the financial system. We paid particular attention to developments in the interbank market by monitoring the London Interbank Offered Rate (LIBOR), which is the cost of interbank credit, and the TED spread, which captures the risk perceived in interbank markets and gauges the willingness of banks to lend to other banks (see fig. 3). As figure 3 shows, LIBOR increased significantly in September 2008, and, more importantly, banks began to pay an even higher premium for loans to compensate for the perceived increase in default risk. After widening somewhat following the first major subprime mortgage write-down and the Bear Stearns rescue, the TED spread increased significantly in the days following the bankruptcy of Lehman Brothers and other adverse events, exceeding 4.5 percent (450 basis points) at its highest point. However, since the announcement of CPP and other interventions in October 2008, the 3-month LIBOR and TED spread have fallen by more than 430 basis points. About 60 basis points of that decline occurred after the announcement of the stress test results associated with CAP in May 2009. To examine whether the decline in the TED spread could be attributed in part to TARP, we conducted additional analysis using a simple econometric model, which took into account the possibility that the spread would have narrowed without the intervention. We did not attempt to account for all the important factors that might influence the TED spread. Because the TED spread reached extreme values leading up to the CPP announcement (more than 450 basis points), it could have declined even in the absence of CPP, simply because extreme values have a tendency to return to normal levels.
Even when we accounted for this possibility and for other factors that might influence the interbank market, we found that the October 14, 2008, announcement of the CPP had a statistically significant impact on changes in the TED spread. Nevertheless, the associated improvement in the TED spread (or LIBOR) cannot be attributed solely to TARP because the October 14, 2008, announcement was a joint announcement that also introduced the Federal Reserve’s Commercial Paper Funding Facility program and FDIC’s Temporary Liquidity Guarantee Program. More broadly, the programs established under TARP, if effective, should have jointly resulted in improvements in general credit market conditions, including declining risk premiums and lower borrowing costs for nonbank businesses and consumers. In the month leading up to the CPP announcement, market interest rates and spreads reflected a significant tightening in credit conditions as investors, worried about the health of the economy, became increasingly risk averse. The indicators that we have been monitoring illustrate that since mid-October 2008 the cost of credit and perceptions of risk (measured by premiums over Treasury securities) have declined significantly, not only in interbank markets but also in corporate debt and mortgage markets (see table 6). Recent trends in these measures are consistent with those for other indicators that we and other researchers have monitored. For example, stock market volatility has fallen considerably, and the credit default swap index for the banking sector has declined significantly since TARP actions began. Even taken collectively, though, changes in these indicators are an imperfect way to measure TARP’s impact, as they may also be influenced by general market forces and cannot be exclusively linked to any one program or action. 
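The mean-reversion adjustment described above can be illustrated with a small regression sketch. This is not the model GAO actually estimated, and the data are simulated; all figures, the day-120 announcement date, and the coefficient values are purely illustrative. Daily changes in the spread are regressed on the lagged level of the spread (capturing reversion toward normal levels) and a post-announcement indicator (capturing any additional intervention effect).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a daily TED-spread-like series (basis points). It starts at an
# elevated level, reverts toward 100 bp before a hypothetical announcement
# on day 120, and toward 40 bp afterward. All values are illustrative.
n, announce = 200, 120
ted = np.empty(n)
ted[0] = 450.0
for t in range(1, n):
    target = 40.0 if t >= announce else 100.0
    ted[t] = ted[t - 1] + 0.1 * (target - ted[t - 1]) + rng.normal(0, 5)

# Regress daily changes on the lagged level (mean reversion) and a
# post-announcement dummy (intervention effect):
#   dTED_t = a + b * TED_{t-1} + c * D_t + e_t
d_ted = np.diff(ted)
lagged = ted[:-1]
dummy = (np.arange(1, n) >= announce).astype(float)
X = np.column_stack([np.ones(n - 1), lagged, dummy])
(a, b, c), *_ = np.linalg.lstsq(X, d_ted, rcond=None)
print(f"mean-reversion coefficient b = {b:.3f}")  # negative: spread reverts
print(f"announcement effect       c = {c:.2f} bp")  # negative: extra decline
```

A significantly negative announcement coefficient after controlling for the lagged level is the kind of evidence the report describes; without the lagged term, ordinary mean reversion from extreme values could be mistaken for an intervention effect.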
One of the intentions of TARP, and specifically of CPP, was to improve banks’ balance sheets, enhance lenders’ ability to borrow, raise capital, and lend to creditworthy borrowers. Capital ratios at institutions that received CPP capital in the first quarter of 2009 rose more than capital ratios at non-CPP institutions between December 31, 2008, and March 31, 2009. This difference holds across several measures of capital adequacy (see table 7). Improved confidence in the interbank market may to some degree reflect the increased capital ratios at institutions that received CPP funding, as these ratios are important indicators of solvency—that is, the higher the ratio, the more solvent the institution. As we have discussed in previous reports, tension exists between promoting lending and improving banks’ capital position. We noted that some institutions likely would use CPP capital to improve their capital ratios by holding the additional capital as treasuries or other safe assets rather than leveraging it to support additional lending. Using the capital in this manner could allow institutions to absorb losses or write down troubled assets. Recent trends in lending suggest that CPP capital infusions may have made participating banks somewhat more willing and able to increase lending to creditworthy businesses and consumers, although lending standards for consumer and business credit remain tight. Our analysis of Treasury’s loan surveys showed that these CPP recipients reported an increase in new lending to consumers and businesses to, on average, $287 billion a month in the second quarter of 2009, up $47 billion from $240 billion a month in the fourth quarter of 2008 (see fig. 4). These findings are consistent with the trends in aggregate mortgage originations, which more than doubled between the fourth quarter of 2008 and the end of the second quarter of 2009 to $550 billion. 
Table 8 documents the total amount of new consumer and business lending for each institution that received CPP funds. Despite tight lending standards and the usual drop in credit flows during recessions, collectively the top 21 institutions participating in CPP have reported extending almost $2.3 trillion in new loans since receiving CPP capital totaling $160 billion. While lending typically falls during a recession, recent research by the Federal Reserve concluded that through the first quarter of 2009, the contraction in commercial mortgages, nonfinancial business credit, and consumer credit did not appear to be particularly severe relative to contractions in these types of lending in other downturns. However, the contraction in residential mortgage lending has exceeded past downturns. Data limitations may prevent a reliable comparison of lending volumes across institutions of different sizes and between CPP and non-CPP participants. For the hundreds of smaller financial institutions receiving CPP funds, the only lending information provided was based on the value of loan balances and thus was not comparable to the more detailed data for large CPP recipients. Similarly, only comparative balance sheet data is available for non-CPP institutions. Although balance sheet data—which is available for all banking institutions—could be useful for comparing capital ratios, our quantitative work suggests that loan balances may not be a good proxy for lending activity, at least for the third quarter of 2008 to the first quarter of 2009. Specifically, we found the correlations between new lending and changes in loan balances to be relatively low over this period. A number of factors can affect loan balances that are unrelated to new lending, including merger activity, changes in the value of existing loans (e.g., realizing losses on a loan portfolio), and loan payoffs as borrowers attempt to reduce debt burdens. 
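The low-correlation finding can be illustrated with a toy calculation. The figures below are simulated, not Treasury survey data: when balance changes mix originations with unrelated factors such as payoffs, charge-offs, and merger activity, the correlation between new lending and balance changes can be weak even though origination volumes are substantial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical quarterly figures for 21 institutions ($ billions).
n_banks = 21
new_lending = rng.uniform(5, 25, n_banks)        # new loan originations
# Balance changes reflect originations only weakly, swamped by payoffs,
# charge-offs, merger activity, and revaluations of existing loans.
unrelated = rng.normal(0, 15, n_banks)
balance_change = 0.2 * new_lending + unrelated

r = np.corrcoef(new_lending, balance_change)[0, 1]
print(f"correlation between new lending and balance changes: {r:.2f}")
```

In a setup like this the sample correlation sits near zero, which is why, as the report notes, loan balances can be a poor proxy for lending activity.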
Banks could, for example, undertake significant origination activity and still see a drop in the total value of loans that they held. As a result, it is difficult to determine CPP’s specific impact on lending activity in a more rigorous way. The primary goal of TALF, as designed and operated by the Federal Reserve, is to make credit more readily available to households and small businesses by increasing liquidity and improving conditions in ABS markets. Investors requested $51.7 billion in TALF loans between the start of the program in March of 2009 and September 2009. As figure 5 indicates, ABS activity has begun to rebound somewhat after reaching zero for several types of issuances in the fourth quarter of 2008. While aggregate issuance is still down significantly from 2007, non-mortgage-related ABS issuance rose to $47.9 billion in the second quarter of 2009 from $3.3 billion in the fourth quarter of 2008. ABS backed by home equity loans, which are not eligible for TALF assistance, increased to just $71 million from $17 million over the same period, although whether a significant increase would be expected given the turmoil in mortgage markets is not clear. As we discussed in previous reports, TALF support to securitization markets should, if effective, increase the availability of new credit to consumers and businesses, lowering rates on credit card, automobile, small business, student, commercial mortgage, and other types of loans traditionally facilitated by securitization. From November 2008 to May 2009, the average rate on automobile loans from finance companies declined significantly (296 basis points) to 3.5 percent, well below the bank rate, which fell only 26 basis points to 6.8 percent. While these declines correlate with the launching of TALF, the federal government’s support of GM and Chrysler also likely played a role in alleviating liquidity constraints at finance companies. 
Because stand-alone auto finance companies rely more heavily on securitization than commercial banks, the differences in the trends in their automobile loan rates could partially reflect the issues in securitization markets that TALF was intended to address. After initially providing funding to certain holders of AAA-rated ABS backed by newly and recently originated consumer and small business loans, TALF has been expanded to other assets, including commercial MBS. We will continue to monitor ABS activity, interest rates on consumer and business loans and other TALF-eligible securities, as well as ABS spreads. In future reports, we will address the effectiveness of the more recently initiated financial stability programs, using indicators and auxiliary quantitative work. These programs are intended to address rising foreclosures and the condition of the housing market (HAMP) as well as the legacy loans and securities that are widely held to be the root cause of the deteriorating conditions of many financial institutions (PPIP). Foreclosure data, although also influenced by general market forces such as falling housing prices and unemployment, should provide an indication of the effectiveness of HAMP. Although it is too soon to expect any HAMP-related improvements, because HAMP was only recently implemented, we have monitored foreclosure rates over the past year. While the average foreclosure rate from 1979 to 2006 was less than 1 percent, the percentage of loans in foreclosure reached an unprecedented high of 4.3 percent at the end of the second quarter of 2009, up from 3.3 percent in the fourth quarter of 2008 (see table 6). Over the same period, the foreclosure rate on subprime loans rose to 15.1 percent from 13.7 percent (the rate for adjustable-rate subprime loans is now more than 24 percent). As discussed in our March 2009 report, Treasury introduced PPIP to facilitate the purchase of legacy loans and securities.
PPIP’s impact will depend largely on the pricing of the purchased assets. Sufficiently high prices will allow financial institutions to sell assets, deleverage, and improve their capital adequacy, but overpaying for these assets could have negative implications for taxpayers. In addition to providing more transparent pricing for assets, PPIP—if it is effective—should improve solvency at participating institutions and others holding those assets, reduce uncertainty about their balance sheets, and improve investor confidence. If it does, the institutions will be able to borrow and lend at lower rates and raise additional capital from the private sector. But PPIP is in the initial stages of implementation, and it is too early to expect effects on related markets. While TARP’s activities could improve market confidence in participating banks and have other beneficial effects on credit markets, we have also noted in our previous reports that several factors will complicate efforts to measure any impact. For example, any changes attributed to TARP could well be changes that: would have occurred anyway; can be attributed to other policy interventions, such as the actions of FDIC, the Federal Reserve, or other financial regulators; or were enhanced or counteracted by other market forces, such as the correction in housing markets and revaluation of mortgage-related assets. Consideration of market forces is particularly important when using bank lending as a measure of CPP’s and CAP’s success, because it is not clear what would have happened in the absence of TARP. Weaknesses in the balance sheets of financial intermediaries, a decline in the demand for credit, reduced creditworthiness among borrowers, and other market fundamentals suggest lower lending activity than would be expected in normal times.
Similarly, nonbank financial institutions, which have accounted for a significant portion of lending activity over the last two decades, have been constrained due to weak securitization markets. Lastly, because the extension of credit to less-than-creditworthy borrowers appears to have been an important factor in the current financial crisis, it is not clear that lending should return to precrisis levels. Similar difficulties arise in using foreclosure data as a measure of HAMP’s success, especially given the rising unemployment rate and the number of homeowners who may have taken on mortgage-related debt beyond prudent levels. While Treasury is beginning to establish a case for exiting from some emergency programs and maintaining others, it has not fully established a comprehensive framework that will provide a basis for making transparent decisions about which TARP-specific actions are necessary or how those programs will be evaluated. Treasury’s authority to purchase or insure additional troubled assets will expire on December 31, 2009, unless the Secretary submits a written certification to Congress explaining why an extension is necessary and how much it is expected to cost. For this reason, Treasury will need to make decisions about providing new funding and maintaining existing funding for TARP programs in the next few months. It will need to do this in light of current and expected market conditions, and it will need to communicate its determinations to Congress and the American people. Treasury has recently released a report that begins to discuss the next phase of its stabilization and rehabilitation efforts—a discussion that may be a starting point for deciding whether any further actions are necessary to stabilize financial markets and the first step in establishing a framework for such actions. 
The report describes the drop in utilization of some programs as financial conditions normalize and confidence in financial markets improves and identifies a number of financial market indicators. Treasury also notes that it will need to ensure the continuation of some policies and programs that it believes are needed for financial and economic recovery. However, Treasury has yet to take all the steps needed to provide a basis for deciding whether or not to provide new funding for TARP. For example, while some rationale is provided for continued HAMP and TALF action, none is provided for PPIP. Without a robust analytic framework, Treasury may be challenged in effectively carrying out the next phase of its programs. For the decision-making process to be viewed as credible, Treasury will need to document and communicate the basis for its decisions. Although qualitative factors should be given serious consideration, to the extent that Treasury can relate its decision making to a set of quantitative measures or indicators, its case can be more convincing. In addition, Treasury would add further credibility to the process by announcing ahead of time the indicators or measures it plans to use. Doing so would help to disarm potential criticism that it had selectively chosen indicators or measures to justify its decisions after the fact. While indicators of credit market conditions can suggest the extent to which, for example, credit costs and lending have returned to levels consistent with the stability of financial markets, measures of program effectiveness can offer insight into the potential benefits of additional TARP expenditures. The Office of Management and Budget (OMB) guidance for cost-benefit and regulatory analyses suggests, among other things, making assumptions explicit, characterizing the uncertainties involved, varying assumptions to determine the sensitivity of estimated outcomes (sensitivity analysis), and considering alternative approaches. 
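The sensitivity-analysis principle in that OMB guidance can be sketched with a toy net-present-value calculation. The cash flows below are purely hypothetical and are not estimates for any TARP program; the point is simply to make the discount-rate assumption explicit and vary it to see how the estimated outcome responds.

```python
# A minimal sketch of two cost-benefit principles: net present value and
# sensitivity analysis over the discount-rate assumption.

def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[t] occurs at the
    end of year t (cash_flows[0] is the upfront amount)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative program: $100 outlay today, $30 of benefits for 4 years.
flows = [-100, 30, 30, 30, 30]

# Sensitivity analysis: vary the discount rate and observe the NPV.
for rate in (0.03, 0.07, 0.10):
    print(f"rate {rate:.0%}: NPV = {npv(rate, flows):7.2f}")
```

Here the sign of the estimated net benefit flips between a 3 percent and a 10 percent discount rate, which is exactly the kind of result that makes documenting assumptions and running sensitivity checks worthwhile.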
If Treasury adopts a more formal cost-benefit framework, additional principles may also be applicable, including the use of net present value measures, an enumeration of benefits and costs, and the quantifying of benefits and costs whenever possible. In establishing this new program, Treasury faced a number of operational challenges. Not only did it have to implement the program in the midst of the greatest financial crisis since the Great Depression, but Treasury also had to adjust the program as events continued to unfold. As TARP’s focus shifted from making a number of capital purchases as investments in individual institutions to one geared toward restarting securitization markets and preserving homeownership, its management infrastructure had to change as well. While progress has been made in establishing TARP, much remains uncertain about the program, including whether it will pay for itself or prove to be a cost to the taxpayers. The reasons for the uncertainty include the following: Some programs remain in their infancy, while others are winding down. Therefore, determining the overall impact and costs of the programs will take time. TARP funds were invested in a variety of institutions, some of which were less risky than others. Some TARP programs may generate some returns for Treasury through interest and dividend payments and sales of warrants, while others—such as HAMP—are expenditure programs aimed at helping homeowners modify their mortgages. To help Treasury meet the challenges associated with implementing a program while concurrently establishing a comprehensive system of internal control, we have made 35 recommendations to Treasury aimed at improving the accountability, integrity, and transparency of TARP. As discussed in appendix I, Treasury has taken action to address most of them. 
And while much important progress has been made, a number of areas warrant ongoing attention as Treasury moves into the next phase of the program and contemplates a possible extension. First, we continue to believe that Treasury should work with the Chairmen of the FDIC and Federal Reserve, the Comptroller of the Currency, and the Acting Director of OTS to help ensure that the primary federal regulators use generally consistent criteria when considering repurchase decisions under TARP. While we understand that the final repurchase decision rests with a participant’s primary federal regulator, Treasury has a responsibility to ensure that these regulators are applying generally consistent criteria when reviewing TARP participants’ requests to repurchase their preferred shares. Second, Treasury has yet to finalize its implementation of an oversight program for asset managers covering CPP and the other capital-based programs, such as TIP and AGP. While Treasury now has asset managers to help manage its equity investments, it must also ensure that the federal government’s interests are protected and that the asset managers are performing as agreed. Third, Treasury has yet to implement our recommendation aimed at strengthening its efforts to help preserve homeownership and protect home values. As we previously recommended, Treasury should routinely update projections of the number of homeowners who can be helped under HAMP by reviewing key assumptions about the housing market and the behavior of mortgage holders, borrowers, and servicers. In addition, Treasury should develop a means of systematically assessing servicers’ capacity to make HAMP modifications and meet program requirements, so that Treasury can understand and address any risks associated with individual servicers’ ability to fulfill program requirements. 
Fourth, in the area of management infrastructure, OFS has continued to make progress in establishing a management infrastructure to administer TARP and oversee contractors and financial agents, but some challenges remain. Though OFS now has close to 200 staff, some key senior positions have not been permanently filled, such as the Chief Homeownership Preservation Officer and Chief Investment Officer. Bringing on board permanent staff for these key positions is important in helping Treasury effectively administer TARP activities and ensuring accountability for program outcomes. Treasury has strengthened its management and oversight of contractors as its reliance on them to support TARP has grown over the past year. OFS continues to make progress in developing a comprehensive system of internal control. As we complete our first audit of OFS’s annual financial statements for TARP, we will be able to provide a more definitive view of TARP’s internal controls over financial reporting. Fifth, in the area of communication, the program has evolved and continues to evolve. Treasury viewed its initial shift toward capital investments in the first weeks of the program as a more effective way to stabilize fragile financial markets. This shift in strategy, however, caught Congress and the public by surprise, created long-term challenges for the program, and, some would argue, ultimately reduced the program’s effectiveness. Concerns about this shift in structure highlight the communication challenges that continue to confront the program. As Treasury continues to improve its communication efforts and formalize its communication strategy, it must ensure that its ongoing efforts include keeping Congress adequately informed about TARP and its strategy, including its exit strategy for the various programs created under TARP.
Furthermore, formalizing the communication strategy and hiring a communications director will help ensure that communication is given sufficient attention on an ongoing basis. Finally, because TARP has been part of a broader effort that has included the Federal Reserve and FDIC, measuring the effectiveness of TARP’s programs has been an ongoing challenge. We developed a set of indicators that we used to track conditions of financial markets over the past year. These indicators show that a number of the anticipated effects of TARP have materialized. However, changes in these metrics are an imperfect way to measure TARP’s impact, as they may also be influenced by general market forces and cannot be exclusively linked to any one program or action. As a result, isolating and assessing TARP’s effect on the economy remains a challenging endeavor. Treasury has not fully established a comprehensive analytic framework for assessing the need for additional actions and evaluating program results in a transparent manner. In a recent report on the next phase of the federal government’s stabilization efforts, Treasury began to lay the foundation for an analytic framework for determining whether to extend TARP and also provided a number of financial indicators. Although TARP expires on December 31, 2009, the Secretary of the Treasury may extend the program to October 3, 2010, and Treasury will need to make a determination regarding such an extension. As Treasury considers whether to extend the program, the Secretary’s determination must be made in light of actions taken and planned by the Federal Reserve and FDIC and their winding down of certain programs and continuation of others that were also established to help stabilize markets. 
In addition, any continued action under TARP should be based in part on quantitative measures of program effectiveness, such as performance indicators, in order to weigh the benefits of TARP programs against the cost of using additional taxpayer resources. However, Treasury has not fully established a comprehensive analytic framework for assessing the future direction of the programs or determining whether additional actions are warranted. Moreover, as it finalizes the next phase of the program, Treasury will need to document its decision-making process and communicate its reasoning to Congress and the American people in order for its decisions to be viewed as credible. Finally, with the exception of AGP and AIFP, Treasury has not updated its projected use of funds for the TARP programs in light of current market conditions and program participation rates since March 2009. Based on changes in the markets, repurchases, participation levels in certain programs, and the implementation status of others, a thorough review of Treasury’s existing estimates of its projected use of TARP funds is warranted in light of the need to make a determination about whether to extend the program. Without more current and meaningful estimates about projected uses of the remaining funds, Treasury’s ability to plan for and effectively execute the next phase of the program will be limited. As it enters the next phase of the program, Treasury will likely face ongoing challenges. Building on our prior recommendations, we are making three new recommendations aimed at improving Treasury’s ability to effectively manage the next phase of the program. Specifically, we recommend that the Secretary of the Treasury (1) consider TARP in a broad market context and, as part of determining whether to extend TARP, work with the Chairmen of the Federal Reserve and FDIC to develop a coordinated framework and analytical basis for determining whether an extension is needed and, if it is, clearly spell out what the objectives and measures of any extended programs would be, along with anticipated costs and safeguards; (2) document the analytical decision-making process for determining whether an extension is needed and clearly communicate the results to Congress and the American people; and (3) update Treasury’s projected use of funds and, if the program is extended, continue to re-evaluate those estimates on a periodic basis.
We provided a draft of this report to Treasury for its review and comment. We also provided excerpts of the draft report to the Federal Reserve and FDIC for their review. Treasury provided written comments that we have reprinted in appendix XI. Treasury, the Federal Reserve, and FDIC also provided technical comments that have been incorporated as appropriate. In its comments, Treasury noted that “there is important work ahead” and that our recommendations were constructive as Treasury works to implement its financial stability programs and enhance OFS’s performance. In particular, the Assistant Secretary noted in response to our recommendations that the Secretary, in deciding whether to extend TARP authority beyond December 31, 2009, “will coordinate with appropriate officials to ensure that the determination is considered in a broad market context that takes account of relevant objectives, costs, and measures” and that Treasury will communicate the reasons for the decision when it is made. Concerning our recommendation that Treasury update its projected use of funds estimates and, if the program is extended, regularly re-evaluate them, Treasury commented that it regularly evaluates funding needs for TARP programs and announces revisions as decisions are made. However, with the exception of AGP and AIFP, Treasury had not publicly affirmed its projected use estimates since March 2009.
As it continues to evaluate these estimates, Treasury should disclose this information periodically, including reconciling estimates to actual results. For example, in March Treasury estimated that it had $135 billion remaining under TARP. This included $25 billion in estimated repurchases, yet as of September 25, 2009, actual repurchases totaled almost three times that amount. We are sending copies of this report to the Congressional Oversight Panel, Financial Stability Oversight Board, Special Inspector General for TARP, interested congressional committees and members, Treasury, the federal banking regulators, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov; Thomas J. McCool at (202) 512-2642 or mccoolt@gao.gov; or Orice Williams Brown at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XII. Develop and implement a well-defined and disciplined risk-assessment process, as such a process is essential to monitoring program status and identifying any risks of potential inadequate funding of announced programs. Review and renegotiate existing conflict-of-interest mitigation plans, as necessary, to enhance specificity and conformity with the new interim conflicts-of-interest regulation and take continued steps to manage and monitor conflicts of interest and enforce mitigation plans. Develop a communication strategy that includes building an understanding and support for the various components of the program. 
Specific actions could include hiring a communications officer, integrating communications into TARP operations, scheduling regular and ongoing contact with congressional committees and members, holding town hall meetings with the public across the country, establishing a council of advisors, and leveraging available technology. Require that American International Group, Inc. (AIG) seek concessions from stakeholders, such as management, employees, and counterparties, including seeking to renegotiate existing contracts, as appropriate, as it finalizes the agreement for additional assistance. Update OFS documentation of certain internal control procedures and the guidance available to the public on determining warrant exercise prices to be consistent with actual practices applied by OFS. Improve transparency pertaining to TARP program activities by reporting publicly the monies, such as dividends, paid to Treasury by TARP participants. Complete the review of, and as necessary renegotiate, the four existing vendor conflicts-of-interest mitigation plans to enhance specificity and conformity with the new interim conflicts-of-interest rule. Issue guidance requiring that key communications and decisions concerning potential or actual vendor-related conflicts of interest be documented. Ensure that the warrant valuation process maximizes benefits to taxpayers and consider publicly disclosing additional details regarding the warrant repurchase process, such as the initial price offered by the issuing entity and Treasury’s independent valuations, to demonstrate Treasury’s attempts to maximize the benefit received for the warrants on behalf of the taxpayer.
In consultation with the Chairmen of the Federal Deposit Insurance Corporation and the Board of Governors of the Federal Reserve System, the Comptroller of the Currency, and the Acting Director of the Office of Thrift Supervision, ensure that the primary federal regulators apply generally consistent criteria when considering repurchase decisions under TARP. Fully implement a communication strategy that ensures that all key congressional stakeholders are adequately informed and kept up to date about TARP. Expedite efforts to conduct usability testing to measure the quality of users’ experiences with the financial stability Web site and measure customer satisfaction with the site, using appropriate tools such as online surveys, focus groups, and e-mail feedback forms. Explore options for providing the public with more detailed information on the costs of TARP contracts and agreements, such as a dollar breakdown of obligations and/or expenses. Finally, to help improve the transparency of the Capital Assistance Program (CAP)—in particular the stress test results—we recommend that the Director of Supervision and Regulation of the Federal Reserve consider periodically disclosing to the public the aggregate performance of the 19 bank holding companies against the more adverse scenario forecast numbers for the duration of the 2-year forecast period and whether or not the scenario needs to be revised. At a minimum, the Federal Reserve should provide the aggregate performance data to OFS program staff for any of the 19 institutions participating in CAP or CPP. Consider methods of (1) monitoring whether borrowers with total household debt of more than 55 percent of their income who have been told that they must obtain HUD-approved housing counseling do so, and (2) assessing how this counseling affects the performance of modified loans to see if the requirement is having its intended effect of limiting redefaults.
Reevaluate the basis and design of the Home Price Decline Protection (HPDP) program to ensure that Home Affordable Modification Program (HAMP) funds are being used efficiently to maximize the number of borrowers who are helped under HAMP and to maximize the overall benefits of utilizing taxpayer dollars.

Institute a system to routinely review and update key assumptions and projections about the housing market and the behavior of mortgage holders, borrowers, and servicers that underlie Treasury’s projection of the number of borrowers whose loans are likely to be modified under HAMP, and revise the projection as necessary in order to assess the program’s effectiveness and structure.

Place a high priority on fully staffing vacant positions in the Homeownership Preservation Office (HPO)—including filling the position of Chief Homeownership Preservation Officer with a permanent placement—and evaluate HPO’s staffing levels and competencies to determine whether they are sufficient and appropriate to effectively fulfill its HAMP governance responsibilities.

Expeditiously finalize a comprehensive system of internal control over HAMP—including policies, procedures, and guidance for program activities—to ensure that the interests of both the government and taxpayers are protected and that the program objectives and requirements are being met once loan modifications and incentive payments begin.

Expeditiously develop a means of systematically assessing servicers’ capacity to meet program requirements during program admission so that Treasury can understand and address any risks associated with individual servicers’ abilities to fulfill program requirements, including those related to data reporting and collection.

The Capital Purchase Program (CPP) has been the primary initiative under the Troubled Asset Relief Program (TARP) for stabilizing the financial markets and banking system.
The Department of the Treasury (Treasury) created CPP in October 2008 to stabilize the financial system by providing capital to qualifying regulated financial institutions through the purchase of senior preferred shares and subordinated debentures. In return for its investment, Treasury was to receive dividend or interest payments and warrants. Treasury has stated that by building capital, CPP should help increase the flow of financing to U.S. businesses and consumers and support the U.S. economy.

At the time of the program’s announced establishment, nine major financial institutions—considered by federal banking regulators and Treasury to be essential to the operation of the financial system—agreed to participate in CPP. Together, these institutions held about 55 percent of U.S. banking assets.

Banking regulators recommend program participants based on examination ratings and performance, and Treasury makes the final selection; Treasury may accept or reject applications based on these factors and on mitigating circumstances, such as confirmed private investment.

On October 14, 2008, Treasury allocated $250 billion of the almost $700 billion in TARP funds to CPP but adjusted the allocation to $218 billion in March 2009. According to Treasury officials, this downward adjustment reflected the estimated funding needs of the program based on participation to date and the money Treasury expected to receive from participants repurchasing their preferred shares and subordinated debt. As of September 25, 2009, Treasury had disbursed more than $204.6 billion (see table 9) and had received about $70.7 billion from repurchases of preferred shares, leaving $84.1 billion available for future CPP funding, according to Treasury.

Through CPP, Treasury had provided more than $204 billion in capital to 685 institutions as of September 25, 2009. These purchases ranged from $301,000 to $25 billion per institution and represented about 94 percent of the $218 billion Treasury allocated for CPP.
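The figures above fit together arithmetically; the short calculation below (a sketch using only the rounded dollar amounts reported in this section) shows how the $84.1 billion available for future CPP funding follows from the allocation, disbursements, and repurchases.

```python
# Reconciling the CPP figures reported as of September 25, 2009
# (all amounts in billions of dollars, rounded as in the text).
allocated = 218.0    # CPP allocation after the March 2009 adjustment
disbursed = 204.6    # capital disbursed to participating institutions
repurchased = 70.7   # preferred shares repurchased by participants

# Funds available for future CPP purchases equal the allocation,
# minus disbursements, plus money returned through repurchases.
available = allocated - disbursed + repurchased
print(round(available, 1))  # 84.1, matching the reported figure

# Share of the allocation that had been invested.
print(round(disbursed / allocated * 100))  # 94 percent, as reported
```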
As of September 25, 2009, the types of institutions that received CPP capital varied in size and included 280 publicly held institutions, 337 privately held institutions, 1 mutual institution, 45 S-corporations, and 22 community development financial institutions. These purchases represented investments in state-chartered and national banks and bank holding companies located in the District of Columbia, Puerto Rico, and every state except Montana and Vermont. For a detailed listing of financial institutions that received CPP funds as of September 25, 2009, see GAO-10-24SP.

Although the last scheduled application deadline was May 14, 2009 (for mutual institutions), on May 13, 2009, the Secretary of the Treasury extended CPP to November 21, 2009, for all types of small banks. The program is starting to wind down, with fewer than 115 applications under consideration by regulators and fewer than 30 by Treasury as of September 18, 2009.

Treasury and the federal bank regulators continue to review applications for CPP. According to Treasury, as of September 25, 2009, it had received over 1,300 CPP applications (including approximately 10 under the small bank program) from the banking regulators, with fewer than 30 awaiting decision by OFS’s Investment Committee. For many applications in this category, Treasury is awaiting updated information from the regulators before taking the application to the Investment Committee for a vote. The bank regulators also reported that they were reviewing applications from fewer than 115 institutions, plus more than 50 under the small bank program, that had not yet been forwarded to Treasury. Qualified financial institutions generally have 30 calendar days after Treasury notifies them of preliminary approval for CPP funding to submit investment agreements and related documentation.
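The breakdown of participating institutions can be tallied against the 685-institution total reported above (a simple check using the counts from this section):

```python
# CPP participants by institution type as of September 25, 2009,
# using the counts reported in the text.
participants = {
    "publicly held institutions": 280,
    "privately held institutions": 337,
    "mutual institutions": 1,
    "S-corporations": 45,
    "community development financial institutions": 22,
}

total = sum(participants.values())
print(total)  # 685, matching the reported number of CPP institutions
```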
OFS officials stated that more than 430 financial institutions that received preliminary approval had withdrawn their CPP applications as of September 25, 2009. Institutions withdrew their applications for a variety of reasons, including the uncertainty surrounding future program requirements, the legal cost of closing transactions (for small institutions), the cost of warrants, and improving confidence in the banking system that allowed them to raise capital in the private markets.

In addition to disbursing funds, Treasury had received about $6.7 billion in dividend and interest payments from CPP participants as of September 25, 2009. CPP participants repurchased about $70.7 billion in preferred shares and paid another $2.9 billion to repurchase their warrants and preferred stock received through the exercise of warrants.

October 13, 2008: Consistent with conditions prescribed by the Emergency Economic Stabilization Act of 2008 (the act), Treasury notifies Congress that Treasury officials have determined that it would be more efficient to purchase preferred shares issued by certain financial institutions instead of purchasing mortgage-related assets.

October 14, 2008: Treasury announces that it will make direct capital investments in a broad array of qualifying financial institutions in exchange for preferred stock and warrants through CPP. Also, Treasury allocates $125 billion in purchases for the first nine financial institutions deemed systemically significant by federal bank regulators and Treasury. The nine large financial institutions agree to participate in CPP, in part, to signal the importance of the program for the system.

October 14, 2008: Treasury provides a description of CPP terms for investments in public financial institutions and issues the term sheet for public institutions. The deadline for public institutions to submit applications to their primary federal bank regulator is November 14, 2008.
October 20, 2008: Treasury publishes in the Federal Register an interim final rule to provide guidance on the executive compensation provisions applicable to CPP participants.

October 20, 2008: Treasury and the four federal banking agencies—the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, and the Federal Deposit Insurance Corporation—issue application guidelines, frequently asked questions, and standardized terms for making capital investments in public financial institutions. The deadline for public institutions to submit applications to their primary federal bank regulator is November 14, 2008.

October 28, 2008: Treasury settles the capital purchase transactions with eight of the nine institutions participating in the first round of CPP for a total of $115 billion.

November 17, 2008: Treasury issues standardized terms for making capital investments in privately held financial institutions. The deadline for privately held institutions to submit applications to their primary federal bank regulator for CPP funds is December 8, 2008.

January 14, 2009: Treasury issues standardized terms for making capital investments in S-corporations and answers to frequently asked questions for qualified financial institutions applying to CPP that are S-corporations. The term sheet provides for issuances of debt instead of the preferred stock issued by certain other CPP participants. The deadline for S-corporations to submit applications to their primary federal bank regulator is February 17, 2009.

February 17, 2009: Treasury publishes its first Monthly Lending and Intermediation Snapshot with information from the top 21 financial institutions participating in CPP.
February 17, 2009: The American Recovery and Reinvestment Act of 2009 (ARRA) amends the Emergency Economic Stabilization Act of 2008 by allowing financial institutions to repurchase or buy back their preferred shares and warrants from Treasury at any time with the approval of their primary federal regulator. Under the original terms of CPP, financial institutions were prohibited from repurchasing in the first 3 years unless they had completed a qualified equity offering.

February 26, 2009: Treasury publishes frequently asked questions addressing changes to CPP under ARRA.

March 31, 2009: The first financial institutions begin repaying their CPP capital investments (that is, repurchasing preferred shares) after receiving approval from their primary federal banking regulator. Five institutions pay $353 million to Treasury.

April 7, 2009: Treasury issues three term sheets for qualifying financial institutions applying to CPP that are mutual holding companies. The deadline for mutual institutions to submit applications to their primary federal bank regulator is May 7, 2009.

April 14, 2009: Treasury releases the term sheet for mutual banks applying to CPP that do not have holding companies. The deadline for these mutual institutions to submit applications to their primary federal bank regulator is May 14, 2009.

April 22, 2009: Treasury announces the selection of three firms (AllianceBernstein LP; FSI Group, LLC; and Piedmont Investment Advisors, LLC) to serve as asset managers for CPP and other programs. Treasury officials state that these managers would have a role in helping ensure that institutions are honoring dividend payment and stock repurchase requirements.

May 13, 2009: The Treasury Secretary announces in a speech that Treasury has taken additional actions under CPP to ensure that small community banks and holding companies (qualifying financial institutions with total assets less than $500 million) will have the capital they need to lend to creditworthy borrowers.
Small banks have until November 21, 2009, to apply to CPP under all term sheets.

May 14, 2009: Treasury notifies six insurance companies that they have received preliminary CPP funding approval. All six complied with the requirements to participate in CPP under existing term sheets, as these companies are organized as bank or thrift holding companies and filed their CPP applications by the deadline. As of September 25, 2009, two of the six had been funded.

June 1, 2009: Treasury releases its first CPP Monthly Lending Report, which includes information on outstanding balances on consumer loans, commercial loans, and total loans of all CPP participants.

June 9, 2009: Treasury announces that 10 of the largest U.S. financial institutions participating in CPP are eligible to complete the repurchase process and repay about $68 billion, having obtained regulatory consent to their repayment requests.

June 10, 2009: Treasury adopts an interim rule to implement the executive compensation and corporate governance provisions of the act, as amended by ARRA.

June 17, 2009: Ten of the largest U.S. bank holding companies—all but one of which participated in the Supervisory Capital Assessment Program exercise—repay about $68 billion in CPP capital investments to Treasury.

June 26, 2009: Treasury announces its policy with respect to warrant repurchases and the disposition of warrants received in connection with investments made under CPP. Also, frequently asked questions about warrants and CPP are published.

August 17, 2009: Treasury, in conjunction with bank regulators, publishes the first Quarterly Capital Purchase Report of regulatory financial data for CPP and non-CPP banks, thrifts, and bank holding companies. It focuses on three broad categories: on- and off-balance sheet items, performance ratios, and asset quality measures.
December 31, 2009: Deadline for Treasury to end the approval process for the additional funding of institutions under CPP and TARP unless the Treasury Secretary extends it.

In February 2009, the Department of the Treasury (Treasury) announced the Financial Stability Plan, which outlined a set of measures to help address the financial crisis and restore confidence in our financial and housing markets by restarting the flow of credit to consumers and businesses, strengthening financial institutions, and providing aid to homeowners and small businesses. The plan announced six key components, one of which was the Capital Assistance Program (CAP). CAP is designed to help ensure that qualified financial institutions have sufficient capital to withstand severe economic challenges. These institutions must meet eligibility requirements that will be substantially similar to those used for the Capital Purchase Program (CPP).

A key component of CAP is the Supervisory Capital Assessment Program (SCAP), in which the 19 largest U.S. bank holding companies (those with risk-weighted assets of $100 billion or more as of December 31, 2008) were required to participate. Specifically, federal bank regulators, led by the Board of Governors of the Federal Reserve System, conducted capital assessments or “stress tests” to determine whether the largest bank holding companies have enough capital to absorb losses and continue lending even if conditions were worse than expected between December 2008 and December 2010. Institutions deemed not to have sufficient capital were given 6 months to raise private capital or to access capital through CAP. Institutions with less than $100 billion in risk-weighted assets were not required to complete a stress test but are also eligible to obtain capital under CAP. In a process similar to the one used for CPP, institutions interested in CAP must submit applications to their primary federal banking regulators by November 9, 2009.
The regulators are to submit recommendations to Treasury regarding an applicant’s viability. In addition, as part of the application process, institutions must submit a plan showing how they intend to use this capital to support their lending activities and how the assistance will affect their lending compared with what would have been possible without it. Participating institutions under CAP will be required to submit to Treasury monthly reports—similar to those for CPP—on their lending activities. To date, Treasury has not allocated any funding to CAP.

The Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) conducted the stress test in the spring of 2009. More than 150 examiners, supervisors, accountants, economists, and other specialists from these banking agencies participated in the supervisory process. On May 7, 2009, the Federal Reserve announced the results of SCAP, which showed that 10 of the 19 bank holding companies needed to raise approximately $75 billion in additional capital. The 10 institutions that needed to raise additional capital filed their capital plans with the Federal Reserve by the June 8, 2009, deadline.

Federal banking regulators said that as of September 18, 2009, they had received six CAP applications from institutions wanting to participate in the program. Regulators noted that they had forwarded no applications to Treasury for funding consideration; therefore, no funds have been disbursed under CAP, according to Treasury officials.

February 10, 2009: CAP is announced as a key component of Treasury’s Financial Stability Plan.

February 25, 2009: Treasury announces the terms and conditions for CAP.

February 25, 2009: Federal bank regulatory agencies announce that they will start conducting forward-looking economic assessments of large U.S. bank holding companies. Also, the three economic assumptions underlying the stress test are published.
April 24, 2009: The Federal Reserve publishes details of the stress test process and methodologies employed by the federal banking supervisors in their forward-looking capital assessment of large U.S. bank holding companies.

May 7, 2009: The Federal Reserve and Treasury announce the results of the stress test under CAP. Also, the Treasury Secretary releases a statement announcing his hopes that the release of the results will lead to increased bank lending.

May 7, 2009: Treasury publishes frequently asked questions on CPP repayment and CAP.

June 1, 2009: The Federal Reserve announces the criteria it plans to use to evaluate applications to repurchase Treasury’s capital investments of the 19 institutions that underwent stress tests.

June 8, 2009: The deadline for bank holding companies that need to raise capital under the stress test to file their capital plans with the Federal Reserve.

November 9, 2009: The deadline for bank holding companies to implement their capital plans and to apply for and fund transactions.

The Targeted Investment Program (TIP) was designed to prevent a loss of confidence in financial institutions that could (1) result in significant market disruptions, (2) threaten the financial strength of similarly situated financial institutions, (3) impair broader financial markets, and (4) undermine the overall economy. The Department of the Treasury (Treasury) determines the forms, terms, and conditions of any investments made under this program and considers institutions for approval on a case-by-case basis, based on the threats posed by the potential destabilization of the institution, the risk caused by a loss of confidence in the institution, and the institution’s importance to the nation’s economy. Treasury may, on a case-by-case basis, use this program in coordination with a broader guarantee involving other agencies of the federal government.
Treasury requires any institution participating in this program to provide Treasury with warrants or alternative consideration, as necessary, to minimize the long-term costs and maximize the benefits to the taxpayers in accordance with the Emergency Economic Stabilization Act of 2008 (the act). Institutions that participate in TIP are subject to stringent requirements regarding executive compensation, lobbying expenses, and other corporate governance matters.

Only two institutions have participated in TIP: Bank of America and Citigroup. No new applicants have applied for TIP assistance since Bank of America received TIP assistance in early 2009. As of September 25, 2009, Bank of America and Citigroup had not repurchased their preferred shares or warrants. On July 30, 2009, Citigroup exchanged its fixed-rate cumulative perpetual preferred stock ($20 billion) for trust preferred securities. This exchange was part of Citigroup’s overall agreement with Treasury to exchange all of Treasury’s investments in Citigroup. It included the exchange of the Capital Purchase Program’s preferred shares of $25 billion for common stock in Citigroup, which essentially gave Treasury a common equity interest in the bank holding company.

December 31, 2008: Treasury enters into an agreement with Citigroup to purchase $20 billion in Fixed Rate Cumulative Perpetual Preferred Stock and a warrant to purchase common stock.

January 15, 2009: Treasury enters into an agreement with Bank of America Corporation to purchase $20 billion in preferred stock and a warrant to purchase common stock.

February 27, 2009: Citigroup announces plans to undertake a series of transactions involving the exchange of privately and publicly held preferred securities and trust securities for common stock and the exchange of up to $25 billion of Treasury CPP senior preferred shares.
May 7, 2009: Citigroup announces that it will expand its planned exchange of preferred securities and trust preferred securities held by public and private investors (other than Treasury) for common stock from $27.5 billion to $33 billion following the results of the stress test.

June 9, 2009: Treasury and Citigroup finalize their exchange agreement. Treasury agrees to convert up to $25 billion of the Treasury CPP senior preferred shares into interim securities and warrants, and to exchange its remaining preferred securities acquired in connection with assistance provided to Citigroup under the TIP and AGP programs for trust preferred securities, so that the institution can strengthen its capital structure by increasing tangible common equity.

July 23, 2009: Citigroup announces completion of the exchange of $12.5 billion of the Treasury CPP senior preferred shares and $12.5 billion of privately held convertible preferred securities for interim securities and warrants.

July 30, 2009: Treasury announces the final results of its offer to exchange publicly held preferred securities and trust securities, as well as the exchange by Treasury of its remaining $12.5 billion of Treasury CPP senior preferred shares outstanding for interim securities and warrants.

September 3, 2009: Citigroup announces the mandatory conversion of the interim securities issued in the exchange offers into common stock, and the cancellation of the warrants, in accordance with the terms of the exchange offers.

The Systemically Significant Failing Institutions (SSFI) program was established to provide stability and prevent disruptions to financial markets from the failure of institutions that are critical to the functioning of the U.S. financial system. The only participating institution was American International Group, Inc. (AIG).
Federal assistance to AIG is a joint effort by the Department of the Treasury (Treasury) and the Board of Governors of the Federal Reserve System and Federal Reserve Banks (Federal Reserve). We recently issued a report on the status of this assistance.

November 10, 2008: Treasury announces plans to use its SSFI program to purchase $40 billion in AIG preferred shares.

November 25, 2008: AIG enters into an agreement with Treasury under which Treasury agrees to purchase $40 billion of AIG’s fixed-rate cumulative perpetual preferred stock (Series D), and AIG issues to Treasury a warrant to purchase approximately 2 percent of the then-issued shares of AIG’s common stock.

April 17, 2009: AIG and Treasury enter into an agreement in which Treasury agrees to exchange its $40 billion of AIG’s Series D fixed-rate cumulative perpetual preferred stock for $41.6 billion of AIG’s Series E fixed-rate noncumulative perpetual preferred shares. Also, Treasury provides a $29.8 billion equity capital facility to AIG, which then issues to Treasury 300,000 shares of fixed-rate noncumulative perpetual preferred stock (Series F) and a warrant to purchase up to 3,000 shares of AIG’s common stock.

Under the Asset Guarantee Program (AGP), the Department of the Treasury (Treasury) provides federal government assurances for assets held by financial institutions that are deemed critical to the functioning of the U.S. financial system. The goal of AGP is to encourage investors to keep funds in the institutions. According to Treasury, placing guarantees, or assurances, against distressed or illiquid assets was viewed as another way to help stabilize the financial system. In implementing AGP, Treasury collects a premium, deliverable in a form deemed appropriate by the Treasury Secretary. As required by the statute, an actuarial analysis is used to ensure that the expected value of the premium is no less than the expected value of the losses to TARP from the guarantee. The U.S.
government would also provide a set of portfolio management guidelines to which the institution must adhere for the guaranteed portfolio. The set of insured assets was first designated by Citigroup and submitted to Treasury for approval. In accordance with section 102(a), assets to be guaranteed must have been originated before March 14, 2008. The program is meant only for systemically significant institutions and can be used in coordination with other programs.

Since early 2009, no new participants have applied to AGP. Bank of America withdrew from the program and in September 2009 negotiated a termination fee of $425 million that was paid to the Federal Reserve, the Federal Deposit Insurance Corporation, and Treasury. Thus, as of October 1, 2009, Citigroup is the only institution participating in AGP. On January 15, 2009, Citigroup issued preferred shares to Treasury and the Federal Deposit Insurance Corporation (FDIC), and a warrant to Treasury, in exchange for $301 billion of loss protection on a specified pool of Citigroup assets. As a result of principal repayments and charge-offs, the total asset pool has declined by approximately $35 billion, from the original $301 billion to approximately $266.4 billion. As part of a series of exchange offers undertaken by Citigroup in July 2009, the preferred shares issued to Treasury and FDIC for Citigroup’s participation in AGP were exchanged for new Citigroup trust preferred securities.

January 15, 2009: Citigroup enters into an agreement with Treasury, FDIC, and the FRBNY to guarantee losses arising on a $301 billion portfolio of Citigroup assets. As consideration for the loss-sharing agreement, Citigroup issues non-voting perpetual, cumulative preferred stock and a warrant to Treasury.
January 16, 2009: Bank of America Corporation enters into a term sheet (Term Sheet) with Treasury, FDIC, and the Board of Governors of the Federal Reserve System, in which the agencies agree in principle to guarantee losses arising on a $118 billion portfolio of Bank of America Corporation assets.

May 6, 2009: Bank of America Corporation notifies Treasury, FDIC, and the Federal Reserve of its plan to terminate negotiations with respect to the loss-sharing guarantee program.

July 30, 2009: Treasury exchanges all of its Fixed Rate Cumulative Perpetual Preferred Stock received as a premium under the Citigroup AGP agreement, “dollar for dollar,” for Trust Preferred Securities.

September 21, 2009: Bank of America announces that it has reached an agreement to pay a total of $425 million to the U.S. government in connection with the termination of the Term Sheet. The amount is equal to (a) the out-of-pocket expenses of the U.S. government in negotiating and entering into the Term Sheet and the negotiations concerning the definitive documentation, consisting of the expenses of its advisors, and (b) the fee that would have been payable under the Term Sheet, pro-rated for the period commencing on January 16, 2009, and ending on May 6, 2009, and adjusted for certain exclusions from the asset pool.

The Consumer and Business Lending Initiative includes the Department of the Treasury’s (Treasury) role in the Board of Governors of the Federal Reserve System’s and the Federal Reserve Bank of New York’s (Federal Reserve) Term Asset-Backed Securities Loan Facility (TALF) and Treasury’s plan to directly purchase securities backed by SBA-guaranteed small business loans. TALF—a Federal Reserve Bank of New York (FRBNY) credit facility supported by a backstop of $20 billion in Troubled Asset Relief Program (TARP) funds from Treasury—was announced by the Federal Reserve in November 2008.
The program aims to provide up to $200 billion in low-cost financing for investors to purchase a variety of consumer, small business, and commercial mortgage securitizations, with the goal of unfreezing securitization markets and increasing credit access for consumers and small businesses. Also under the initiative, Treasury anticipates purchasing securities backed by SBA 7(a) guaranteed loans and securities backed by SBA 504 loan guarantees to jump-start securitization and credit markets for small businesses, though it had not purchased any SBA-backed securities as of September 2009. Between March 2009 and September 2009, approximately $51.7 billion in TALF funds were requested. Table 17 provides a summary of monthly loan requests by asset class.

November 25, 2008: The Federal Reserve announces TALF, agreeing to lend up to $200 billion on a nonrecourse basis to holders of newly issued AAA-rated asset-backed securities (ABS) backed by credit cards, auto loans, student loans, and small business loans guaranteed by the SBA.

February 10, 2009: As part of the Financial Stability Plan, the Federal Reserve, FRBNY, and Treasury announce a willingness to consider expanding the size of TALF to $1 trillion over the life of the program.

March 3, 2009: The agencies launch the TALF program, and the first subscription occurs.

March 19, 2009: The agencies expand the range of eligible collateral to include asset-backed securities backed by mortgage servicing advances, business equipment loans or leases, floorplan loans, and leases of vehicle fleets. They also announce an intention to expand the list of eligible collateral to include previously issued securities—so-called “legacy securities”—as a complement to the Public-Private Investment Program (PPIP).

May 1, 2009: The Federal Reserve announces that two new asset classes are eligible for TALF funding: newly issued commercial mortgage-backed securities (CMBS) and ABS backed by insurance premium finance loans.
May 19, 2009: The Federal Reserve announces that certain high-quality legacy CMBS are eligible for TALF funding.

June 2, 2009: Aggregate loan requests for the program reach peak levels for consumer and business ABS.

July 16, 2009: The first legacy CMBS loan requests are submitted to TALF.

August 17, 2009: The Federal Reserve and Treasury jointly announce TALF’s extension for ABS and legacy CMBS collateral through March 2010, and through June 2010 for newly issued CMBS collateral. Also, the Federal Reserve states that it does not anticipate further additions to the eligible asset classes.

September 1, 2009: FRBNY approves four non-primary dealers to supplement the 18 primary dealers that interface between FRBNY and TALF borrowers.

The Department of the Treasury (Treasury)—with assistance from the Board of Governors of the Federal Reserve System and the Federal Reserve Bank of New York (Federal Reserve) and the Federal Deposit Insurance Corporation (FDIC)—designed the Public-Private Investment Program (PPIP) to lessen the impact of legacy assets on balance sheets and thereby improve consumer and business lending. PPIP has two distinct components: the Legacy Securities Program and the Legacy Loans Program. Under the Legacy Securities Program, commercial mortgage-backed securities and non-agency residential mortgage-backed securities will be purchased and managed by fund managers overseeing public-private investment funds. According to Treasury, in the course of prequalifying nine fund managers, Treasury vetted each of them for their investment strategy—primarily long-term buy and hold. Public-private investment funds will raise equity capital from private sector investors and receive matching equity funds and secured nonrecourse loans from Treasury. The Legacy Loans Program is designed to encourage the purchase of troubled and illiquid loans from FDIC-insured banks and thrifts.
FDIC will provide debt guarantees and Treasury will provide equity co-investment to private funds purchasing such loans through an auction. FDIC will oversee the new funds. FDIC held a pilot sale of receivership assets to test the funding mechanism contemplated by the Legacy Loans Program and continues to develop the program should it be needed in the future. PPIP has not disbursed any of its $100 billion TARP allocation as of September 25, 2009 (see table 18). For the Legacy Securities Program, nine fund managers have been prequalified, and as of October 5, 2009, Treasury officials stated that five fund managers had raised the requisite capital to receive matching funds and leverage from Treasury—though no investments have yet been made. FDIC recently tested a funding mechanism based on the Legacy Loans Program model, but agency officials are still assessing the outcome. According to FDIC officials, Treasury, in consultation with the Federal Reserve, must make a systemic risk determination before the Legacy Loans Program can be implemented.

March 23, 2009: Treasury and FDIC officials release the initial outlines of PPIP.

March 26, 2009: FDIC announces a comment period for the Legacy Loans Program.

April 24, 2009: Private asset managers submit applications to Treasury as part of the Legacy Securities Program selection process.

June 3, 2009: FDIC announces that the Legacy Loans Program is put on hold. FDIC officials state that financial institutions have been able to raise capital without selling troubled assets through the program.

July 8, 2009: Treasury preapproves nine fund managers to operate public-private investment funds for the Legacy Securities Program. Fund managers select ten small-, veteran-, minority-, and women-owned businesses as partners.

July 31, 2009: FDIC announces a model funding mechanism based on the Legacy Loans Program for a test sale of receivership assets.
September 25, 2009: Treasury officials state that two of the nine prequalified fund managers have raised at least the required minimum of $500 million each to begin investing in legacy securities, though no investments have yet been made.

October 5, 2009: Treasury officials state that an additional three of the nine prequalified fund managers have raised at least the required minimum of $500 million each, though no investments have yet been made.

The Department of the Treasury (Treasury) established the Automotive Industry Financing Program (AIFP) in December 2008 to help stabilize the U.S. automotive industry and avoid disruptions that would pose systemic risk to the nation’s economy. Under this program, Treasury has authorized a total of about $81.1 billion of Troubled Asset Relief Program (TARP) funds to help support automakers, automotive suppliers, consumers, and automobile finance companies as of September 25, 2009. A sizeable amount of this funding has gone to support the restructuring of Chrysler Group LLC (Chrysler) and General Motors Company (GM). AIFP consists of the following four components:

Funding to Support Automakers during Restructuring. Treasury has provided financial assistance to Chrysler and GM, in the form of loans and equity investments, to support their restructuring as they attempt to return to profitability.

Auto Supplier Support Program. Under this component of the program, Chrysler and GM received funding for the purpose of ensuring payment to suppliers. The program is designed to ensure that automakers receive the parts and components they need to manufacture vehicles and that suppliers have access to credit from lenders. The funding provided to Chrysler and GM under this program is in the form of a debt obligation.

Warranty Commitment Program.
The program was designed to mitigate consumer uncertainty about purchasing vehicles from the restructuring automakers by providing funding to guarantee the warranties on new vehicles purchased from participating manufacturers that were undergoing restructuring. The funds provided to the companies ultimately were not needed, because both companies were able to continue to honor consumer warranties.

Support for Automotive Finance Companies. Treasury purchased preferred membership stock with warrants and common equity interest in GMAC—which includes funding to help support retail and wholesale purchases for Chrysler.

To provide strategic guidance for AIFP and to advise the President and the Secretary of the Treasury on issues affecting the financial health of the industry, the White House established the Presidential Task Force on the Auto Industry. Treasury also hired staff with expertise in the financial industry to help oversee the assistance. Since December 2008, about $81.1 billion in AIFP funds have been authorized. Below are key developments in the program.

December 19, 2008: Treasury announces the creation of AIFP, using TARP funds to stabilize the U.S. automotive industry and avoid disruptions that would pose systemic risk to the nation’s economy.

December 29, 2008: Treasury purchases $5 billion in preferred stock with exercised warrants in GMAC LLC.

December 31, 2008: Treasury provides a $13.4 billion loan to GM to assist the company’s restructuring.

January 2, 2009: Treasury provides a $4 billion loan to Chrysler and a $1.5 billion loan to Chrysler Financial Services Americas LLC.

February 17, 2009: Chrysler and GM submit restructuring plans to Treasury as required by the terms of their loan agreements.

March 19, 2009: Treasury announces the Auto Supplier Support Program to ensure payments to automotive suppliers.

March 30, 2009: The White House announces that Chrysler’s and GM’s restructuring plans do not establish a credible path to viability or merit additional federal government investment.
The companies are given additional time to show greater progress.

March 30, 2009: Treasury announces the Warranty Commitment Program to guarantee the warranties on new vehicles purchased from participating auto manufacturers.

April 3, 2009: GM receives loans of $2.5 billion under the Auto Supplier Support Program.

April 7, 2009: Chrysler receives loans of $1 billion under the Auto Supplier Support Program.

April 29, 2009: Treasury commits to providing a loan of up to $500 million to Chrysler under the Warranty Commitment Program.

April 30, 2009: The White House announces it will provide an additional $8.5 billion to support Chrysler’s restructuring.

May 20, 2009: Treasury provides GM with an additional $4 billion for restructuring.

May 21, 2009: Treasury purchases $7.5 billion in preferred stock with exercised warrants in GMAC LLC.

May 27, 2009: Treasury provides a $360 million loan for the Warranty Commitment Program.

June 1, 2009: Treasury announces it will provide GM with up to an additional $30.1 billion to support the company’s bankruptcy proceeding and transition through restructuring.

After emerging from bankruptcy in June 2009, Chrysler has seen the appointment of several new senior officials, including its chief executive officer and chief financial officer, as well as a newly constituted board of directors, which Chrysler officials said met for the first time in July 2009. When we met with Chrysler officials in September 2009, they told us that the company was focused on developing a new business plan, with assistance from Fiat in the areas of product development, distribution, and sales and marketing. GM has continued to take steps to restructure, funded by the $30.1 billion in financing that Treasury provided in June. On July 5, 2009, a bankruptcy judge approved GM’s motion to sell its assets to a new company in which the federal government would have a majority share.
On July 10, 2009, the asset sale was finalized, and Treasury executed a loan agreement with the restructured GM, under which the company is required to repay Treasury $7.1 billion. The remainder of the funding that Treasury provided to GM was converted to a 60.8 percent ownership stake in the new company and $2.1 billion in preferred stock. Other stakeholders also received equity in GM. In consideration of their ownership stakes, GM’s shareholders—including Treasury—received the right to appoint directors to GM’s board. The new members of GM’s board have been appointed, and the board held its first in-person meeting in August.

The Department of the Treasury’s (Treasury) Office of Financial Stability (OFS) developed the Home Affordable Modification Program (HAMP) to address two of the stated purposes of the Emergency Economic Stabilization Act (the act)—preserving homeownership and protecting home values. According to Treasury, HAMP’s primary goal is to help three to four million borrowers who are struggling to make their mortgage payments by reducing their monthly payments to an affordable level (loan modification), thereby preventing unnecessary foreclosures and helping to stabilize home prices in the neighborhoods hit hardest by foreclosures. To implement the program, Treasury has delegated significant responsibilities to its financial agents, Fannie Mae and Freddie Mac, which act as the program administrator and compliance agent for HAMP, respectively. Under HAMP, Treasury will use Troubled Asset Relief Program (TARP) funds to share the cost of reducing monthly payments on first-lien mortgages with mortgage holders and investors, and to provide financial incentives to servicers, borrowers, and mortgage holders and investors for loans modified under the program.
Under HAMP, Treasury also plans to (1) provide additional incentives to mortgage holders/investors to modify, rather than foreclose on, loans in areas where home price declines have been most severe; (2) provide incentives to modify or pay off second-lien loans of borrowers whose first mortgages were modified under HAMP; and (3) provide incentives to servicers and borrowers to pursue alternatives to foreclosure (short sales and deeds-in-lieu) for homeowners who do not qualify for a HAMP modification or cannot maintain payments during the trial period or modification. As of September 25, 2009, 63 servicers had signed up to participate in the program, covering approximately 85 percent of U.S. mortgage loans. Treasury has announced that up to $50 billion of funds from TARP may be used for HAMP. Most of these funds are directed to the modification of first-lien mortgages held by borrowers in danger of foreclosure (the first-lien modification program). To monitor HAMP’s funding needs, Treasury has estimated the funding requirements, or caps, for each participating servicer based on the number of modifications the servicer is expected to perform over the life of the program. The caps include the maximum payable incentives associated with modifying borrowers’ first-lien mortgages, including incentive payments to borrowers, servicers, and mortgage holders and investors. According to Treasury, cap allocations are initially set based on publicly available information and are updated using more complete data on the servicers’ mortgage portfolios. Treasury has been reassessing each servicer’s cap on a quarterly basis, using data on the actual number of modifications made by the servicer under the program. As of September 25, 2009, Treasury had allocated a total of $22.3 billion through the caps on its 63 participating servicers, of which about $946,000 had been paid out in servicer and investor incentive payments.
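Treasury has not published an exact cap formula; as a rough illustration of the arithmetic described above, a servicer's cap can be thought of as expected modification volume multiplied by the maximum incentives payable per modified loan. All function names and dollar figures below are hypothetical, not actual HAMP parameters.

```python
def estimate_servicer_cap(expected_modifications,
                          servicer_incentive_per_mod,
                          borrower_incentive_per_mod,
                          investor_incentive_per_mod,
                          avg_cost_share_per_mod):
    # Maximum payable amount per modified loan: the sum of the incentive
    # payments and Treasury's assumed share of the payment reduction.
    per_loan_maximum = (servicer_incentive_per_mod
                        + borrower_incentive_per_mod
                        + investor_incentive_per_mod
                        + avg_cost_share_per_mod)
    # The cap scales that maximum by the number of modifications the
    # servicer is expected to perform over the life of the program.
    return expected_modifications * per_loan_maximum

# Hypothetical inputs for one servicer (illustration only):
cap = estimate_servicer_cap(50_000, 1_000, 1_000, 1_500, 4_000)
print(f"Estimated funding cap: ${cap:,}")  # Estimated funding cap: $375,000,000
```

Under this reading, quarterly reassessment simply reruns the calculation with updated expected-modification counts drawn from the servicer's actual volume to date.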
Most of Treasury’s efforts to develop HAMP have been directed to the first-lien modification program. Treasury has designed the first-lien program to target borrowers in default (defined as 60 days or more delinquent on their mortgage payments) or in imminent danger of default (borrowers that are current on their mortgages but facing hardships such as job loss or interest rate increases on their adjustable-rate mortgages). Treasury has established several eligibility requirements for borrower participation in HAMP, including that the property be an owner-occupied, single-family residence (one to four units) that is the borrower’s primary residence and that the mortgage loan amount not exceed specified dollar thresholds. Additionally, borrowers with non-GSE loans cannot participate in HAMP unless their servicers have signed participation agreements with Fannie Mae—Treasury’s administrator for the program. According to Treasury, as of September 25, 2009, the following HAMP progress had been made related to loans not owned or guaranteed by Fannie Mae and Freddie Mac:

63 servicers had signed participation agreements for the first-lien modification program;

more than 1.3 million solicitation letters for HAMP loan modifications had been sent to borrowers;

more than 328,000 HAMP trial modification offers had been extended to borrowers;

more than 209,000 HAMP trial modifications had started; and

1,080 borrowers had successfully completed the trial period and received HAMP modifications.

Of the three other subprograms that were announced as part of HAMP in the March 4, 2009, program guidelines, Treasury has recently begun to implement the Home Price Decline Prevention (HPDP) program but has not implemented the other two. Treasury issued official guidance on HPDP in late July 2009 and began implementing the program on September 1, 2009.
As of that date, the net present value model used to calculate borrowers’ eligibility for HAMP took into account the additional incentive payments available through HPDP to investors in areas of the country where price declines had been large. However, the extent to which HPDP will increase the number of modifications made remains unclear. In our July 2009 report, we recommended that Treasury re-evaluate the basis and design of the HPDP program to ensure that HAMP funds are being used effectively. Treasury released detailed guidelines on the second-lien modification component of HAMP on August 13, 2009. However, these guidelines require that servicers sign participation agreements with Fannie Mae on or before December 31, 2009, to be eligible for the program. As of September 25, 2009, no servicers had signed such participation agreements. Finally, Treasury had not released any detailed guidelines on the foreclosure alternatives component of HAMP.

We previously reported that although the central program—the first-lien modification program—had been implemented, many of its administrative processes and its internal control policies and procedures were not yet finalized. Fannie Mae, as HAMP administrator, has mapped operational processes and identified points of control for multiple aspects of HAMP, such as servicer registration and servicer set-up in HAMP’s electronic system, servicer data reporting, trial and official modifications, and the steps of the payment process administered by Fannie Mae. Fannie Mae has also drafted procedures to carry out many of these processes and internal controls. According to Fannie Mae, processes, controls, and procedures have not been finalized for a planned servicer call center or for the budgeting and billing of Fannie Mae’s work under the HAMP financial agent agreement. Processes and controls designed by Fannie Mae to date were to be tested by September 30, 2009, according to Fannie Mae officials.
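The net present value test described earlier—under which HPDP incentive payments are added to the modification side of the calculation—can be illustrated with a deliberately simplified sketch. The actual HAMP model uses detailed cash-flow projections, redefault probabilities, and home-price forecasts; the function and every dollar figure below are invented for illustration only.

```python
def npv_favors_modification(pv_modified, pv_unmodified, hpdp_incentive=0.0):
    # A modification passes the test when the expected value of the
    # modified loan, plus any HPDP incentive payment, exceeds the
    # expected value of leaving the loan unmodified.
    return (pv_modified + hpdp_incentive) > pv_unmodified

# Hypothetical borrower: without an HPDP incentive the modification
# fails the NPV test ...
print(npv_favors_modification(180_000, 185_000))          # False
# ... but an assumed incentive in a hard-hit area tips the result.
print(npv_favors_modification(180_000, 185_000, 7_500))   # True
```

This also shows why HPDP's effect on modification volume is hard to predict: the incentive only changes outcomes for loans whose NPV result was already close to the threshold.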
In addition, Freddie Mac, as HAMP compliance agent, has mapped out the overall compliance program, working with OFS and PricewaterhouseCoopers LLP, and is developing policies and procedures to carry it out. In a related effort, Freddie Mac has described to us its methods for testing compliance for 233 program requirements. In addition, according to Treasury, Treasury and its financial agents have formalized a charter for a HAMP Compliance Committee. Treasury also noted that the Committee is finalizing a policy for addressing remedies for identified instances of noncompliance among servicers. However, while Treasury has drafted performance measures to evaluate HAMP, these measures have not been fully developed and have yet to be implemented. On August 4, 2009, Treasury released its first report on the performance of participating servicers under HAMP. The Monthly Servicer Performance Report showed significant variations among the servicers in the percentage of delinquent borrowers in their servicing portfolios that had been offered or received trial modifications. For example, for servicers that had signed up to participate in the program before May 31, 2009, the percentage of delinquent borrowers who had been offered HAMP trial modifications ranged from 0 percent to 45 percent, and the percentage of their delinquent borrowers who had started HAMP trial modifications ranged from 0 percent to 25 percent. Such variations have highlighted potential issues with servicers’ capacity to implement HAMP. In our July 2009 report, we expressed concern that Treasury was not fully vetting servicers signing HAMP loan modification participation agreements and recommended that Treasury develop a means of systematically assessing servicers’ capacity to meet program requirements during program admission. 
As compliance agent for HAMP, Freddie Mac has developed several types of reviews, intended to be conducted after a servicer has signed up to participate in the program, that touch on issues of servicer capacity. Freddie Mac is currently working with Treasury to refine these procedures, which include:

“full” on-site reviews, which are 1-week reviews that include a detailed management interview about all HAMP processes, walk-throughs of each of these processes, and file reviews of a sample of the servicer’s loan files;

walk-through reviews, which are 1- to 2-day reviews that can occur sooner than a “full” review and that go into less detail on loss mitigation, collections, and investor accounting processes; and

“second look” reviews, which are off-site loan file reviews that look for servicer errors in evaluating borrowers for HAMP.

February 18, 2009: Treasury announced HAMP, a national loan modification program intended to offer assistance to three to four million homeowners by reducing monthly payments to sustainable levels.

March 4, 2009: Treasury issued official guidance for loan modifications under HAMP and announced that servicers could begin conducting modifications that conform to the guidelines. These initial guidelines largely focused on the first-lien modification subprogram. Treasury also issued updated guidance on completing first-lien modifications on April 6, 2009.

March 19, 2009: Treasury launched its Making Home Affordable (MHA) Web site for borrowers to provide information on the program, including eligibility requirements and housing counseling options, among other things.

April 13, 2009: The first six servicers signed participation agreements under HAMP.

April 15, 2009: Treasury launched an administrative Web site for mortgage servicers to provide them with the information and tools needed to participate in HAMP.
July 28, 2009: Treasury and Department of Housing and Urban Development (HUD) officials held a meeting with all participating servicers at which they asked the servicers to ramp up their efforts to increase trial modifications, with a goal of starting 500,000 trial modifications by November 1, 2009.

July 31, 2009: Treasury issued official guidance on the HPDP component of HAMP.

August 4, 2009: Treasury released its first monthly Servicer Performance Report detailing servicers’ progress to date with HAMP. According to Treasury, the purpose of the report is to document the number of struggling homeowners already helped under the program, provide information on servicer performance, and increase the program’s transparency.

August 13, 2009: Treasury announced details of the second-lien modification component of HAMP, which allows second liens whose corresponding first liens have been modified under HAMP to be modified or extinguished. While Treasury estimates that between one and one-and-a-half million borrowers may be eligible to receive a second-lien modification, servicer participation in the second-lien modification subprogram is unclear, as servicers who had previously signed participation agreements must sign amended agreements in order to participate. As of September 25, 2009, no servicers had signed participation agreements for the second-lien program.

August 27, 2009: Treasury conducted its first disbursement, of $276,000 to one servicer, for payment of servicer incentives related to 276 non-GSE loans modified under HAMP. No payments were disbursed for monthly mortgage payment reductions or associated incentive payments to investors or borrowers, and no payments were made to other servicers.

September 25, 2009: Treasury conducted its second disbursement, of about $670,000 to three servicers, for payment of servicer incentives and investor subsidies.

In addition to the contacts named above, A.
Nicole Clowers, Gary Engel, Mathew Scirè, and William T. Woods (lead Directors); Cheryl Clark, Lawrance Evans Jr., Dan Garcia-Diaz, Carolyn Kirby, Barbara Keller, Kay Kuhlman, Harry Medina, Raymond Sendejas, Karen Tremba (lead Assistant Directors); Judith Ambrose, Timothy Carr, Tania Calhoun, Emily Chalmers, Brent Corby, Rachel DeMarcus, M’Baye Diagne, Nancy Eibeck, Sarah Farkas, Alice Feldesman, Heather Halliwell, Michael Hoffman, Joe Hunter, Tyrone Hutchins, John Karikari, Amber Keyser, Steven Koons, Robert Lee, Sarah McGrath, Joseph O’Neill, Rebecca Riklin, Susan Michal-Smith, Maria Soriano, Cynthia Taylor, Angela D. Thomas, Julie Trinder, Marc Molino, Winnie Tsen, Jim Vitarello, Yun Wang, and Heather Whitehead have made significant contributions to this report.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009.

Troubled Asset Relief Program: Status of Government Assistance Provided to AIG. GAO-09-975. Washington, D.C.: September 21, 2009.

Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009.

Troubled Asset Relief Program: Status of Participants’ Dividend Payments and Repurchases of Preferred Stock and Warrants. GAO-09-889T. Washington, D.C.: July 9, 2009.

Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through May 29, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of June 1, 2009. GAO-09-707SP. Washington, D.C.: June 17, 2009.

Auto Industry: Summary of Government Efforts and Automakers’ Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009.
Small Business Administration’s Implementation of Administrative Provisions in the American Recovery and Reinvestment Act. GAO-09-507R. Washington, D.C.: April 16, 2009.

Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for the Period October 28, 2008 through March 20, 2009 and Information on Financial Agency Agreements, Contracts, and Blanket Purchase Agreements Awarded as of March 13, 2009. GAO-09-522SP. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-539T. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-484T. Washington, D.C.: March 19, 2009.

Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-474T. Washington, D.C.: March 11, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009.

High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009.

Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008.

Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-247T.
Washington, D.C.: December 5, 2008.

Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008.

Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008.

Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.
GAO's eighth report assesses the Troubled Asset Relief Program's (TARP) impact over the last year. Specifically, it addresses (1) the evolution of TARP's strategy and the status of TARP programs as of September 25, 2009; (2) the Department of the Treasury's (Treasury) progress in creating an effective management structure, including hiring for the Office of Financial Stability (OFS), overseeing contractors, and establishing a comprehensive system of internal control; and (3) indicators of TARP's performance that could help Treasury decide whether to extend the program. GAO reviewed relevant documentation and met with officials from OFS, contractors, and financial regulators. Over the last year, TARP in general, and the Capital Purchase Program (CPP) in particular, along with other efforts by the Board of Governors of the Federal Reserve System (Federal Reserve) and Federal Deposit Insurance Corporation (FDIC), have made important contributions to helping stabilize credit markets. TARP is still a work in progress, and many uncertainties and challenges remain. For example, while some CPP participants had repurchased over $70 billion in preferred shares and warrants as of September 25, 2009, whether Treasury will fully recoup TARP assistance to the automobile industry and American International Group Inc., among others, remains uncertain. Moreover, other programs, such as the Public-Private Investment Program and the Home Affordable Modification Program (HAMP) are still in varying stages of implementation. As of September 25, 2009, Treasury had disbursed almost $364 billion in TARP funds; however, Treasury has yet to update its projected use of funds for most programs in light of current market conditions, program participation rates, and repurchases. Without more current estimates about expected uses of the remaining funds, Treasury's ability to plan for and effectively execute the next steps of the program will be limited. 
Amid concerns about the direction and transparency of TARP, the new administration has attempted to provide a more strategic direction for using the remaining funds. TARP has moved from investment-based initiatives to programs aimed at stabilizing the securitization markets and preserving homeownership, and most recently at providing assistance to community banks and small businesses. While some programs, such as the Term Asset-Backed Securities Loan Facility, appear to have generated market interest, others, such as HAMP, face ongoing implementation and operational challenges. Related to transparency, Treasury has taken a number of steps to improve communication with the public and Congress, including launching a Web site and preparing to hire a communications director for OFS to support these efforts. Treasury has also made significant progress in establishing and staffing OFS; however, it must continue to focus on filling critical leadership positions, including the Chief Homeownership Preservation Officer and Chief Investment Officer, with permanent staff. Treasury's network of contractors and financial agents that support TARP administration and operations has grown from 11 to 52. While Treasury has an appropriate infrastructure in place, it must remain vigilant in managing and monitoring conflicts of interest that may arise with the use of private sector sources.
Stealth-related commodities and technology are sensitive for many reasons. When incorporated into advanced weapon systems, stealth technology greatly improves the effectiveness of forces. The United States is the world leader in stealth technology, and this lead has given U.S. forces a clear battlefield advantage, as demonstrated in Operation Desert Storm. Stealth-related commodities are sensitive from an export control perspective because some of the materials and processes involved have civil applications, which makes it difficult to control the commodities’ dissemination and retain U.S. leadership in stealth technology. Stealth designs incorporate materials, shapes, and structures in a functional system that can meet mission requirements. Stealth techniques fall into two general groups. First, a material may deflect an incoming radar signal into neutral space, thereby preventing the source radar from picking up the radar reflection and “seeing” the object. Second, a material may simply absorb an incoming radar signal, not allowing the signal to reflect back to its source. In addition to materials, measurement gear used to test radar-absorbing properties, and technologies and software related to manufacturing and application techniques, are also considered sensitive from an export control perspective.

DOD’s policy on the commercial export of stealth technology recognizes its military significance and sensitivity while acknowledging that some items with stealth properties have been developed for commercial purposes, are widely available, and are not militarily significant. DOD’s policy states that commercial marketing of unclassified, non-DOD-funded stealth technology may be permitted on a case-by-case basis after review by appropriate offices and agencies and approval of the required export license. The U.S.
export control system is divided into two regimes, one for munitions items under the Arms Export Control Act (AECA) and one for dual-use items (items with both civil and military uses) under the Export Administration Act (EAA). The Department of State controls munitions items through its Office of Defense Trade Controls and establishes the USML, with input from DOD. The Department of Commerce, through its Bureau of Export Administration, controls dual-use commodities (e.g., machine tools) and establishes the CCL. In general, export controls under the EAA are less restrictive than those under the AECA. Exporters must determine whether the item they wish to export is on the CCL or the USML and then apply to the appropriate agency for an export license. When there is confusion over which agency controls a commodity, an exporter may ask State to make a commodity jurisdiction determination. State, in consultation with the exporter, DOD, Commerce, and other agencies, reviews the characteristics of the commodity and determines whether the item is controlled under the USML or the CCL. Since 1992, the majority of commodity jurisdiction determinations have ruled that the commodity belonged on the CCL rather than the USML. On the USML, stealth-related commodities are primarily controlled in two general categories. Stealth-related items are controlled under several other categories when the technology is incorporated as part of a system or end item. For example, fighter aircraft that incorporate stealth features are controlled under the category for aircraft. In general, the USML relies on functional descriptions of the items being controlled. Table 1 shows that the USML controls stealth-related exports as parts of several control categories. The CCL, as shown in table 2, controls stealth-related exports under seven export commodity control numbers.
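The jurisdiction logic described above can be sketched as a simple decision flow. This is an illustrative simplification only: real commodity jurisdiction (CJ) determinations are case-by-case interagency reviews, not a lookup, and the function and its return strings are invented for this sketch.

```python
def licensing_path(on_usml, on_ccl, cj_ruling=None):
    """Which regime licenses an export, per the two-track system above.

    cj_ruling is State's commodity jurisdiction determination
    ("USML" or "CCL"), if the exporter has requested one.
    """
    if on_usml and not on_ccl:
        return "State licenses under the AECA (USML)"
    if on_ccl and not on_usml:
        return "Commerce licenses under the EAA (CCL)"
    # Grey area: the exporter may ask State for a CJ determination,
    # made in consultation with the exporter, DOD, and Commerce.
    if cj_ruling == "USML":
        return "State licenses under the AECA (USML)"
    if cj_ruling == "CCL":
        return "Commerce licenses under the EAA (CCL)"
    return "Unclear: request a CJ determination from State"
```

The stealth-coating episode discussed later corresponds to the grey-area branch: the item was initially licensed as if it were on the CCL, and a subsequent CJ review moved it to the USML.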
In general, the CCL uses more detailed language (often with technical performance criteria) than the USML to describe what is controlled. Because some export control classification numbers cover a broad array of items, some of the exports classified under these numbers are not related to stealth. State and DOD officials acknowledge that the descriptions in the CCL and the USML covering stealth-related items and technology do not clearly define which stealth-related exports are controlled by which agency. State and DOD officials also agree that the lines of jurisdiction should be clarified. DOD officials noted that they are only concerned about militarily significant items or items in the grey area that are potentially militarily significant. A Commerce official noted that overlapping jurisdiction is confusing for exporters and said commodities that fall in the grey area between Commerce and State should be placed on the USML. The Commerce official said that putting grey area cases on the USML would help exporters avoid the (1) confusion of determining where to go for a license and (2) possibility of having their exports seized by a Customs agent who believes the items belong on the USML. The Commerce official cautioned, however, that in moving items to the USML, consideration should be given to whether comparable items are readily available from other countries. State noted in its comments to this report that, under the AECA, foreign availability is not a factor in determining whether an item warrants the national security and foreign policy controls of the USML. Commerce noted in its comments that it does not agree that there is overlapping or unclear jurisdiction over stealth-related commodities and technology between the CCL and the USML. We disagree. As noted in the report, officials from both DOD and State told us that the lines of jurisdiction are unclear and should be clarified. 
Further, as discussed below, this unclear jurisdiction has led to problems in Commerce’s licensing of sensitive stealth-related commodities. Unclear jurisdiction over stealth-related commodities increases the likelihood that militarily sensitive stealth technology will be exported under the less restrictive Commerce export control system. In 1994, Commerce approved two export applications for a radar-absorbing coating later determined to belong on the USML. Although DOD and State have not verified the exact capabilities and military sensitivity of this product, these export licenses illustrate the problems with unclear jurisdiction and authority over stealth-related exports. Commerce approved two applications in 1994 to export a high-performance, radar-absorbing coating. The details of one of the applications were reported in a major trade publication. As reported, the export application described the high-performance claims for the product and indicated that 200 gallons of the material would be used for a cruise missile project headed by a German company. Commerce also granted a license to export the same commodity to another country for use on a commercial satellite. Commerce approved both of these applications in fewer than 10 days and, in accordance with referral procedures, did not refer these applications to either DOD or State. The article reporting Commerce’s approval of this material for export noted that the radar frequencies this stealth coating seeks to defend against include those employed by the Patriot antimissile system. In response to that report and subsequent concerns raised by DOD, State performed a commodity jurisdiction review to determine whether the stealth coatings actually belonged under the USML. At this time, the coatings had not yet been shipped overseas. On the basis of State’s review, which included consultation with both DOD and Commerce, State ruled that the radar-absorbing coating was under the jurisdiction of the USML. 
After State’s ruling, Commerce suspended the export licenses it had approved and the exporter submitted new export applications to State. After State and DOD were unable to obtain adequate information on the exact performance characteristics of the product from the exporter, State decided not to approve the export applications. Commerce’s export control authority under the EAA is more limited than State’s authority under the AECA. In fact, a high-ranking Commerce official said Commerce probably could not have denied the two applications to export the radar-absorbing coatings. The EAA regulates dual-use exports under national security controls and foreign policy controls. As shown in table 3, the seven stealth-related commodities on the CCL are controlled for national security and missile technology reasons (considered a foreign policy control). National security controls are designed to prevent exports from reaching the former East bloc and Communist nations. Exports that are controlled on the CCL for national security reasons and that are going to noncontrolled countries can only be denied by Commerce if there is evidence the exports will be diverted to a controlled country. Foreign policy controls under the EAA are designed to control exports for specific reasons (e.g., missile technology concerns) and if the exports are going to specific countries (e.g., countries considered to be missile proliferators). In essence, these controls are targeted to specific items, end uses, and/or countries. Consequently, items controlled for missile technology reasons (e.g., most stealth-related commodities), as a practical matter, are not restricted if they are destined for other end uses (e.g., ship applications and aircraft) or for a country not considered to be a missile proliferation threat (e.g., any member of the Missile Technology Control Regime). 
In contrast, under the AECA, commodities on the USML are controlled to all destinations, and authority to regulate exports is not limited by end use or country. The AECA grants State broad authority to deny export applications based on a determination that the license is against national interests. Commerce referral procedures for the seven stealth-related categories do not require most applications to be sent to either DOD or State for review. Commerce referral procedures depend on the reason the export is controlled and the ultimate destination. As shown in table 4, between fiscal years 1991 and 1994, most applications under the seven export control classification numbers related to stealth were not referred to either DOD or State. During this time, only 15 of 166 applications processed by Commerce were sent to either DOD or State for review. Table 4 also shows that, because some export control classification numbers cover a broad array of items, some of the export applications classified under these numbers are not related to stealth. Table 5 lists examples of applications that were referred by Commerce, and table 6 lists applications that were not referred. In general, commodities controlled on the CCL for national security reasons are referred to DOD only if they are going to a controlled country. These referral procedures are based, in part, on agreements between Commerce and DOD. National security controls are designed to prevent exports from going to controlled countries. Consequently, exports of commodities that are controlled for national security reasons and that are going to other destinations are generally not restricted, and Commerce does not refer such applications to DOD. Exports of commodities controlled for missile technology reasons are referred by Commerce only if they meet two key tests. First, the description of the export must fit the definition of missile technology items as described in the Annex to the Missile Technology Control Regime. 
Some commodities that fall under export commodity control numbers controlled for missile technology may not fit the detailed description of missile technology found in the Annex. Second, the export must be going to a country considered to be of concern for missile technology proliferation reasons. Export applications that Commerce refers based on missile technology concerns are sent to the Missile Technology Export Control group (MTEC). The MTEC is chaired by State with representatives from DOD, Commerce, the U.S. intelligence agencies, and others at the invitation of the Chair and the concurrence of the group. DOD, by being a member of MTEC, has access to missile technology applications that Commerce refers to the group. In a recent report, we noted concerns about Commerce’s referral practices for missile-related exports. Only a fraction of the export applications going to China under export control classification numbers controlled for missile technology reasons were sent by Commerce to other agencies for review. According to the current Chair of the MTEC, Commerce does not refer all relevant missile technology applications to the MTEC for review. Commerce officials stated that they refer all relevant cases and noted that the MTEC Chair may be unfamiliar with Commerce referral procedures. State noted in its comments that it would be preferable for the MTEC to review all export licenses for Annex items. 
In light of the more stringent controls under the AECA and the sensitivity of stealth technology, we recommend that (1) the Secretary of State, with the concurrence of the Secretary of Defense and in consultation with the Secretary of Commerce, clarify the licensing jurisdiction between the USML and the CCL for all stealth-related commodities and technologies, with a view toward ensuring adequate controls under the AECA for all sensitive stealth-related items, and (2) the Secretary of Commerce revise current licensing referral procedures on all stealth-related items that remain on the CCL to ensure that Commerce refers all export applications for stealth-related commodities and technology to DOD and State for review, unless the Secretaries of Defense and State determine their review of these items is not necessary. We obtained written comments from the Departments of State and Commerce (see apps. I and II). State generally agreed with the analyses and recommendations in the report. State indicated that our first recommendation should be revised to properly reflect State’s leading role in determining which items are subject to the AECA (i.e., belong on the USML). State also noted that our second recommendation should be amended to include State in determining whether some stealth-related export licenses need to be referred to State for review for foreign policy reasons. We clarified both recommendations to address State’s concerns. Commerce disagreed with our first recommendation, stating that the lines of jurisdiction over exports of stealth-related commodities are already clear. As demonstrated in the report, we believe the lines of jurisdiction are unclear. In addition, State, in its comments to this report, concurs with our recommendation to clarify which stealth-related items should be controlled under the USML and the CCL. 
Commerce also disagreed with our second recommendation, indicating that the executive branch has drafted an executive order that would give the relevant agencies authority to review all dual-use license applications. If implemented, this draft executive order may help improve the review of sensitive exports by DOD and State. However, this draft executive order, by itself, does not address the need to clarify jurisdiction between the CCL and the USML in light of the military significance and sensitivity of stealth-related technology and the more stringent controls under the AECA. DOD officials provided oral comments on a draft of this report. We made changes to the report as appropriate to address the technical issues they raised. To determine how control over stealth technology is split between the CCL and the USML, we reviewed the two lists and interviewed officials from State’s Office of Defense Trade Controls, Commerce’s Bureau of Export Administration, DOD’s Defense Technology Security Administration, and the Institute for Defense Analyses. To identify the impact of shared jurisdiction over stealth-related items, we reviewed the export controls established in the AECA and the EAA; obtained Commerce export licensing records on computer tape and focused our analysis on licenses processed after the CCL was restructured in 1991; examined Commerce export license application records that had export classification numbers related to stealth technology; and discussed the impacts of shared jurisdiction over stealth with defense and technical experts in DOD’s Special Programs Office, the Institute for Defense Analyses, the Defense Technology Security Administration, and officials from the MTEC group, State’s Office of Defense Trade Controls, and the Bureau of Export Administration. To assess whether current referral procedures allow DOD to review all stealth-related exports, we examined the referral histories for the stealth-related exports we identified. 
We conducted our review from June 1994 through April 1995. Our review was performed in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to other congressional committees and the Secretaries of Defense, State, and Commerce. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report were Davi M. D’Agostino, Jai Eun Lee, and David C. Trimble. The following are GAO’s comments on the Department of State’s letter dated May 1, 1995. 1. We agree that foreign availability is not relevant in determining whether an item should be controlled on the U.S. Munitions List (USML). Our statement in the draft report concerning foreign availability considerations has been deleted. 2. The report was changed to more accurately describe the Missile Technology Control Regime. 3. We made changes to the report to reflect State’s view that Commerce should not “pre-screen” export licenses and that the Missile Technology Export Control group (MTEC) should review all export licenses for Missile Technology Control Regime Annex items. 4. We added a footnote to the report to mention Enhanced Proliferation Controls Initiative referrals. The following are GAO’s comments on the Department of Commerce’s letter dated May 2, 1995. 1. We agree that the two systems are different. However, as discussed in the report, Commerce’s system is less restrictive than State’s system. This difference, as Commerce notes, is due to Commerce being responsible for regulating dual-use commodities and State regulating more sensitive military commodities. 2. 
The rationalization exercise was initiated in 1990 by President Bush to move dual-use items on the USML to the Commerce Control List (CCL), not to examine both control lists for problems of unclear or overlapping jurisdiction. Though some stealth-related commodities were examined during the course of this exercise in 1991, problems of overlapping jurisdiction remain. In addition, as noted in our report, the Department of Defense (DOD) and State officials agree that jurisdiction over stealth-related technology and commodities is ill defined and should be clarified. 3. We do not have responsibility for determining where the lines of jurisdiction between the control lists should be drawn. As we stated in our recommendation, this is the role of the Department of State in consultation with DOD and the Department of Commerce. 4. We made changes to the report to more accurately reflect Commerce’s position. 5. We do not suggest that new International Traffic in Arms Regulations controls over dual-use items be implemented. 6. Our draft report acknowledged the role of DOD in establishing referral procedures. We made changes to the final report to further clarify DOD’s role. Moreover, in comments on our draft report, State indicated that it would be preferable for Commerce to refer to State all export licenses for Missile Technology Control Regime Annex items regardless of destination. 7. We clarified our use of the term “stealth” in the final report to explain that our review focused primarily on radar cross-section reduction. Consequently, any possible overlap in export controls for other aspects of stealth technology (e.g., technologies and materials related to reducing infrared, acoustic, electromagnetic and visual signatures, and counter low-observables technologies) was not addressed in our report. 8. We made changes to the report to comply with the confidentiality concerns raised by Commerce. 9. 
Our draft report acknowledged that, because some export control classification numbers cover a broad array of items, some of the exports classified under these numbers are not related to stealth. We made changes to the final report to make this point more clearly. We would have preferred to review these applications with technical experts from DOD to determine which applications involved stealth technology. However, in our review examining missile-related exports to China, we were prevented from sharing license information with DOD for the purposes of assessing the technology in a sample of Commerce export licenses. Due to Commerce’s lengthy administrative requirements for requesting permission to share license information with DOD, we were unable to perform this detailed analysis in the timeframes of our assignment. 10. Commerce states that it has sufficient authority to deny validated license applications for products the U.S. government does not want to export. It points to regional stability controls reached with interagency consensus as examples of its use of such authority. While Commerce could take a more expansive view of its statutory charter, in practice, it has been more restrained. For example, Commerce officials told us they could not have prevented the export of radar-absorbing coatings to Germany for use on a cruise missile. 11. We made changes to the report to clarify our point that items controlled for missile technology reasons are, as a practical matter, not restricted if they are destined for other uses or for a country not considered a missile proliferation threat. 12. The report does not state that Commerce violated its referral procedures for exports going to China that are controlled for missile technology reasons. Our point is that current referral practices preclude State and DOD from seeing most Commerce license applications for export commodity classification numbers controlled for missile technology reasons. 13. 
The examples in the table are valid. The license that was returned without action was held by Commerce for 44 days before it was returned. This provided Commerce ample time to refer the case to DOD for review. The other application involved equipment used to make radar cross-section measurements—an important capability in assessing efforts to reduce the radar signature of an aircraft or missile.
Pursuant to a congressional request, GAO reviewed export controls over stealth-related commodities and technology, focusing on: (1) how control over stealth technology and related commodities is split between the Department of State's U.S. Munitions List (USML) and the Department of Commerce's Commerce Control List (CCL); (2) the impact of shared jurisdiction over stealth-related items; and (3) whether current referral procedures allow the Department of Defense (DOD) to review all stealth-related exports. GAO found that: (1) stealth technology materials fall under the jurisdiction of both USML and CCL; (2) a Commerce official believes that stealth-related commodities falling in the grey area between Commerce and State should be placed on USML to avoid confusion and possible seizure by the Customs Service; (3) the unclear jurisdiction over stealth technology may lead to the inappropriate export of militarily-sensitive stealth materials and technology; (4) the less restrictive export controls governing CCL commodities give exporters an incentive to apply for CCL export licenses for USML-covered material; (5) Commerce can deny CCL export licenses only under limited circumstances or for certain destinations, while State has broader authority to deny applications that are against national interests; and (6) the United States cannot ensure that export licenses for stealth-related technology are properly reviewed and controlled because Commerce does not refer all stealth technology export applications to DOD or State for review.
VA offers a broad array of disability benefits and health care through its Veterans Benefits Administration (VBA) and its Veterans Health Administration (VHA), respectively. VBA provides benefits and services such as disability compensation and VR&E to veterans through its 57 regional offices. The VR&E program is designed to ensure that veterans with disabilities find meaningful work and achieve maximum independence in daily living. VR&E services include vocational counseling, evaluation, and training that can include payment for tuition and other expenses for education, as well as job placement assistance. VHA manages one of the largest health care systems in the United States and provides PTSD services in its medical facilities, community settings, and Vet Centers. VA is a world leader in PTSD treatment. PTSD can result from having experienced an extremely stressful event such as the threat of death or serious injury, as happens in military combat, and is the most prevalent mental disorder resulting from combat. Servicemembers injured in Afghanistan and Iraq are surviving injuries that would have been fatal in past conflicts, due, in part, to advanced protective equipment and medical treatment. However, the severity of their injuries can result in a lengthy transition involving rehabilitation and complex assessments of their ability to function. Many also sustain psychological injuries. Mental health experts predict that, because of the intensity of warfare in Afghanistan and Iraq, 15 percent or more of the servicemembers returning from these conflicts will develop PTSD. In our January 2005 report on VA’s efforts to expedite VR&E services for seriously injured servicemembers returning from Afghanistan and Iraq, we noted that VA instructed its VBA regional offices, in a September 2003 letter, to provide priority consideration and assistance for all VA services, including health care, to these servicemembers. 
VA specifically instructed regional offices to focus on servicemembers whose disabilities will definitely or are likely to result in military separation. Because most seriously injured servicemembers are initially treated at major MTFs, VA has deployed staff to the sites where the majority of the seriously injured are treated. These staff have included VA social workers and disability compensation benefit counselors. VA has placed social workers and benefit counselors at Walter Reed and Brooke Army Medical Centers and at several other MTFs. In addition to these staff, VA has provided a vocational rehabilitation counselor to work with hospitalized patients at Walter Reed Army Medical Center, where the largest number of seriously injured servicemembers has been treated. To identify and monitor those whose injuries may result in a need for VA disability and health services, VA has asked DOD to share data about seriously injured servicemembers. VA has been working with DOD to develop a formal agreement on what specific information to share. VA requested personal identifying information, medical information, and DOD’s injury classification for each listed servicemember. VA also requested monthly lists of servicemembers being evaluated for medical separation from military service. VA officials said that systematic information from DOD would provide them with a way to more reliably identify and monitor seriously injured servicemembers. As of the end of 2004, a formal agreement with DOD was still pending. In the absence of a formal arrangement for DOD data on seriously injured servicemembers, VA has relied on its regional offices to obtain information about them. In its September 2003 letter, VA asked the regional offices to coordinate with staff at MTFs and VA medical centers in their areas to ascertain the identities, medical conditions, and military status of the seriously injured. 
In regard to psychological injuries, our September 2004 report noted that mental health experts have recognized the importance of early identification and treatment of PTSD. VA and DOD jointly developed a clinical practice guideline for identifying and treating individuals with PTSD. The guideline includes a four-question screening tool to identify servicemembers and veterans who may be at risk for PTSD. VA uses these questions to screen all veterans who visit VA for health care, including those previously deployed to Afghanistan and Iraq. The screening tool asks: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you (1) have had any nightmares about it or thought about it when you did not want to; (2) tried hard not to think about it or went out of your way to avoid situations that remind you of it; (3) were constantly on guard, watchful, or easily startled; or (4) felt numb or detached from others, activities, or your surroundings? DOD is also using these four questions in its post-deployment health assessment questionnaire (form DD 2796) to identify servicemembers at risk for PTSD. DOD requires the questionnaire be completed by all servicemembers, including Reserve and National Guard members, returning from a combat theater and is planning to conduct follow-up screenings within 6 months after return. VA faces significant challenges in providing services to servicemembers who have sustained serious physical and psychological injuries. For example, in providing VR&E services, individual differences and uncertainties in the recovery process make it inherently difficult to determine when a seriously injured servicemember will be most receptive to assistance. The nature of the recovery process is highly individualized and depends to a large extent on the individual’s medical condition and personal readiness. 
Consequently, VA professionals exercise judgment to determine when to contact the seriously injured and when to begin services. In our January 2005 report on VA’s efforts to expedite VR&E services to seriously injured servicemembers, we noted that many need time to recover and adjust to the prospect that they may be unable to remain in the military and will need to prepare instead for civilian employment. Yet we found that VA has no policy for maintaining contact with those servicemembers who may not apply for VR&E services prior to discharge from the hospital. As a result, several regional offices reported that they do not stay in contact with these individuals, while others use various ways to maintain contact. VA is also challenged by DOD’s concern that outreach about VA benefits could work at cross purposes to military retention goals. In our January 2005 report, we stated that DOD expressed concern about the timing of VA’s outreach to servicemembers whose discharge from military service is not yet certain. To expedite VR&E services, VA’s outreach process may overlap with the military’s process for evaluating servicemembers who may be able to return to duty. According to DOD officials, it may be premature for VA to begin working with injured servicemembers who may eventually return to active duty. With advances in medicine and prosthetic devices, many serious injuries no longer result in work-related impairments. Army officials who track injured servicemembers told us that many seriously injured servicemembers overcome their injuries and return to active duty. Further, VA is challenged by the lack of access to systematic data regarding seriously injured servicemembers. In the absence of a formal information-sharing agreement with DOD, VA does not have systematic access to DOD data about the population who may need its services. 
Specifically, VA cannot reliably identify all seriously injured servicemembers or know with certainty when they are medically stabilized, when they are undergoing evaluation for a medical discharge, or when they are actually medically discharged from the military. VA has instead had to rely on ad hoc regional office arrangements at the local level to identify and obtain specific data about seriously injured servicemembers. While regional office staff generally expressed confidence that the information sources they developed enabled them to identify most seriously injured servicemembers, they have no official data source from DOD with which to confirm the completeness and reliability of their data, nor can they provide reasonable assurance that some seriously injured servicemembers have not been overlooked. In addition, informal data-sharing relationships could break down with changes in personnel at either the MTF or the regional office. In our review of 12 regional offices, we found that they have developed different information sources, resulting in varying levels of information. The nature of the local relationships between VA staff and military staff at MTFs was a key factor in the completeness and reliability of the information the military provided. For example, the MTF staff at one regional office provided VA staff with only the names of new patients and no indication of the severity of their condition or the theater from which they were returning. Another regional office reported receiving lists of servicemembers for whom the Army had initiated a medical separation in addition to lists of patients with information on the severity of their injuries. Some regional offices were able to capitalize on long-standing informal relationships. For example, the VA coordinator responsible for identifying and monitoring the seriously injured at one regional office had served as an Army nurse at the local MTF and was provided all pertinent information. 
In contrast, staff at another regional office reported that local military staff did not until recently provide them with any information on seriously injured servicemembers admitted to the MTF. DOD officials expressed their concerns about the type of information to be shared and when the information would be shared. DOD noted that it needed to comply with legal privacy rules on sharing individual patient information. DOD officials told us that information could be made available to VA upon separation from military service, that is, when a servicemember enters the separation process. However, prior to separation, information can only be provided under certain circumstances, such as when a patient’s authorization is obtained. Based on our review of VA’s efforts to expedite VR&E services to seriously injured servicemembers, we recommended that VA and DOD collaborate to reach an agreement for VA to have access to information that both agencies agree is needed to promote recovery and return to work for seriously injured servicemembers. We also recommended that VA develop policy and procedures for regional offices to maintain contact with seriously injured servicemembers who do not initially apply for VR&E services. VA and DOD generally concurred with our recommendations. VA also told us that its follow-up policies and procedures include sending veterans information on VR&E benefits upon notification of disability compensation award and 60 days later. However, we believe a more individualized approach, such as maintaining personal contact, could better ensure the opportunity for veterans to participate in the program when they are ready. In dealing with psychological injuries such as PTSD, VA also faces challenges in providing services. Specifically, the inherent uncertainty of the onset of PTSD symptoms poses a challenge because symptoms may be delayed for years after the stressful event. 
Symptoms include insomnia, intense anxiety, nightmares about the event, and difficulties coping with work, family, and social relationships. Although there is no cure for PTSD, experts believe that early identification and treatment of PTSD symptoms may lessen the severity of the condition and improve the overall quality of life for servicemembers and veterans. If left untreated, PTSD can lead to substance abuse, severe depression, and suicide. Another challenge VA faces in dealing with veterans with PTSD is the lack of accurate data on its PTSD workload. Inaccurate data limit VA’s ability to estimate its capacity for treating additional veterans and to plan for an increased demand for these services. For example, we noted in our September 2004 report that VA publishes two reports that include information on veterans receiving PTSD services at its medical facilities. However, neither report includes all the veterans receiving PTSD services. We found that veterans may be double counted in these two reports, counted in only one report, or omitted from both reports. Moreover, the VA Office of Inspector General found that the data in VA’s annual capacity report, which includes information on veterans receiving PTSD services, are not accurate. Thus, VA does not have an accurate count of the number of veterans being treated for PTSD. In our September 2004 report, we recommended that VA determine the total number of veterans receiving PTSD services and provide facility-specific information to VA medical centers. VA concurred with our recommendation and later provided us with information on the number of Operation Enduring Freedom and Operation Iraqi Freedom veterans who have accessed VA services in its medical centers, as well as its Vet Centers. However, VA acknowledged that its ability to estimate workload demand and resource readiness remains limited. 
VA stated that the provision of basic post-deployment health data from DOD to VA would better enable VA to provide health care to individual veterans and help it better understand and plan for the health problems of servicemembers returning from Afghanistan and Iraq. In February 2005, we reported on recommendations made by VA’s Special Committee on PTSD; some of the recommendations were long-standing. We recommended that VA prioritize implementation of those recommendations that would improve PTSD services. VA disagreed with our recommendation and stated that the report failed to address the many efforts the agency had undertaken to improve the care delivered to veterans with PTSD. We believe our report appropriately raised questions about VA’s capacity to meet veterans’ needs for PTSD services. We noted that, given VA’s outreach efforts, expanded access to VA health care for many new combat veterans, and the large number of servicemembers returning from Afghanistan and Iraq who may seek PTSD services, it is critical that VA’s PTSD services be available when servicemembers return from military combat. VA has taken steps to help the nation’s newest generation of veterans, those who returned seriously injured from Afghanistan and Iraq, move forward with their lives, particularly those with disabling physical injuries. While physical injuries may be more apparent, psychological injuries, although not visible, are also debilitating. VA has made seriously injured servicemembers and veterans a priority, but faces challenges in providing services to both the physically and psychologically injured. For example, VA must balance effective outreach against the risk of an approach that could be viewed as intrusive. Moreover, overcoming these challenges requires VA and DOD to work more closely to identify those who need services and to share data about them so that seriously injured servicemembers and veterans receive the care they need. Mr. 
Chairman, this concludes my prepared remarks. I will be happy to answer any questions that you or Members of the Committee might have. For further information, please contact Cynthia A. Bascetta at (202) 512-7101. Also contributing to this statement were Irene Chu, Linda Diggs, Martha A. Fisher, Lori Fritz, and Janet Overton.
VA Health Care: VA Should Expedite the Implementation of Recommendations Needed to Improve Post-Traumatic Stress Disorder Services. GAO-05-287. Washington, D.C.: February 14, 2005.
Vocational Rehabilitation: More VA and DOD Collaboration Needed to Expedite Services for Seriously Injured Servicemembers. GAO-05-167. Washington, D.C.: January 14, 2005.
VA and Defense Health Care: More Information Needed to Determine if VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004.
VA Vocational Rehabilitation and Employment Program: GAO Comments on Key Task Force Findings and Recommendations. GAO-04-853. Washington, D.C.: June 15, 2004.
Defense Health Care: DOD Needs to Improve Force Health Protection and Surveillance Processes. GAO-04-158T. Washington, D.C.: October 16, 2003.
Defense Health Care: Quality Assurance Process Needed to Improve Force Health Protection and Surveillance. GAO-03-1041. Washington, D.C.: September 19, 2003.
VA Benefits: Fundamental Changes to VA’s Disability Criteria Need Careful Consideration. GAO-03-1172T. Washington, D.C.: September 23, 2003.
High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 1, 2003.
Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003.
SSA and VA Disability Programs: Re-Examination of Disability Criteria Needed to Help Ensure Program Integrity. GAO-02-597. Washington, D.C.: August 9, 2002.
Military and Veterans’ Benefits: Observations on the Transition Assistance Program. GAO-02-914T. Washington, D.C.: July 18, 2002. 
Disabled Veterans’ Care: Better Data and More Accountability Needed to Adequately Assess Care. GAO/HEHS-00-57. Washington, D.C.: April 21, 2000.
More than 10,000 U.S. military servicemembers, including members of the National Guard and Reserve, have been injured in the conflicts in Afghanistan and Iraq. Those with serious physical and psychological injuries are initially treated at the Department of Defense's (DOD) major military treatment facilities (MTF). The Department of Veterans Affairs (VA) has made provision of services to these servicemembers a high priority. This testimony focuses on the steps VA has taken and the challenges it faces in providing services to the seriously injured and highlights findings from three recent GAO reports that addressed VA's efforts to provide services to the seriously injured. These services include vocational rehabilitation and employment (VR&E) and health care for those with post-traumatic stress disorder (PTSD). VA has taken steps to provide services as a high priority to seriously injured servicemembers returning from Afghanistan and Iraq. To identify and monitor those who may require VA's services, VA and DOD are working on a formal agreement to share data about servicemembers with serious injuries. Meanwhile, VA has relied on its regional offices to coordinate with staff at MTFs and VA medical centers to learn the identities, medical conditions, and military status of seriously injured servicemembers. For servicemembers with PTSD, VA has taken steps to improve care including developing with DOD a clinical practice guideline for identifying and treating individuals with PTSD. The guideline contains a four-question screening tool, which both VA and DOD use to identify those who may be at risk for PTSD. VA faces significant challenges in providing services to seriously injured servicemembers. 
For example, the individualized nature of recovery makes it difficult to determine when a seriously injured servicemember will be ready for vocational rehabilitation, and DOD has expressed concern that VA's outreach to servicemembers could affect retention for those whose discharge from military service is uncertain. VA is also challenged by the lack of access to DOD data; although VA staff have developed ad hoc arrangements, such informal agreements can break down. Regarding PTSD, inaccurate data limit VA's ability to estimate its capacity for treating additional veterans and to plan for an increased demand for these services.
The Bureau puts forth tremendous effort to conduct a complete and accurate count of the nation’s population. However, some degree of error in the form of persons missed or counted more than once is inevitable because of limitations in census-taking methods. Because census results are used, among other purposes, to apportion Congress, redraw congressional districts, and allocate federal aid to state and local governments, the size and demographic composition of these coverage errors have become an increasingly sensitive issue since the Bureau was first able to generate detailed data on them during the 1980 Census. However, the Bureau has never used the results of its coverage measurements to correct estimated coverage errors. The Bureau first attempted to measure the accuracy of the census in the 1940s when it compared the census numbers to birth and death certificates and other administrative data using a procedure called demographic analysis. Modern coverage measurement began with the 1980 Census when the Bureau compared census figures to the results of an independent sample survey of the population. Using statistical methods, the Bureau generated detailed measures of the differences among undercounts of particular ethnic, racial, and other groups. In the months that followed, many lawsuits were filed, most contending that the results of the 1980 coverage measurement should have been used to adjust the census. However, the Bureau designed the evaluation to measure errors, not to correct the census results, and the Director of the Census Bureau decided against adopting the adjusted numbers, as they were deemed flawed due to missing and inaccurate data. The quality of the coverage measurement data improved for the 1990 Census, and the Bureau recommended statistically adjusting the results. However, the Secretary of Commerce determined that the evidence to support an adjustment was inconclusive and decided not to adjust the 1990 Census. 
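Demographic analysis, described above, rests on a simple population accounting identity: a benchmark built from administrative records can be compared against the census count to gauge net coverage error. The sketch below is a toy illustration with hypothetical figures; the Bureau's actual method is far more detailed, building separate estimates by age, sex, and race from birth, death, Medicare, and immigration records.

```python
def demographic_benchmark(base_population, births, deaths, net_immigration):
    """Benchmark population estimate from administrative records:
    base population, plus births, minus deaths, plus net immigration.

    A large gap between this benchmark and the census count signals a
    possible net undercount (or overcount) in the census.
    """
    return base_population + births - deaths + net_immigration

# Hypothetical figures, in millions (not actual Bureau data):
benchmark = demographic_benchmark(249.0, 40.0, 23.0, 8.0)  # 274.0
census_count = 272.0
net_coverage_error = benchmark - census_count  # implied net undercount of 2.0
```

Because the benchmark is only as good as the administrative records behind it, a discrepancy with the census (as occurred in 2001 with A.C.E.) can indicate an unidentified error on either side.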
The adjustment decision was complicated by the fact that the 1990 Census figures had already been released when the coverage measurement results became available in the spring of 1991. The Secretary of Commerce was concerned that two sets of numbers—the actual census results and the adjusted figures—could create confusion and might allow political considerations to play a part in choosing between sets of numbers when the outcome of the choices, such as congressional apportionment, could be known in advance of a decision. To determine the objectives of 2000 Census I.C.M./A.C.E. programs and their results, we reviewed Bureau and other documents that included Federal Register notices; Census Operational Plans; reports to Congress; internal memorandums; research and feasibility studies; and reports of the Executive Steering Committee for Accuracy and Coverage Policy (ESCAP) I and II, which assessed the results of the A.C.E. program and recommended how they should be used. To determine costs for consultants and technical studies for 2000 Census I.C.M./A.C.E. programs, we focused on object class code 25 from the financial management reports to obtain contract data. With Bureau assistance, we identified I.C.M./A.C.E. project accounts and analyzed amounts by fiscal year using the financial management reports generated by the Commerce Administrative Management System (CAMS). We reviewed and analyzed obligated and expended data for all coverage measurement programs that existed during the 2000 Census for fiscal years 1991 to 2003. We did not audit financial data provided by the Bureau. To determine ways to track future costs, we reviewed current Bureau financial management reports and considered established standards of accounting, auditing, and internal controls. In addition, we met with key Bureau officials to discuss the results of our analysis and obtain their observations and perspectives. 
The limitations we encountered in the scope of our work on this assignment are as follows. We were unable to determine the complete contractual and technical studies costs of the I.C.M./A.C.E. programs because the Bureau considered any I.C.M./A.C.E.-related costs from fiscal years 1991 through 1995 as part of its general research and development programs and thus did not separately track these costs. Although some costs were tracked in fiscal year 1996, the Bureau still considered these costs as research and development and did not include these costs as I.C.M./A.C.E. program costs. We were unable to identify I.C.M./A.C.E. portions of costs from projects that covered the entire census, such as the 2000 Census Evaluation program. We did not evaluate the propriety of contracts for I.C.M./A.C.E. programs. Our work was performed in Washington, D.C., and at U.S. Census Bureau headquarters in Suitland, Maryland, from June 2002 through October 2002 in accordance with generally accepted government auditing standards. On January 7, 2003, the Secretary of Commerce provided written comments on a draft of this report. We address these comments in the “Agency Comments and Our Evaluation” section, and have reprinted them in appendix I. In planning the 2000 Census, the Bureau developed a new coverage measurement program, I.C.M., that was designed to address the major shortcomings of the 1990 coverage measurement program. However, as shown in table 1, much like similar programs in earlier censuses, the Bureau did not use I.C.M. and its successor program, A.C.E., to adjust the census because of legal challenges, technical obstacles, and the inability to resolve uncertainties in the data in time to meet the deadlines for releasing the data. In designing I.C.M., the Bureau’s goal was to produce a single, consolidated count or “one-number” census and thus avoid the controversy of having two sets of census results as occurred during the 1990 Census. 
Thus, as shown in table 1, the objectives of I.C.M. were to (1) measure census coverage, (2) generate, using statistical sampling and estimation methods, the detailed data required for apportionment, congressional redistricting, and federal program purposes, and (3) produce a one-number census. The Bureau’s plans for I.C.M. emerged in response to the unsatisfactory results of the 1990 Census. Although the 1990 headcount was, at that time, the most costly in U.S. history, it produced data that were less accurate than those from the 1980 Census. The disappointing outcome was due in large part to the Bureau’s efforts to count housing units that did not mail back their census questionnaires. The operation, known as nonresponse follow-up, in which enumerators visited and collected information from each nonresponding housing unit, proved to be costly and error-prone when a higher-than-expected workload and a shortage of enumerators caused the operation to fall behind schedule. The final stages of nonresponse follow-up were particularly problematic. Indeed, while enumerators finished 90 percent of the follow-up workload within 8 weeks (2 weeks behind schedule), it took another 6 weeks to resolve the remaining 10 percent. Moreover, in trying to complete the last portion of nonresponse follow-up cases, the Bureau accepted less complete responses and information from nonhousehold members such as neighbors, which may have reduced the quality of the data. In the years following the 1990 Census, Congress, the Bureau, several organizations, and GAO concluded that fundamental design changes were needed to reduce census costs and improve the quality of the data. In response, the Bureau reengineered a number of operations for the 2000 Census. 
For example, to save time and reduce its nonresponse follow-up workload, the Bureau planned to enumerate a sample of the last remaining portion of nonresponse follow-up cases instead of visiting every nonresponding household as it had done in previous censuses. To adjust for enumeration errors, the Bureau developed I.C.M., which was intended to reconcile the original census figures with data obtained from a separate, independent count of a sample of 750,000 housing units using a statistical process called Dual System Estimation. The Bureau believed that this approach offered the best combination of reduced costs, improved accuracy expected at various geographic levels, and operational feasibility. However, concerned about the legality of the Bureau’s planned use of sampling and estimation, members of Congress challenged the Bureau’s use of I.C.M. in court. In January 1999, the Supreme Court ruled that the Census Act prohibited the use of statistical sampling to generate population data for reapportioning the House of Representatives. Following the Supreme Court ruling, the Bureau planned to produce apportionment numbers using traditional census-taking methods, and provide statistically adjusted numbers for nonapportionment uses of the data such as congressional redistricting and allocating federal funds. The Bureau initiated the A.C.E. program, which was designed to take a national sample of approximately 300,000 housing units to evaluate coverage errors among different population groups and statistically correct for them. Thus, as shown in table 1, the Bureau’s objectives for A.C.E. were to (1) measure how many people were missed in the census and how many were erroneously included and (2) produce the detailed data required in time for redistricting and federal program purposes. However, while the Bureau generally conducted A.C.E. in accordance with its plans, the Bureau later determined that the A.C.E. 
results did not provide a reliable measure of census accuracy and could not be used to adjust the nonapportionment census data. The first decision against A.C.E. occurred in March 2001, when the Acting Director of the Census Bureau recommended to the Secretary of Commerce that the unadjusted census data be used for redistricting purposes. He cited as a primary reason an apparent inconsistency between the population growth over the prior decade, as implied by A.C.E. results, and demographic analysis, which estimated the population using birth, death, and other administrative records. The inconsistency raised the possibility of an unidentified error in either the A.C.E. or census numbers. He reported that the inconsistency could not be resolved prior to April 1, 2001, the legally mandated deadline for releasing redistricting data. The second decision against A.C.E. came in October 2001 when, based on a large body of additional research, ESCAP decided against adjusting census data for allocating federal aid and other purposes, because A.C.E. failed to identify a significant number of people erroneously included in the census, and other remaining uncertainties. According to Bureau officials, it might be possible to use adjusted data to produce intercensal population estimates for federal programs that require this information; however, the Bureau would need to revise the A.C.E. results before any use of the data could be considered. Although I.C.M. and A.C.E. did not meet their formal objectives, they did produce a body of important lessons learned. As the Bureau’s current approach for the 2010 Census includes coverage measurement to assess the accuracy of the census (but not necessarily to adjust the numbers themselves), it will be important for the Bureau to consider these lessons as its planning efforts continue. 
The lessons include (1) developing a coverage measurement methodology that is both technically and operationally feasible, (2) determining the level of geography at which coverage is to be measured, (3) keeping stakeholders, particularly Congress, informed of the Bureau’s plans, and (4) adequately testing the eventual coverage measurement program. 1. A.C.E. demonstrated operational, but not technical, feasibility. According to Bureau officials, an important result of the A.C.E. program was that it demonstrated, from an operational perspective only, the feasibility of conducting a large independent field check on the quality of the census. The Bureau canvassed the entire A.C.E. sample area to develop an address list, collected census response data for persons living in the sample areas on census day, and conducted an operation to try to match A.C.E. respondents to census respondents, all independent of the regular census operations and within required time frames. Our separate reviews of two of these operations—interviewing respondents and matching A.C.E. and census data—raised questions about the impact of apparently small operational deviations on final A.C.E. results, but also concluded that the Bureau implemented those two operations largely as planned. Nevertheless, while the Bureau demonstrated that it could execute A.C.E. field operations using available resources within required time frames, as the Bureau has noted, feasibility also has a technical component—that is, whether the A.C.E. methodology would improve the accuracy of the census. Although the Bureau clearly stated in its justification for A.C.E. that the effort would make the census more accurate, as noted earlier, because of unresolved data discrepancies, its experience in 2000 proved otherwise. Moreover, according to the Bureau, because the A.C.E. 
was designed to correct a census with a net coverage error similar to that observed in previous censuses, applying the methodology to the historically low levels of net error observed in the 2000 Census represented a unique and unanticipated challenge for A.C.E. Thus, it will be important for the Bureau to refine its coverage measurement methodology to ensure that it is technically feasible. 2. The level of geography at which the Bureau can successfully measure coverage is unclear. Since the October 2001 decision not to rely on adjusted census data for nonapportionment and nonredistricting purposes, Bureau officials have told us that they now doubt whether census data can reliably be improved down to the level of geography at which A.C.E. was intended to improve accuracy—the census tract level (neighborhoods that typically contain around 1,700 housing units and 4,000 people). The Bureau’s current position differs from that taken in 2000, when it reported to Congress that it expected accuracy at the tract level to be improved, on average, by A.C.E. statistically adjusting numbers at an even lower level of geography—the census block level. Uncertainty about the level of geography at which accuracy is to be measured or improved can affect the overall design of coverage measurement, as well as its technical feasibility. Therefore, it will be important for the Bureau to determine the level of geography at which it intends to measure accuracy as it decides the role and design of future coverage measurement programs. 3. Keeping stakeholders informed is essential. Throughout the 1990s, Congress and other stakeholders, including GAO, expressed concerns about the Bureau’s planned use of sampling and statistical estimation procedures to adjust the census. A key cause of this skepticism was the Bureau’s failure to provide sufficiently detailed data on the effects that I.C.M. would have at different levels of geographic detail. 
Information was also lacking on the various design alternatives being considered, their likely implications, and the basis for certain decisions. As a result, it was difficult for Congress and other stakeholders to support the Bureau’s coverage measurement initiatives. For example, on September 24, 1996, the House Committee on Government Reform and Oversight issued a report that criticized the Bureau’s initiatives for sampling and statistical estimation. Among other things, the Committee found that the Bureau had not clarified issues of accuracy, particularly for small geographic areas, raised by the sampling initiative. Congress’s perspective on the process was later reflected in its enactment of legislation in 1997 that included provisions requiring the Department of Commerce to provide Congress with comprehensive information on its planned use of statistical estimation within 30 days. 4. Adequate testing of coverage measurement methodologies is critical. Although the Bureau conducted a dress rehearsal for the census in three locations across the country that was intended to demonstrate the overall design of the census, the 1998 operation did not reveal the problems that the Bureau encountered in dealing with the discrepancies between the 2000 A.C.E. results and its benchmarks. According to Bureau officials, this was partly because the sites were not representative of the nation at large. Additionally, as a result of a compromise between Congress and the administration to simultaneously prepare for a nonsampling census, the I.C.M. was tested at only two of the three dress rehearsal sites—an urban area and an Indian reservation—but was not tested in a rural location as was originally planned. An earlier test in 1995 was also not comprehensive in that it did not test a sampling operation designed to help determine whether nonresponse follow-up of the magnitude projected by the Bureau’s current plan could be completed in time for the I.C.M. to be done on schedule. 
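The Dual System Estimation process mentioned earlier is, at bottom, a capture-recapture calculation: the share of independent-survey respondents who can be matched to a census record estimates the census coverage rate. The sketch below is a minimal illustration with hypothetical numbers, not actual A.C.E. figures or the Bureau's production methodology, which applied such estimates within post-strata (population groups defined by demographic and geographic characteristics) to derive adjustment factors.

```python
def dual_system_estimate(census_count, survey_count, matched_count):
    """Capture-recapture estimate of the true population.

    Assumes the census and the independent coverage survey count people
    independently, so matched_count / survey_count estimates the census
    coverage rate, and the true population is census_count divided by
    that rate -- equivalently, census_count * survey_count / matched_count.
    """
    if matched_count == 0:
        raise ValueError("no matched persons: estimate is undefined")
    return census_count * survey_count / matched_count

# Hypothetical numbers: the census counted 950 people in a sample area,
# the independent survey counted 900, and 880 survey persons matched
# census records. Implied coverage rate: 880 / 900, about 97.8 percent,
# so the estimated true population is roughly 972.
estimate = dual_system_estimate(950, 900, 880)
```

The estimator's independence assumption is one reason correlation between census and survey errors (for example, people hard to reach by both operations) complicates coverage measurement in practice.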
From fiscal year 1996 through fiscal year 2001, the Bureau obligated about $207 million for I.C.M./A.C.E. activities. As shown in table 2, of that $207 million, we identified about $22.3 million (11 percent) in obligated amounts for contracts involving more than 170 vendors. These contracts were primarily for technical advisory and assistance services, computer systems support, and training. Although the Bureau tracked some costs of contracts for the I.C.M./A.C.E. programs, we found that the $22.3 million did not represent the complete contractor costs of the programs because of the following three factors. First, the Bureau only tracked the contractor costs associated with conducting the I.C.M./A.C.E. programs, which covered the period from fiscal year 1997 through 2003. Although life cycle costs for the 2000 Census cover a 13-year period from fiscal years 1991 through 2003, senior Bureau officials said that the I.C.M./A.C.E. program was not viable for implementation until fiscal year 1997. Therefore, the Bureau considered contractor costs from earlier years as part of its general research and development programs, and the Bureau did not assign unique project codes to identify I.C.M./A.C.E. programs and related costs in its financial management system. Second, although $182,000 of fiscal year 1996 obligated contractor costs were identifiable in the Bureau’s financial management system as an I.C.M. special test, the Bureau did not consider these costs as part of the I.C.M./A.C.E. programs. Instead, these costs were considered general research and development. However, because the Bureau separately identified these costs as I.C.M. program contractor costs, we have included the $182,000 as part of the I.C.M./A.C.E. program contractor costs in this report. Finally, we were unable to identify the I.C.M./A.C.E. portions of costs that were part of other programs. For example, in late fiscal year 2000 and after, the Bureau did not separate A.C.E. 
evaluations from its other 2000 Census evaluations in its financial management systems. Bureau officials stated that the contracts for evaluations included overall 2000 Census and A.C.E. evaluations, and did not have a separate code identifying A.C.E. costs. During the 2000 Census, the Bureau, its auditors, and GAO found extensive weaknesses in the Bureau’s financial management system, the components of which include hardware, software, and associated personnel. The weaknesses included difficulties in providing reliable and timely financial information to manage current government operations and in preparing financial statements and other reports. Together, they affected the completeness, accuracy, and timeliness of data needed for informed management decisions and effective oversight. In light of these weaknesses, the Bureau’s ability to track future costs of coverage measurement activities will largely depend on three factors. First, a sound financial management system is critical. As discussed in our December 2001 report, the Bureau’s core financial management system, CAMS, had persistent internal control weaknesses in fiscal year 2000. In its latest financial report, the Bureau indicated that these weaknesses continued through fiscal year 2001. The Bureau expects to issue its fiscal year 2002 financial report shortly. Second, it would be important to set up project codes to capture coverage measurement activities as early in the planning process as possible. The Bureau did not set up a specific project code to identify I.C.M. program costs until 1996 because, according to Bureau officials, the I.C.M. program was not viable until 1997 and all costs up to that point were considered general research. Finally, it would be important for Bureau personnel to correctly charge the project codes established for coverage measurement program activities. 
During the 2000 Census, for example, while the Bureau established a project code and a budget for the remote Alaska enumeration, the project costs were erroneously charged to and commingled with a project code for enumerating special populations. As a result, the actual costs for remote Alaska enumeration were reported by the Bureau’s financial management system as zero and are unknown, while enumerating special population costs are overstated. The Bureau’s 2000 Census coverage measurement programs did not achieve their primary objectives of measuring the accuracy of the census and adjusting the results because of legal challenges, technical hurdles, and questionable data. However, beyond these formal objectives, there emerged several important lessons learned that Bureau managers should consider because current plans for the 2010 Census include coverage measurement. At the same time, it will also be important for the Bureau to be capable of fully tracking the money it spends on coverage measurement and other census activities so that Congress and other stakeholders can hold the Bureau accountable for achieving intended results. Although the Bureau has never used the results of its coverage measurement programs to adjust census numbers, we believe that an evaluation of the accuracy and completeness of the census is critical given the many uses of census data, the importance of identifying the magnitude and characteristics of any under- and overcounts, and the cost of the census overall. Less clear is whether the results of the coverage measurement should be used to adjust the census. Any Bureau decisions on this matter should involve close consultation with Congress and other stakeholders, and be based on detailed data and a convincing demonstration of the feasibility of the Bureau's proposed approach. 
Whatever the decision, it is imperative that it be made soon so that the Bureau can design appropriate procedures and concentrate on the business of counting the nation’s population. The longer the 2010 planning process proceeds without a firm decision on the role of coverage measurement, the greater the risk of wasted resources and disappointing results. To help ensure that any future coverage measurement efforts achieve their intended objectives and that costs can be properly tracked, we recommend that the Secretary of Commerce direct the Bureau to (1) in conjunction with Congress and other stakeholders, come to a decision soon on whether and how coverage measurement will be used in the 2010 Census; (2) consider incorporating lessons learned from its coverage measurement experience during the 2000 Census, such as demonstrating both the operational and technical feasibility of its coverage measurement methods, determining the level of geography at which coverage can be reliably measured, keeping Congress and other stakeholders informed of its plans, and adequately testing coverage measurement prior to full implementation; and (3) ensure that the Bureau’s financial management systems can capture and report program activities early in the decennial process and that project costs are monitored for accuracy and completeness. The Secretary of Commerce forwarded written comments from the Census Bureau on a draft of this report, which are reprinted in appendix I. The Bureau agreed with our recommendations highlighting the steps that should be followed in the development of a coverage measurement methodology for the 2010 Census and acknowledged their importance. 
However, the Bureau maintained that it followed most of these steps for the 2000 Census, including (1) keeping stakeholders, particularly Congress, informed of the Bureau’s plans, (2) determining the level of geography at which coverage measurement is intended, and (3) adequately testing coverage measurement methodologies. The Bureau also maintained that throughout the 1990s, it had an open and transparent process for implementing the coverage measurement program, including the levels of geography to which its results would be applied. We disagree. As we stated in our report, the Bureau’s failure to provide important information was a key cause of congressional skepticism over the Bureau’s coverage measurement plans. In fact, Congress was so concerned about the lack of comprehensive information on the Bureau’s proposed approach that in July 1997, it passed a law that included provisions requiring the Department of Commerce to provide, within 30 days, detailed data on the Bureau’s planned use of statistical estimation. We revised the report to include this point and to provide other examples to further support our position that the Bureau’s I.C.M. and A.C.E. planning and development processes were less than fully open and transparent. The Bureau also commented that each major component of the I.C.M./A.C.E. program underwent “rigorous” testing in the middle of the decade as well as during the dress rehearsal for the 2000 Census held in 1998. We believe this overstates what actually occurred. As we noted in the report, the dress rehearsal failed to detect the problems that A.C.E. encountered during the 2000 Census because the sites were not representative of the nation. Additionally, because of an agreement between Congress and the administration to simultaneously prepare for a census that did not include sampling, the I.C.M. 
was only tested at two of the three dress rehearsal sites—an urban area and an Indian reservation—but was not tested in a rural location as was originally planned. We made this and other revisions to strengthen our point. Because the A.C.E. was designed to correct a census with a net coverage error similar to that observed in previous censuses, the Bureau commented that applying the methodology to the historically low levels of net error observed in the 2000 Census represented a unique and unexpected challenge for A.C.E. We revised the report to reflect this additional context. The Bureau took exception to the way we presented our conclusions concerning its ability to properly classify certain costs associated with the development of the Bureau’s coverage measurement programs. The Bureau noted that it decided not to separately track coverage measurement development costs in 1994 because there was no internal or external request for a separate cost accounting of the program. Our report does not make interpretive conclusions or qualitative judgments about which coverage measurement program costs the Bureau decided to track. Instead, the report (1) points out that we could not identify all of the contractor costs associated with the I.C.M./A.C.E. programs because of the three factors described in the report, and (2) underscores the importance of a sound financial management system for tracking planning and development costs for the 2010 Census. We are sending copies of this report to other interested congressional committees, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s home page at http://www.gao.gov. Please contact Patricia A. Dalton on (202) 512-6806 or by E-mail at daltonp@gao.gov if you have any questions. 
Other key contributors to this report were Robert Goldenkoff, Roger Stoltz, Carolyn Samuels, Cindy Brown-Barnes, Ty Mitchell, and Linda Brigham.

2000 Census: Complete Costs of Coverage Evaluation Programs Are Not Available. GAO-03-41. Washington, D.C.: October 31, 2002.
2000 Census: Lessons Learned for Planning a More Cost-Effective 2010 Census. GAO-03-40. Washington, D.C.: October 31, 2002.
2000 Census: Refinements to Full Count Review Program Could Improve Future Data Quality. GAO-02-562. Washington, D.C.: July 3, 2002.
2000 Census: Coverage Evaluation Matching Implemented as Planned, but Census Bureau Should Evaluate Lessons Learned. GAO-02-297. Washington, D.C.: March 14, 2002.
2000 Census: Best Practices and Lessons Learned for More Cost-Effective Nonresponse Follow-up. GAO-02-196. Washington, D.C.: February 11, 2002.
2000 Census: Coverage Evaluation Interviewing Overcame Challenges, but Further Research Needed. GAO-02-26. Washington, D.C.: December 31, 2001.
2000 Census: Analysis of Fiscal Year 2000 Budget and Internal Control Weaknesses at the U.S. Census Bureau. GAO-02-30. Washington, D.C.: December 28, 2001.
2000 Census: Significant Increase in Cost Per Housing Unit Compared to 1990 Census. GAO-02-31. Washington, D.C.: December 11, 2001.
2000 Census: Better Productivity Data Needed for Future Planning and Budgeting. GAO-02-4. Washington, D.C.: October 4, 2001.
2000 Census: Review of Partnership Program Highlights Best Practices for Future Operations. GAO-01-579. Washington, D.C.: August 20, 2001.
Decennial Censuses: Historical Data on Enumerator Productivity Are Limited. GAO-01-208R. Washington, D.C.: January 5, 2001.
2000 Census: Information on Short- and Long-Form Response Rates. GAO/GGD-00-127R. Washington, D.C.: June 7, 2000. 
To help measure the quality of the 2000 Census and to possibly adjust for any errors, the U.S. Census Bureau (Bureau) conducted the Accuracy and Coverage Evaluation (A.C.E.) program. However, after obligating around $207 million for A.C.E. and its predecessor program, Integrated Coverage Measurement (I.C.M.), from fiscal years 1996 through 2001, the Bureau did not use either program to adjust the census numbers. Concerned about the amount of money the Bureau spent on the I.C.M. and A.C.E. programs and what was produced in return, the subcommittee asked us to review the objectives and results of the programs, the costs of consultants, and how best to track future coverage measurement activities. The two programs the Bureau employed to measure the quality of the 2000 Census population data did not meet their objectives. However, the A.C.E. program yielded results beyond its formal objectives that highlight important lessons learned. They include (1) developing a coverage measurement methodology that is both operationally and technically feasible, (2) determining the level of geography at which coverage measurement is intended, (3) keeping stakeholders, particularly Congress, informed of the Bureau's plans, and (4) adequately testing coverage measurement methodologies. It will be important for the Bureau to consider these lessons as its current plans for the 2010 Census include coverage evaluation to measure the accuracy of the census but not necessarily to adjust the results. Of the roughly $207 million the Bureau obligated for the I.C.M./A.C.E. programs from fiscal years 1996 through 2001, we identified about $22.3 million that was obligated for contracts involving over 170 vendors. We could not identify any obligations prior to 1996 in part because the Bureau included them with its general research and development efforts and did not assign the I.C.M./A.C.E. operations unique project codes in its financial management system. 
To track these costs in the future, it will be important for the Bureau to (1) have a financial management system that has specific project codes to capture coverage measurement costs, (2) establish the project codes as early in the planning process as possible, and (3) monitor the usage of the codes to ensure that they are properly charged.
Iraq’s oil infrastructure is an integrated network that includes crude oil fields and wells, pipelines, pump stations, refineries, gas oil separation plants, gas processing plants, export terminals, and ports (see fig. 1). This infrastructure has deteriorated significantly over several decades due to war damage; inadequate maintenance; and the limited availability of spare parts, equipment, new technology, and financing. Considerable looting after Operation Iraqi Freedom and continued attacks on crude and refined product pipelines have contributed to Iraq’s reduced crude oil production and export capacities.

[Figure 1 labels: gas oil separation plants of various sizes and capacities (18 in north and 34 in south); 3 major refineries (Bayji in north, Daura in Baghdad, and Basrah in south) and 14 smaller refineries, whose function is to refine the crude oil/gas mixture into usable consumer products (fuel oil, diesel, kerosene, benzene, gasoline, LPG, natural gas, etc.); metering stations and transshipment facilities from pipeline to ship for export; 2 export terminals, both in south (Al Basrah Oil Terminal and Khor al Amaya Oil Terminal); and 2 export pipelines, both in north, to Turkey and Syria.]

Iraq’s crude oil reserves, estimated at a total of 115 billion barrels, are the third largest in the world. However, Iraq’s ability to extract these reserves has varied widely over time and has been significantly affected by war. Figure 2 shows Iraq’s daily average crude oil production levels annually from 1970 through 2006. Iraq’s crude oil production reached 3.5 mbpd, its highest annual average, in 1979. In September 1980, Iraq invaded Iran and production levels plummeted. Although the Iran-Iraq War continued until 1988, production levels grew steadily after 1983, peaking at 2.9 mbpd in 1989. The Gulf War began the following year when Iraq invaded Kuwait. 
In January 1991, the United States and coalition partners began a counteroffensive (Operation Desert Storm). Crude oil production once again dropped precipitously and remained relatively low from 1990 to 1996, while Iraq was under UN sanctions. Under the UN Oil for Food program, Iraqi crude oil production began to rebound, peaking at an annual average of 2.6 mbpd in 2000. In the 5 years preceding the 2003 U.S. invasion of Iraq, crude oil production averaged 2.3 mbpd. In 2003, crude oil production dropped again to a low of about 1.3 million barrels per day (annual average) but then rebounded. Despite U.S. and Iraqi government efforts to reconstruct Iraq’s key economic sector, oil production has consistently fallen below U.S. program goals. In addition, production levels may be overstated and measuring them precisely is challenging due to limited metering and poor security. Comprehensive metering has been an outstanding goal of the United States, the international community, and the Iraqi government. Key reconstruction goals for Iraq’s oil sector, including those for crude oil production and exports, and refined fuel production capacity and stock levels, have not been met. U.S. goals for the oil sector include reaching an average crude oil production capacity of 3 million barrels per day (mbpd) and crude oil export levels of 2.2 mbpd. However, in 2006, actual crude oil production and exports averaged, respectively, about 2.1 mbpd and 1.5 mbpd. Figure 3 compares Iraq’s oil production and exports with U.S. goals (the data for this figure are presented in appendix I). As the figure shows, production and exports for the first five months of 2007 were still below U.S. goals. In August 2003, the CPA established a U.S. program goal to increase crude oil production to about 1.3 mbpd. The CPA increased this goal every 2 to 3 months until July 2004, when the goal became to increase crude oil production capacity to 3.0 mbpd. 
Besides production and export of crude oil, the CPA also established goals for the production of natural gas and liquefied petroleum gas (LPG), as well as the national stocks of refined petroleum products (such as gasoline) that are used to generate energy by consumers and businesses. These CPA goals were to increase production capacity of natural gas to 800 million standard cubic feet per day (mscfd); increase production capacity of LPG to 3,000 tons per day (tpd); and meet demand for benzene (gasoline), diesel, kerosene, and LPG by building and maintaining their stock levels at a 15-day supply. However, the 2006 averages did not meet these goals. To increase the stocks of petroleum products and their availability to consumers, Iraq legalized the importation of petroleum products by private companies to supplement its own production and state-owned company imports. For 2006, the IMF estimated that Iraq’s state-owned companies imported about $2.6 billion of petroleum products. At the recommendation of the IMF, the Iraqi government has been reducing subsidies for refined oil products, which raises the prices consumers pay. In the past, refined oil products in Iraq had been highly subsidized, which led to increased demand. Reduction in domestic demand for refined oil products would allow additional crude oil to be exported for revenue rather than refined in Iraq. Iraq’s crude oil production statistics may be overstated. We compared the State Department’s statistics to those published by the EIA, which are based on alternate sources. Part of EIA’s mission is to produce and disseminate statistics on worldwide energy production and use. While these two data sets follow similar trend lines, EIA reports that Iraqi oil production was about 100,000 to 300,000 barrels per day lower than the amounts the State Department reported. At an average price of $50 per barrel, this is a discrepancy of $5 million to $15 million per day, or $1.8 billion to $5.5 billion per year. 
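The revenue range implied by the reporting discrepancy follows from straightforward arithmetic. A minimal sketch of that calculation, using only the figures cited in the paragraph above (the 100,000 to 300,000 barrels-per-day gap and the illustrative $50-per-barrel average price; the helper function itself is ours, for illustration):

```python
# Rough check of the revenue implied by the gap between State Department
# and EIA crude oil production figures. The 100,000-300,000 barrels per
# day (bpd) range and the $50/barrel average price come from the report.
PRICE_PER_BARREL = 50  # USD, illustrative average cited in the report
DAYS_PER_YEAR = 365

def revenue_gap(gap_bpd: int) -> tuple:
    """Return (USD per day, USD per year) implied by a production gap."""
    per_day = gap_bpd * PRICE_PER_BARREL
    return per_day, per_day * DAYS_PER_YEAR

low = revenue_gap(100_000)   # ($5 million/day, $1.825 billion/year)
high = revenue_gap(300_000)  # ($15 million/day, $5.475 billion/year)
```

Rounded, these endpoints reproduce the report's $5 million to $15 million per day, or roughly $1.8 billion to $5.5 billion per year.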
Figure 4 shows these two data sets over the time period (June 2003 to March 2007) for which data from both State and EIA were available. The data for this figure are presented in appendix I. According to EIA, several factors may account for the discrepancy. One factor is the lack of storage facilities for crude oil in Iraq. Crude oil that cannot be processed by refineries or exported is reinjected into the ground. Another factor affecting the discrepancy may be differences in the frequency and timing of the data. The State Department’s data are reported daily in real time, while EIA produces monthly data that have been reviewed and corroborated from several sources. This lag in reporting and longer time period may allow analysts to address inconsistencies such as double counting and reinjection. In addition, the State Department regularly reports on sabotage and interdictions to crude oil pipelines and other disruptions in the crude oil production process. Also, under Saddam Hussein, Iraq had a history of diverting crude oil production to circumvent UN sanctions. Therefore, it is possible that corruption, theft, and sabotage may also be factors in the discrepancy. Reliable information on Iraq’s oil production is further complicated by the lack of metering. According to a State Department oil advisor, meters are in place at many locations but are not usable in many instances due to the difficulties in obtaining needed replacements and spare parts. Without comprehensive metering, crude oil production must be estimated using less precise means, such as estimating the flow through pipelines and relying on reports from onsite personnel rather than an automated system that could be verified. An improved metering system has been a U.S. and international donor priority since early 2004, but implementation has been delayed. In 1996, the UN first cited the lack of oil metering when Iraq was under UN sanctions. 
In 2004, the International Advisory and Monitoring Board (IAMB) for the Development Fund for Iraq recommended the expeditious installation of metering equipment. According to IAMB, in June 2004, the CPA had approved a budget to replace, repair, and calibrate the metering system on Iraq’s oil pipeline network. However, the oil metering contract was not completed due to security and technical issues. In June 2006, IAMB reported that the Iraqi government had entered into an agreement with Shell Oil Company to serve as a consultant for the Ministry of Oil. Shell would advise the ministry on the establishment of a system to measure the flow of oil, gas, and related products within Iraq and in export and import operations. The U.S. government is assisting in this effort by rebuilding one component of the metering system in the Al-Basrah oil port—Iraq’s major export terminal—and expects the project to be complete in July 2007. The U.S. government and Iraq face several key challenges in improving Iraq’s oil sector. First, the U.S. reconstruction program assumed a permissive security environment that never materialized; the ensuing lack of security resulted in project delays and increased costs. Second, corruption and smuggling have diverted government revenues potentially available for rebuilding efforts. Third, future funding needs for reconstruction of Iraq’s oil sector are significant, but the source of these funds is uncertain. The U.S. reconstruction effort was predicated on the assumption that a permissive security environment would exist. However, since May 2003, overall security conditions in Iraq have deteriorated and grown more complex, as evidenced by the increased numbers of attacks (see fig. 5). The average number of daily attacks in June 2007 was about the same level as the prior high of about 180 attacks per day that occurred in October 2006 around the time of Ramadan. 
Overall, the average number of daily attacks was about 50 percent higher in June 2007 than in June 2006. The deteriorating security environment has led to project delays and increased costs. Insurgents have destroyed key oil infrastructure, threatened workers, compromised the transport of materials, and hindered project completion and repairs by preventing access to work sites. Moreover, looting and vandalism have continued since 2003. U.S. officials reported that major oil pipelines in the north continue to be sabotaged, shutting down oil exports and resulting in lost revenues. For example, according to the Army Corps of Engineers, although eight gas oil separation plants in northern Iraq have been refurbished, many are not running due to interdictions on the Iraq-Turkey pipeline and new stabilization plant. The Corps noted that if the lines and plant were in operation today, an additional 500,000 barrels per day could be produced in northern Iraq. The U.S. government has developed a number of initiatives to protect the oil infrastructure and transfer this responsibility to the Iraqi government. Such efforts include fortifying the infrastructure and improving the capabilities of rapid repair teams and protection security forces such as the Oil Protection Force and the Strategic Infrastructure Battalions (SIB). The U.S. government has paired these security forces with coalition partners and has trained and equipped the SIBs. However, U.S. officials stated that the capability and loyalty of some of these units are questionable. According to Department of Defense (DOD) and Center for Strategic and International Studies reports, these security forces have been underpaid, underequipped, and poorly led, and are sometimes suspected of being complicit in interdiction and smuggling. Additional information on the nature and status of these efforts and the SIBs is classified. U.S. and international officials have noted that corruption in Iraq’s oil sector is pervasive. 
In 2006, the World Bank and the Ministry of Oil’s Inspector General estimated that millions of dollars of government revenue are lost each year to oil smuggling or diversion of refined products. According to State Department officials and reports, about 10 percent to 30 percent of refined fuels are diverted to the black market or are smuggled out of Iraq and sold for a profit. According to State Department reporting, Iraqi government officials may have profited from these activities. The insurgency has been partly funded by corrupt activities within Iraq and by skimming profits from black marketers, according to U.S. embassy documents. According to a June 2007 DOD report, a variety of criminal, insurgent, and militia groups engage in the theft and illicit sale of oil to fund their activities. For example, DOD reported that as much as 70 percent of the fuel processed at Bayji was lost to the black market—possibly as much as $2 billion a year. As a result, the Iraqi Army assumed control of the entire Bayji refinery, and equipment is being installed to prevent siphoning. One factor that had stimulated black market activities and fuel smuggling to neighboring countries was Iraq’s low domestic fuel prices, which were subsidized by the government. However, under the IMF’s Stand-by Arrangement with Iraq, the government has already increased domestic fuel prices several times, significantly reducing the subsidy for many fuel products. The Iraqi government intends to continue the price increases during 2007 and encourage private importation of fuels, which was liberalized in 2006. The purpose is to decrease the incentive for black market smuggling and to increase the availability of fuel products. While billions have been provided to rebuild Iraq’s oil sector, Iraq’s future needs are significant and sources of funding are uncertain. 
For fiscal years 2003 through 2006, the United States made available about $2.7 billion, obligated about $2.6 billion, and spent about $2.1 billion to rebuild Iraq’s oil sector. According to various estimates and officials, Iraq will need billions of additional dollars to rebuild, maintain, and secure its oil sector. Since the majority of U.S. funds have been spent, the Iraqi government and international community represent important sources of potential future funding. However, the Iraqi government has not fully spent the capital project funds already allocated to the oil sector in Iraq’s 2006 budget. In 2006, Iraq planned to spend more than $3.5 billion for capital projects in the oil sector. This amount accounted for about 98 percent of the Ministry of Oil’s total budget ($3.6 billion) that year. As of December 2006, the end of Iraq’s fiscal year, only 3 percent of oil sector capital project funds had been spent. While Iraq’s inability to spend its capital budget may not directly affect U.S.-funded projects, U.S. investment alone is not adequate for the full reconstruction and expansion of the oil sector. Therefore, Iraq’s continued difficulties in spending its capital budget could hamper efforts to attain its current reconstruction goals. According to U.S. officials, Iraq lacks the clearly defined and consistently applied budget and procurement rules needed to effectively implement capital projects. For example, the Iraqi ministries are guided by complex laws and regulations, including those implemented under Saddam Hussein, the CPA, and the current government. According to State Department officials, the lack of agreed-upon procurement and budgeting rules causes confusion among ministry officials and creates opportunities for corruption and mismanagement. 
Additionally, according to the State Department and DOD, personnel turnover within the ministries, fear of corruption charges, and an onerous contract approval process have caused delays in contract approval and capital improvement expenditures. Furthermore, the Iraqi government has not made full use of potential international loans, and future donor funding for the oil sector remains uncertain. Donors other than the United States have not provided any grants to develop the oil sector, and the Iraqi government has not taken advantage of $467 million in loans from Japan to develop a crude oil export facility and upgrade a refinery. According to U.S. and international officials, donor funding has been limited because of an expectation that sufficient funds would be provided through Iraq’s oil revenues and private investors. Moreover, it is unclear to what extent the International Compact with Iraq will serve as a viable mechanism to obtain additional donor support for Iraq, particularly for the oil sector. Launched in May 2007, the compact was intended to secure additional funding for Iraq’s oil, electricity, and other sectors. The World Bank reports that additional incentives are needed to stimulate oil production and investment, including a clear legal and regulatory framework; clearly assigned roles for Iraq’s ministries, state agencies, and the private sector; and a predictable negotiating environment for contracts. Iraq has yet to enact and implement comprehensive hydrocarbon legislation that would define the distribution of future oil revenues and the rights of foreign investors. According to U.S. officials, until such legislation is passed and implemented, it will be difficult for Iraq to attract the billions of dollars in foreign investment it needs to modernize the oil sector. 
As of July 13, 2007, the Iraqi government was in various stages of drafting and enacting four separate, yet interrelated, pieces of legislation: hydrocarbon framework legislation that establishes the structure, management, and oversight for the sector; revenue-sharing legislation (the draft “Law of Financial Resources”); legislation restructuring the Ministry of Oil; and legislation establishing the Iraq National Oil Company (INOC). According to the State Department, to be enacted as law, the four pieces of legislation must be approved by Iraq’s cabinet (Council of Ministers), vetted through the Shura council, and then submitted by the cabinet to a vote by Iraq’s parliament (Council of Representatives). If the laws are passed, they are then made publicly available in the Iraqi government’s official publication, known as the Official Gazette. Figure 6 shows the status of the four proposed pieces of legislation as of July 1, 2007. The draft hydrocarbon framework is the furthest along in the legislative process and is currently before Iraq’s parliament, according to State Department and KRG officials. According to these officials, it provides an overall framework but lacks key details that will be addressed in the financial resources and other legislation. The UN reported in early June 2007 that there had been no decision on whether the hydrocarbon framework legislation would be voted on as a part of a larger energy package with annexes and supporting legislation or voted on separately. The KRG has published the negotiated “agreed-to” text for the revenue-sharing legislation, which has not yet been approved by the cabinet. Negotiated texts of the draft legislation for restructuring the Ministry of Oil and for establishing INOC have yet to be developed and published. According to State Department and KRG officials, the passage and implementation of all four pieces of legislation is essential to achieve increased transparency, accountability, and revenue management. 
Moreover, enacting and implementing hydrocarbon legislation and subsequent regulations and procedures will likely be impeded by some of the same challenges, such as poor security and corruption, that affect achieving program goals and reconstruction of the oil sector. According to U.S. officials, sectarian attacks and the lack of national unity and trust have resulted in competing sectarian interests and wariness of foreign investment. Also, according to U.S. officials, opportunities to profit from corruption and smuggling reduce the incentive for greater transparency and accountability in oil resource management. U.S. officials recognize that significant implementation challenges will remain once the draft legislation is enacted into law. As we recently reported, the United States has spent billions of dollars to rebuild Iraq’s oil sector, but billions more will be needed to surmount the challenges facing Iraq’s oil sector. Iraq’s oil sector lacks an effective metering system to measure output, determine revenue trends, and identify illicit diversions. Opaque laws governing investment have also limited foreign investment in this critical sector. The passage of comprehensive Iraqi hydrocarbon legislation could serve as an important impetus for stimulating additional investment if and when security conditions improve. The development of the sector is also hindered by weak government budgeting, procurement, and financial management systems and limited donor spending. In addition, an integrated strategic plan that coordinates efforts across the oil and electricity sectors is essential given their highly interdependent nature. Such a plan would help identify the most pressing needs for the entire energy sector and help overcome the daunting challenges affecting future development prospects. In our May 2007 report, we recommended that the Secretary of State, in conjunction with relevant U.S. 
agencies and in coordination with the donor community, work with the Iraqi government and particularly the Ministry of Oil to:

1. Develop an integrated energy strategy for the oil and electricity sectors that identifies and integrates key short-term and long-term goals and priorities for rebuilding, maintaining, and securing the infrastructure; funding needs and sources; stakeholder roles and responsibilities, including steps to ensure coordination of ministerial and donor efforts; environmental risks and threats; and performance measures and milestones to monitor and gauge progress.

2. Set milestones and assign resources to expedite efforts to establish an effective metering system for the oil sector that will enable the Ministry of Oil to more effectively manage its network and finance improvements through improved measures of production, consumption, revenues, and costs.

3. Improve the existing legal and regulatory framework, for example, by setting milestones and assigning resources to expedite development of viable and equitable hydrocarbon legislation, regulations, and implementing guidelines that will enable effective management and development of the oil sector and result in increased revenues to fund future development and essential services.

4. Set milestones and assign resources to expedite efforts to develop adequate ministry budgeting, procurement, and financial management systems.

5. Implement a viable donor mechanism to secure funding for Iraq’s future oil and electricity rebuilding needs and for sustaining current energy sector infrastructure improvement initiatives once an integrated energy strategic plan has been developed.

In commenting on a draft of our May 2007 report, the State Department agreed that all the steps we included in our recommendations are necessary to improve Iraq’s energy sector but stated that these actions are the direct responsibility of the Government of Iraq, not of the Department of State, any U.S. 
agency, or the international donor community. The State Department also commented that U.S. agencies are already taking several actions consistent with our recommendations. We recognize that these actions are ultimately the responsibility of the Iraqi government. However, it remains clear that the U.S. government wields considerable influence in overseeing Iraq stabilization and rebuilding efforts. We also believe additional actions are warranted given the lack of progress that has been made over the last 4 years in achieving Iraq reconstruction goals. Mr. Chairmen, this concludes my statement. I would be pleased to answer any questions that you or other Members may have at this time. For questions regarding this testimony, please call Joseph A. Christoff at (202) 512-8979 or christoffj@gao.gov. Other key contributors to this statement were Stephen Lord, Assistant Director; Lynn Cothern; Kathleen Monahan; and Timothy Wedding. Table 1 provides the data used in figures 3 and 4 of this testimony. Department of State data on Iraq’s crude oil production and exports are collected by State Department officials in Iraq through Iraq’s Ministry of Oil. We calculated Iraq’s production for domestic consumption (the amount of oil produced that remains in the country) as the remainder of Iraq’s production of crude oil after exports, based on State Department’s data. Data from the Department of Energy’s Energy Information Administration (EIA) are based on EIA’s own analysis and a variety of sources, including Dow Jones, the Middle East Economic Survey, the Petroleum Intelligence Weekly, the International Energy Agency, OPEC’s Monthly Oil Market Report, the Oil & Gas Journal, Platts, and Reuters.
Rebuilding Iraq's oil sector is crucial to rebuilding Iraq's economy. For example, oil export revenues account for over half of Iraq's gross domestic product and over 90 percent of government revenues. This testimony addresses (1) the U.S. goals for Iraq's oil sector and progress in achieving these goals, (2) key challenges the U.S. government faces in helping Iraq restore its oil sector, and (3) efforts to enact and implement hydrocarbon legislation. This statement is based on our May 2007 report and updated data, where appropriate. Despite 4 years of effort and $2.7 billion in U.S. reconstruction funds, Iraqi oil output has consistently fallen below U.S. program goals. In addition, the State Department's data on Iraq's oil production may be overstated since data from the U.S. Department of Energy show lower production levels--between 100,000 and 300,000 barrels less per day. Inadequate metering, re-injection, corruption, theft, and sabotage account for the discrepancy, which amounts to about $1.8 to $5.5 billion per year. Comprehensive metering of Iraq's oil production has been a long-standing problem and continuing need. Poor security, corruption, and funding constraints continue to impede reconstruction of Iraq's oil sector. The deteriorating security environment places workers and infrastructure at risk while protection efforts have been insufficient. Widespread corruption and smuggling reduce oil revenues. Moreover, Iraq's needs are significant and future funding for the oil sector is uncertain as nearly 80 percent of U.S. funds for the oil sector have been spent. Iraq's contribution has been minimal with the government spending less than 3 percent of the $3.5 billion it approved for oil reconstruction projects in 2006. Iraq has yet to enact and implement hydrocarbon legislation that defines the distribution of oil revenues and the rights of foreign investors. 
Until this legislation is enacted and implemented, it will be difficult for Iraq to attract the billions of dollars in foreign investment it needs to modernize the sector. As of July 13, 2007, Iraq's cabinet has approved only one of four separate but interrelated pieces of legislation--a framework that establishes the structure, management, and oversight. Another part is in draft and two others are not yet drafted. Poor security, corruption, and the lack of national unity will likely impede the implementation of this legislation.
In the 1970s and the 1980s, Congress received numerous reports about problems with the weapon acquisition process, namely that weapon systems often failed to meet their military missions, were operationally unreliable, and had defects in materials or workmanship. To address manufacturing deficiencies and performance shortcomings, Congress began requiring the Department of Defense (DOD) to obtain written warranties on all production contracts for weapon systems costing over $100,000 per unit or whose eventual acquisition cost is more than $10,000,000. Congress expected that obtaining cost-effective warranties would enable DOD to hold contractors accountable for the performance of their systems and that the risk of financial consequences would encourage contractors to improve the quality and reliability of the systems. In 1984, when the warranty provision was first enacted, many DOD and industry officials criticized the law as being impractical, unworkable, and potentially costly. An amended version, enacted in the 1985 DOD Authorization Act and codified as 10 U.S.C. 2403, was intended to correct the problems. For most non-weapon system purchases, the Federal Acquisition Regulation (FAR) prescribes the procedures and purposes of obtaining a warranty. Under the FAR, the use of a warranty is not mandatory. The FAR allows contracting officers to require contractors to provide warranties on products sold to the government. The decision is based on a determination that a warranty would be in the government’s best interest. In addition, the Defense Federal Acquisition Regulation Supplement (DFARS) provides additional guidance on when it is appropriate to obtain a weapon system warranty. Under 10 U.S.C. 2403, an agency head is prohibited from entering into a production contract for a weapon system with a per unit cost greater than $100,000, or a total system cost over $10 million, unless the prime contractor provides a warranty.
The prime contractor must warrant that items provided under the contract (1) conform to the design and manufacturing requirements delineated in the contract, (2) are free from all defects in materials and workmanship at the time of delivery, and (3) meet the essential performance requirements delineated in the contract. Contractors are not required to provide a warranty on government-furnished equipment. If the Secretary of Defense determines that a warranty is not in the interest of national defense or that a warranty will not be cost-effective, he may waive all or part of the warranty requirement. The Secretary cannot delegate the waiver authority below the level of an Assistant Secretary of Defense or of a military department. The Secretary must also notify the Senate Committee on Armed Services and the House Committee on National Security before granting a waiver for a major weapon system. Generally, warranties require that the contractor repair or replace noncomplying or defective goods covered by the warranty without cost to the government and/or pay the government’s costs of correcting the defective condition. Warranted defects or deficiencies may be caused by poor design, faulty manufacturing processes, or the use of materials that do not meet contract specifications. The cost and coverage of warranties are negotiated on a contract-by-contract basis. Typical weapon system warranties fall into one of the following three categories: failure-free, threshold, and systemic. When a system is covered by a failure-free warranty, the contractor is obligated to correct all defects that occur during the warranty period. Although a failure-free warranty is easy to implement, it is associated with high costs due to the higher risks assumed by the contractor. A threshold warranty requires a contractor to remedy a defect when a threshold, such as a predetermined number of part or system failures, is exceeded. 
This type of warranty recognizes that all weapon systems malfunction to some degree, and the warranty only requires action if the weapon system does not meet the agreed-upon reliability levels. A systemic warranty covers a system against a defect that occurs with regularity throughout a production lot or fleet. In the case of systemic warranties, the government must prove that the defects are occurring regularly by either conducting its own investigation or supervising an investigation by the contractor. Once the government proves that a systemic defect exists, the contractor is responsible for replacing or repairing all of the items produced under the circumstances that caused the defect. Some systemic warranties also require the contractor to redesign warranted items if the defect is the result of a design problem. A DOD weapon system may be covered by multiple types of warranties. For example, an item may be covered by a failure-free warranty until it is transferred to a unit, and then covered by a systemic warranty. In 1987, we reported that the military services were obtaining warranties without assessing cost-effectiveness. We also found that warranty terms and conditions were not clearly stated in most contracts. Also, many warranties did not delineate whether redesign was a remedy if performance requirements were not met. We concluded that this situation could result in warranty administration problems. In 1989, we reported that (1) the Office of the Secretary of Defense was not actively overseeing warranty administration by the services; (2) the services had not established a fully effective warranty administration system; (3) the procurement activities had problems performing cost-effectiveness analyses; and (4) the services, therefore, did not know whether they should seek warranty waivers. We concluded that DOD had little assurance that warranty benefits were being fully realized. 
DOD’s Director for Defense Procurement, in 1992, proposed repealing the warranty law. This initiative was included as Section 620 of DOD’s Legislative Program for the 103rd Congress. In a January 1993 report, DOD’s Acquisition Law Advisory Panel, referred to as the Section 800 Panel, recommended repealing the warranty law based upon two reviews that highlighted significant problems with the administration and effectiveness of the law. These reviews found that (1) waiver requests were not seriously considered, (2) the use of waivers had been “virtually nil,” (3) contractor expenses for warranty repairs were less than the negotiated price for the warranty in four out of five cases, (4) only two out of seven threshold warranties ever reached the threshold, (5) no claims had been made on systemic warranties reviewed, and (6) service regulations requiring post-award reviews of warranty cost-effectiveness were not enforced. The Panel’s alternate recommendation was to revise 10 U.S.C. 2403 to address the implementation problems. The Section 800 Panel sought greater flexibility in implementing and tailoring warranties, as well as limiting warranties to major weapon systems. Furthermore, the Section 800 Panel recommended that the waiver approval authority be lowered from the Assistant Secretary level and that a policy statement be issued encouraging the use of waivers when a warranty is not cost-effective. Congress did not repeal the warranty law. Instead, the Federal Acquisition Streamlining Act of 1994 (P.L. 103-355) modified the congressional notification requirement so that an annual report of waivers granted is no longer required, although the defense committees are still to be notified before a waiver is granted for a major weapon system. The act also required DOD to issue guidance on negotiating cost-effective warranties and on waivers. 
In response, DOD revised subpart 246.7 of DFARS to stress that the use of weapon system warranties may not be appropriate in all situations and that a waiver should be obtained if a warranty is not cost-effective or in the interest of national defense. Our objectives were to determine whether the warranties being obtained for weapon systems provide the expected benefits to the government, and to assess whether the use of warranties, as required by law, is compatible with the acquisition of weapon systems. We analyzed the warranty legislation, DOD and service policy guidance and regulations, and procurement activity guidelines governing the use of warranties in weapon system acquisitions. To obtain insight into the types of issues faced in managing a warranty program, we gathered warranty information from 22 ongoing acquisition programs and reviewed the results of warranty studies performed by the DOD Inspector General, the Office of Defense Procurement, the Acquisition Law Advisory Panel, and others. We selected systems based on the contract value and the type of weapon system for contracts awarded between 1984 and 1994. Our report focuses on the use of warranties for DOD major weapon systems and does not cover the use of warranties on commercial subcomponents in weapon systems or commercial items. In some instances, the information available in contract files was limited because the services had not collected the information or there was a lack of centralized documentation. Our work was performed primarily at the six commands responsible for managing the major acquisition programs we selected for our review. The following are the procurement commands visited:

Aviation and Troop Command
Missile Command
Tank-Automotive and Armaments Command
Naval Sea Systems Command
Naval Air Systems Command

At the procurement commands, we reviewed contract files, including basic contract information, warranty and inspection clauses, cost-effectiveness studies, and correspondence.
We supplemented the information by interviewing program management, as well as defense contracting, policy, and legal officials. We also held discussions with officials from the Office of the Secretary of Defense and the Defense Systems Management College. In addition, we contacted selected contractor and professional association officials to obtain their viewpoints on the advantages and disadvantages of using warranties in major weapon system acquisitions. We performed our review from November 1994 through February 1996 in accordance with generally accepted government auditing standards. DOD is obtaining weapon system warranties that are not cost-effective because it does not use waivers as expected by Congress and does not perform adequate cost-benefit analyses or post-award assessments to ensure that the decisions to obtain or not to obtain a warranty are based on a valid foundation. Congress did not intend for DOD to obtain warranties that were not cost-effective. Therefore, the warranty law allows the Secretary of Defense to waive the use of a warranty if the Secretary determines that it would not be cost-effective. However, none of the warranties we reviewed, where claim and price data was available, were cost-effective. We found that the government paid $94 million and collected $5 million on these weapon system warranties. We also calculate that the military services spend approximately $271 million annually to pay for warranties. Further, this cost is only the warranty price paid to the contractor. It does not include the additional costs to the government of negotiating and administering warranties. Reviews by others have also found that weapon system warranties are generally not cost-effective. Warranties have both quantified and unquantified costs. The quantified cost is the negotiated price for the warranty, while the unquantified cost includes the negotiation and subsequent administration of warranties. 
Warranties also provide both quantified and unquantified benefits. The quantified benefit to the government includes financial compensation received as a result of claims and low-cost or no-cost proposals to correct problems, while the unquantified benefits claimed by program officials include prepaid maintenance support for field units and the value of having a process in place for readily resolving product performance problems. We found that the weapon system warranties purchased by DOD were not cost-effective. We were able to obtain warranty price and claim data on four weapon systems and eight contracts. In every case where price and claim data was available, the warranty price exceeded the value of the claims made. The combined warranty price was $94 million, the value of the warranty claims was $5 million, and the quantified price exceeded the quantified benefit by $89 million. (See app. I.) For example: The government paid $12 million for the F-15E (1) design and manufacture and (2) materials and workmanship warranties for the 1989 and 1990 contracts, which covered the purchase of 72 aircraft. The program office identified 260 potential warranty claims, of which 134 were agreed to and corrected by the contractor. There were 126 claims that were not agreed to by the contractor for a variety of reasons, including the fact that failed parts were unavailable for contractor inspection. The program office estimated that the average cost to fix each problem was $3,000 and that the total financial benefit to the government was $402,000. The quantified costs, therefore, exceeded the quantified benefits by about $11.6 million. The F-16 Multiyear II warranty price for 720 aircraft procured between 1986 and 1989 was $27.86 million. In a 1991 study, the program office calculated that the warranty benefit was $2.78 million, or about 10 percent of the warranty price. 
While the warranty coverage had not expired at the time, the study did project a total potential benefit of $9.94 million for this warranty, or 36 percent of the warranty price. The study found “little tangible return on investment” for this warranty. The program office was unable to provide final claim figures. The Multiple Launch Rocket System 1985 warranty cost $1.584 million. The estimated value of the warranty claims was $126,000. Therefore, the quantified cost exceeded the quantified benefit by $1.458 million. In 1992, the Army found a similarly large imbalance between the costs incurred and the total dollars recovered under several warranties. A review of 36 expired warranties on 12 weapon systems at the Missile Command through December 1990 showed that the warranty cost for these contracts was $27.9 million and the dollar value of the warranted repairs was $12.5 million, meaning that these warranties had a negative monetary return on investment of $15.4 million. The Air Force has also recognized that it has been obtaining some non-cost-effective warranties. The Deputy Assistant Secretary of the Air Force for Contracting stated in a memorandum in 1992, “. . . we agree that warranties are not always cost-effective. Recent experience indicates that contractors are unwilling to provide reasonable cost proposals in some cases, even when historical warranty cost data is available that suggests a much lower warranty price is appropriate.” The warranty price does not include all warranty costs to the government. The costs not included in the warranty price are associated with warranty development, administration, training, the need to obtain and provide special data, in-plant warranty monitoring, special transportation, increased spare component requirements because of longer logistical repair times, decreased competition opportunities, and reduced self-sufficiency of the military services. We did not estimate these additional costs to the government.
In some cases, we had no basis for an estimate and in others the additional costs due to the warranty could not be readily identified. One cause for the quantified cost exceeding quantified benefits is the low claims submission rate for warranted items. Air Force officials told us that one reason for the low claims rate is that submitting warranty reports and holding parts for warranty purposes is contrary to the primary mission of field units—to repair the equipment as soon as possible so that the equipment and the unit can resume their missions. A warranty functions contrary to the primary mission by requiring maintenance personnel to hold parts until a determination can be made as to whether the part is warranted and how it should be repaired. In addition, maintenance personnel sometimes replace broken parts on one system with good parts from one or more other systems to keep the maximum number of weapon systems operating and available, thereby fulfilling their primary mission. As a consequence, the broken or defective parts are moved from their original weapon system. This can void a warranty, which can require that the part submitted for a warranty claim come from the original weapon system that the contractor delivered to the service. The Tank-Automotive and Armaments Command official responsible for its cost-effectiveness analyses said that historically contractors only accept about 30 percent of potential claims. As a result of the claims submission problem, the Tank-Automotive and Armaments Command is primarily obtaining systemic warranties instead of threshold warranties. However, according to this official, the Tank-Automotive and Armaments Command has never successfully filed a systemic warranty claim, and the probability that claims will be filed under a systemic warranty is zero. A report on the Army’s warranty program further supports the claims submission problem. “Claims submission from the field is low.
Only a fraction of the work orders are submitted for claims. For many of these, the data are inaccurate and incomplete.” The report looked at several commands and weapon systems and calculated that, at the Tank-Automotive and Armaments Command, only 537 actual claims were made out of 8,567 potential claims for 21 contracts. The Air Force faces similar low claims submission problems. Officials from the Air Force Materiel Command stated that the lack of reports filed on warranted items from the field is a serious problem. They further stated that the most important mission to field personnel is to repair the items as soon as possible so that the aircraft can resume its mission. Many Air Force warranties rely on the submission of product quality deficiency reports for filing claims. An Air Force Inspector General’s report estimated that only 15 to 20 percent of failures are actually reported on these forms because they are complicated and cumbersome for maintenance personnel to fill out. Therefore, the Air Force estimates that 80 to 85 percent of failures go unreported. In addition to the lack of incentives for field staff to track and report warranty claims, there is a lack of credible data systems and manpower to administer warranty claims. One Air Force official responsible for overseeing warranties at a major command said that because no system is in place to track the warranties or to process claims efficiently, administering the program is a “nightmare.” The Deputy Assistant Secretary of the Air Force for Contracting indicated in 1992 that (1) the problems with warranty administration are not new and (2) the Air Force does not possess and has not been able to develop data systems designed to track warranted items. He added that the lack of necessary manpower resources in the field and in the program offices for accomplishing warranty administration compounded this problem.
Claimed warranty benefits include providing support that could be viewed as a form of prepaid maintenance and a process for resolving product performance problems. Viewed as prepaid maintenance, warranties pay contractors a sum of money up front based on an estimate of the number of defects that the government might claim. The contractor keeps the difference between actual claims and the warranty price as its profit. If the 1989 and 1990 F-15E contracts were considered a form of prepaid maintenance, then the government paid the contractor $12 million and made claims totaling $402,000. The contractor kept as profit $11.6 million. Further, the warranty provides the government a process for dealing with the contractor and delineates the contractor’s responsibilities. However, the value of this process seems to vary from system to system. While several program officials told us that the contractors settled claims and fixed problems much more quickly under a warranty, other officials said that the penalties to the contractor are low under a warranty and that contractors routinely dispute government claims. According to one Air Force official, the Air Force has had poor results in getting a return on the claims it has filed. Warranty officials stated that contractors often stall and argue about claims because (1) if government maintenance personnel repair an item and bill the contractor, the contractor asserts that it could have repaired the item more quickly or efficiently; (2) the maintenance personnel did not keep the broken part for contractor inspection; (3) the contractor may believe that the weapon system was operated outside the performance parameters to which it was designed or that the maintenance personnel damaged the part using improper procedures; and (4) the contractor may find that the defective part has been shifted from the original weapon system in which it was delivered to the government.
Congress included a provision in the warranty law that allows the Secretary of Defense or his designee (no lower than an Assistant Secretary of Defense or a military department) to waive the requirement for a weapon system warranty for either national defense or cost-effectiveness reasons. However, since 1985 only 21 waiver requests DOD-wide have reached the assistant secretary level. Of those, 15 have been approved. The conference report accompanying the bill repealing the 1984 warranty law and enacting the current section 2403 noted clearly that the House and Senate Committees on Armed Services did not intend DOD to obtain warranties that are not cost-effective. The report stated that “a failure to conduct cost-benefit analyses and to process waivers where cost-effective guarantees are not obtainable would defeat the legislative intent of congressional warranty initiatives.” The majority of program officials we interviewed said that they do not consider waivers a viable option because of (1) the high placement of the waiver approval authority required by the warranty law, (2) the potential for negative attention being focused on the program by these high-level officials, and (3) the administrative burden of processing a waiver request. The result of this reluctance to seek waivers is that warranties have become essentially mandatory for all major contracts. This was noted by the DOD Acquisition Law Advisory Panel (the Section 800 Panel) in a January 1993 report where it stated that “the reluctance of DOD to issue warranty waivers fosters the use of warranties without regard to their cost-effectiveness.” Service officials at several major commands and at the assistant secretary level said that requests for waivers bring unwanted and often negative attention to an acquisition program.
One service official stated that there is a definite “stigma” attached to waiver requests and another referred to it as a “nightmare.” Further, waiver requests impose a significant burden on the program office, which has to generate all the necessary paperwork, including cost-benefit analyses, and brief them up the chain of command to the assistant secretary level, with little or no expectation that a waiver will be approved. As an example, the F-16 program office sought a waiver for the essential performance warranty of the third F-16 multiyear contract in November 1991. The entire process—from the completion of the cost-benefit analysis and decision to seek a waiver to the rejection of the waiver request—took 11 months. This was the first attempt by this program office to obtain a waiver for the weapon system. The second attempt, on the 1994 procurement, was rejected after 8 months. The Air Force waiver approval process for the F-16 is shown in figure 2.1. The cost-benefit analysis for the third multiyear warranty concluded that both the design and manufacture as well as the materials and workmanship warranties would not be cost-effective. According to an Air Force program official involved in seeking these waivers, the program office did not request a waiver on these parts of the warranty because it believed it would be impossible to get approval. In addition, the official said that the program office was certain that it would be directed to renegotiate these warranties with the contractor. Rather than seek a waiver for these warranties, the program office focused on what it considered its strongest case for a waiver, the essential performance warranty. The contractor had produced approximately 1,500 F-16s when the program office began seeking a waiver and, according to an official in the program office, the program office knew how the aircraft would perform and also knew that the essential performance warranty would provide no benefit to the government. 
The program office therefore sought a waiver, which was denied because the warranty covered only subsystems rather than the entire weapon system. The assistant secretary indicated a warranty covering the entire weapon system was needed before a decision on whether to grant a waiver could be made. The F-16 program office again sought and was denied a waiver for the essential performance requirement warranty for the fiscal year 1994 procurement. According to Air Force officials, a determination was made at the assistant secretary level that, pursuant to the law, a front-line fighter (a major weapon system) should have a warranty and the F-16, because it is a system that has been in production for many years, should have enough data to craft a valid warranty. Therefore, the waiver request was rejected. The warranty obtained contained no risk to the contractor because the warranty was tied to performance measurements that the system had already passed, a fact the contractor and the government already knew. The warranty thresholds have a mission reliability of 90 percent and aircraft availability of 85 percent. The aircraft achieved a mission reliability rate of 97.2 percent and an aircraft availability rate of 91 percent during the official measurement period. This warranty was a warranty in name only. It was clear to us from discussions with several officials that they believe that obtaining a warranty requires a relatively small amount of time and effort for the program office compared to the amount of time and effort required to avoid spending that money by obtaining a waiver. In addition, according to officials at a major Air Force command, contractors are aware that waivers are exceedingly difficult to obtain and may insist on a high warranty price if the program office seeks an effective warranty. This in turn may drive the program office to obtain reduced warranty coverage to reduce the warranty price. 
Program managers do not request a waiver because they believe it will not be granted and, consequently, unnecessary or costly warranties are purchased. As required by the Federal Acquisition Streamlining Act of 1994 (P.L. 103-355), DOD revised DFARS regarding weapon system warranties. The new regulations provide guidelines for contracting officers and program managers to use when developing and negotiating weapon system warranties. These regulations state that the use of a weapon system warranty may not be appropriate in all situations. Further, a waiver should be requested if it is determined that obtaining a warranty is not cost-effective or is inconsistent with the national defense. However, program managers still need to obtain a waiver before deciding not to obtain a warranty. That process has not been affected by the revised regulations. The warranty law requires DOD to obtain a warranty unless the Secretary of Defense determines that a warranty would not be cost-effective or in the interest of the national defense. Applicable regulations (DFARS 246.770-7) require that a cost-benefit analysis be conducted and documented in the contract file to determine if the warranty is cost-effective. The Air Force and the Army had conducted cost-benefit analyses for 21 of the warranties on the 30 contracts we reviewed. The cost-benefit analyses performed, however, were inadequate because (1) the warranties were often not separately priced, (2) the government’s administrative costs were not fully included, (3) the analyses assumed all potential defects would be identified and claims submitted, and (4) they did not include a present value analysis.
The Navy conducted only one cost-benefit analysis and has not adhered to DFARS 246.770-7, which states that “in assessing the cost effectiveness of a proposed warranty, perform an analysis which considers both the quantitative and qualitative costs and benefits of the warranty.” The Navy’s policy is to obtain what it calls “no-cost” warranties. However, no warranty is without cost; not separately pricing a warranty does not mean the government incurs no cost for it, only that the price of the warranty is built into the price of the system. Contracts in all services are often signed without separately pricing the warranty. Of the 38 contracts we reviewed, 24 warranties were not separately priced. Instead, the warranty price was included in the price of the product, making it almost impossible to perform a realistic cost-benefit analysis. The services have different policies on pricing a warranty. Although the Air Force’s policy since 1994 has been to separately price all warranties, the Army does not require that the warranty be separately priced, and the Navy maintains that it is not appropriate to negotiate additional costs for weapon system warranties. In addition, a DOD official told us that the government often pays twice for the warranty: once in the actual price of the product and separately in the price of a warranty. The services performed cost-benefit analyses for 22 of the 38 contracts we reviewed. In general, these cost-benefit analyses do not appear to fully include the administrative costs paid by the government for warranty development and administration; thus, the cost element of the analysis is kept artificially low. One exception is the Tank-Automotive and Armaments Command, which uses an estimate, developed in the early 1980s, that puts the administrative cost of processing each warranty claim at $150. The Army could not provide us with a copy of the study from which this figure was obtained. 
The cost-benefit analyses generally assume that all or a high percentage of claims will be made and accepted. We found that the U.S. Army Missile Command and the U.S. Army Tank-Automotive and Armaments Command sometimes made greatly different claim submission and acceptance assumptions when analyzing the costs and benefits of warranties. The 1989 Multiple Launch Rocket System cost-benefit analysis by the Missile Command assumed claims would always be filed and accepted when warranted items break. The Tank-Automotive and Armaments Command official responsible for its cost-effectiveness analyses used a range of 20 to 90 percent for the probability that claims would be filed. As discussed previously, Army and Air Force studies indicate that actual claim submission from the field is low; in the Air Force’s case, possibly as low as 15 to 20 percent of actual failures. Nine of the 12 cost-benefit analyses we sampled did not include a present value analysis as part of the cost-effectiveness review. Cost-benefit analyses normally involve comparing different costs incurred at different times. For two or more alternatives to be compared on an equal economic basis, it is necessary to state the costs of each alternative in current terms, that is, at their “present values.” This recognizes that money has earning power over time. A present value analysis is important because it allows the comparison of current expenses with expected future benefits by taking into account the time value of money. Without a present value analysis, a contract with a warranty cannot be compared to one without it, because the streams of dollars involved are not comparable. Our review indicated that the services had not prepared post-award assessments in 35 of 38 warranties that we reviewed. There are two types of post-award assessments required, an in-process assessment and a final payoff assessment. 
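To illustrate what such an analysis involves, the comparison can be sketched as follows. Every figure is a hypothetical assumption chosen for illustration, not data from the contracts we reviewed; the $150-per-claim administrative cost and the 20 percent claim rate echo figures discussed above.

```python
# Hypothetical sketch of a present value cost-benefit comparison of a
# contract with and without a warranty. All figures are illustrative
# assumptions, not data from the contracts reviewed.

DISCOUNT_RATE = 0.05          # assumed annual discount rate
WARRANTY_PRICE = 1_000_000    # assumed separately priced warranty, paid in year 0
ADMIN_COST_PER_CLAIM = 150    # assumed government cost to process one claim
CLAIM_RATE = 0.20             # assumed share of failures actually claimed
REPAIR_COST = 5_000           # assumed government cost to repair one failure itself
FAILURES_BY_YEAR = [0, 120, 150, 90]   # assumed failures in years 0 through 3

def present_value(amount, year, rate=DISCOUNT_RATE):
    """Discount a future amount back to today's dollars."""
    return amount / (1 + rate) ** year

# With a warranty: the government pays the warranty price up front, pays to
# administer the claims actually filed, and still repairs unclaimed failures.
cost_with = WARRANTY_PRICE + sum(
    present_value(f * (CLAIM_RATE * ADMIN_COST_PER_CLAIM
                       + (1 - CLAIM_RATE) * REPAIR_COST), year)
    for year, f in enumerate(FAILURES_BY_YEAR))

# Without a warranty: the government repairs every failure itself.
cost_without = sum(
    present_value(f * REPAIR_COST, year)
    for year, f in enumerate(FAILURES_BY_YEAR))

print(f"Present value cost with warranty:    ${cost_with:,.0f}")
print(f"Present value cost without warranty: ${cost_without:,.0f}")
print("Warranty cost-effective:", cost_with < cost_without)
```

Under these particular assumptions the warranty is not cost-effective: the low claim rate means the government pays the full warranty price yet still repairs most failures itself. Setting the claim rate to 100 percent, as the analyses we reviewed often did, would make this same warranty appear cost-effective, illustrating how that assumption inflates a warranty’s apparent benefit.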
The in-process assessments evaluate whether the claims made under a warranty justify its cost and document the desirable and undesirable warranty provisions and tasks for follow-on procurements. A final payoff assessment evaluates the economic benefits derived from the warranty compared to the cost of corrective actions had there been no warranty. Army regulations specifically require both in-process and final payoff assessments. Air Force regulations require only annual assessments and specify how the assessments should be performed. Navy regulations require only that data be collected to perform an annual assessment of warranty activity but do not require a final payoff assessment. Cost-effectiveness analyses and final payoff assessments provide management tools and internal controls that are essential to ensuring the intent of the law is satisfied in a way that adequately protects the interests of the Army. Without cost-effectiveness analyses and warranty assessments, there is little assurance that the warranties obtained and the associated costs were commensurate with the benefits received. According to Air Force officials with one program office, their office knows it will have to obtain warranties “no matter what,” so there is no reason for a post-award assessment. These officials were referring to the difficulty of receiving a waiver from the requirement to obtain a warranty, discussed previously. For the 18 Army contracts we reviewed, the Army either ignored the requirement to conduct final payoff assessments or ignored findings that showed the warranty benefits did not justify the costs. For example, the 1987 Multiple Launch Rocket System final payoff assessment specifically cited the fact that the thresholds were set so that the government would repair the first four failures on each launcher, but only about one claim was actually filed per launcher. 
Because the performance thresholds were set so high, the warranty was unlikely to serve as an incentive to the contractor to improve the system’s reliability and may even have had a negative effect on reliability. Nevertheless, the Army obtained a follow-on warranty. Air Force regulations covering post-award assessments require the program manager to monitor warranty feasibility and cost-effectiveness using annual warranty activity reports submitted by the contractor or the government. These assessments are to include a remarks section that “identifies the warranted tasks or services that are considered desirable or undesirable based on the claim frequency, failure mode, and dollar value.” The Air Force had not completed this annual assessment on any of the 12 Air Force contracts we reviewed. In 1987, the Navy issued instructions on warranties stating that the Chief of Naval Operations would develop a system for collecting and analyzing actual warranty use and claim data on an annual basis. To date, the Navy has not approved a warranty information system, and a post-award assessment had been performed for only one of the eight Navy systems we reviewed. Air Force program officials stated that final payoff assessments and post-award assessments are difficult to perform for two reasons. First, the warranty is not always separately priced. Second, the weapon system warranted may have had many engineering changes from the time the contract was initially signed until the end of the warranty period. These changes make comparing expected costs and benefits to actual costs and benefits difficult because the initially projected and actually produced weapon systems are different. Weapon system warranties are generally not cost-effective. They have resulted in a significant cost to the government that substantially exceeds their benefit. The necessity of negotiating and administering the warranties also imposes a large but unquantified burden on the services. 
The waiver process has resulted in a system in which warranties are virtually mandatory. In this system, the program office seeking a waiver must demonstrate why a warranty would not be cost-effective and seek approval from an assistant secretary. It is easier and less disruptive for that program office to obtain a warranty, regardless of whether it is necessary or cost-effective, than it is to seek a waiver. Because the waiver process is so burdensome and protracted, warranties are obtained without regard to their cost-effectiveness and the officials in the program offices have no incentive to conduct rigorous cost-benefit analyses. In addition, post-award assessments have little value to program officials as tools to identify desirable and undesirable warranties for future contracts because they believe warranties will have to be obtained “no matter what.” The current DFARS revision, which stresses that weapon system warranties may not be appropriate in all situations, is a step in the right direction. However, it is inadequate to resolve the difficulties in obtaining a waiver because the regulation could not change the high level required for approving waivers. The waiver approval authority is stipulated in the law itself, and the incentives that arise from it could not have been changed by this revision. As DOD and Congress proceed with acquisition reform, we believe they need to reexamine the need for and practical implementation of weapon system warranties. Requiring the routine use of warranties in weapon system acquisitions is often not appropriate and does not provide the government much in the way of benefits. The Institute for Defense Analyses has identified three functions of a warranty in weapon system acquisitions—insurance, assurance-validation, and incentivization. In the commercial marketplace, warranties have similar functions. 
Commercial buyers believe warranties protect them against catastrophic financial losses and excessive operating costs through a warranty’s insurance aspect. A warranty may also indicate to a buyer that a product is of better quality, which can be equated to the assurance-validation function, and may motivate the contractor to maintain product quality, which equates to the incentivization function. However, these functions are not as significant in weapon system acquisitions as they are in buying a commercial product on the open market. For insurance to be cost-effective to the buyer, the risk must be shared and spread over many insured customers, which is not the case in weapon acquisitions. DOD is the only buyer of most weapon systems and must pay the full cost of the insurance provided by the warranty. Also, DOD already has quality assurance processes built into its contracts to ensure that the product complies with all contract specifications. This lessens the need for a warranty’s assurance-validation function. Finally, in our review of 20 weapon systems, we could not find any evidence that a warranty was a factor in improving system reliability. Since none of the traditional benefits conferred by warranties apply to weapon system purchases, the main benefit that warranties seem to provide is the extension of the time period DOD has to identify defects to be corrected by the contractor. Commercial buyers are interested in stabilizing their operating costs and protecting themselves against catastrophic losses. A manufacturer’s warranty provides a commercial buyer with a measure of insurance against the risks of repair or replacement costs. If a warranted product does not perform as specified, the buyer whose product failed does not face a total financial loss. The concept of insurance is based on the principle of shared risk. 
In the commercial marketplace, the cost of offering a warranty is shared by many buyers, who individually pay a small portion of the total warranty cost as part of the product’s price. Manufacturers generally estimate how many of their products will be defective and price the product to cover this risk. However, because DOD is usually the only buyer for a weapon system, the contractor cannot allocate the cost of insuring against that risk among multiple buyers. The complete cost of the estimated risk must be borne by the sole buyer or absorbed by the contractor. If it is borne by the buyer, it becomes the price of the warranty; if it is absorbed by the contractor, it is a cost that must be covered by the price of the system. In both cases the buyer pays. A further factor in the cost of a warranty is the extent of unproven technology or innovative design that a weapon system encompasses, which may cause the contractor to perceive that its financial risk is significant. This tends to drive up the warranty price to the government and causes contractors to try to limit warranty coverage as much as possible. As a result, insuring against weapon system failures generally is not beneficial to the government, since the government is responsible for 100 percent of the estimated cost of that risk. The government will achieve a positive financial result only if failures in the system substantially exceed the contractor’s estimates of risk. While it may seem that a warranty would make sense in cases where the cost of system failures exceeds the cost of the warranty, this occurs in very few instances, and insuring all weapon systems to cover costs in those few instances is not a good financial decision. It is for this reason that the government maintains a policy of self-insurance against losses in almost all other areas. 
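The shared-risk arithmetic can be made concrete with a brief sketch; the defect rate, repair cost, and production quantities below are purely illustrative assumptions, not figures from this review.

```python
# Illustrative sketch of why warranty "insurance" works for a commercial
# product with many buyers but not for a sole buyer such as DOD.
# All figures are hypothetical assumptions.

def expected_warranty_liability(defect_rate, repair_cost, units):
    """Manufacturer's total expected cost of honoring the warranty."""
    return defect_rate * repair_cost * units

# Commercial case: the liability is spread across 100,000 buyers, so each
# pays only a small premium built into the product's price.
commercial_units = 100_000
total_liability = expected_warranty_liability(
    defect_rate=0.02, repair_cost=500, units=commercial_units)
premium_per_commercial_buyer = total_liability / commercial_units

# Weapon system case: DOD is the only buyer, so whether the liability is
# priced as a separate warranty or buried in the system price, DOD bears
# all of it; the risk is not pooled with any other customer.
sole_buyer_share = total_liability / 1

print(f"Premium per commercial buyer: ${premium_per_commercial_buyer:,.2f}")
print(f"Share borne by the sole buyer: ${sole_buyer_share:,.2f}")
```

The per-unit premium is the same in both cases; what differs is that a commercial buyer pools the risk of its single unit with thousands of other buyers, while the sole buyer pays the full expected cost of every failure itself, which amounts to self-insurance purchased at the contractor’s price.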
A RAND study reported that shifting financial risk to the manufacturer is seldom an appropriate or sufficient rationale for obtaining a weapon system warranty, for the same reason that it is not to the government’s advantage to buy insurance from a commercial firm. Another factor limiting the potential insurance benefits of warranties is the relationship of the government to major defense contractors. If a contractor were someday to incur large losses as a result of a warranty on a weapon system, the threat of insolvency might cause the government to excuse the contractor from its warranty obligation. Historically, when a defense firm incurs large losses because of a contract, the government has taken action to provide relief and prevent the firm from going out of business. DOD’s ability to provide extraordinary contractual relief to a defense contractor is recognized in Public Law 85-804. From 1959 to 1993, DOD used this provision to provide over $4.3 billion (in constant 1996 dollars) in relief to assist contractors in recovering from losses. As a result, the utility of warranties may be limited to collecting small or marginal amounts from a contractor rather than making good a catastrophic loss. The second purpose of warranties is assurance-validation. Applied to weapon systems, assurance-validation means assuring DOD that the manufacturer’s product conforms to the design, quality, and performance levels specified in the contract. Although DOD is moving more toward a commercial acquisition system, it uses many quality assurance processes to verify product quality and contract conformance independent of the warranty clauses. Since the cognizant contract administration office is responsible for verifying that the accepted product conforms to the specifications in the contract, the assurance-validation function of a weapon system warranty may not be necessary. 
In weapon system acquisitions, DOD uses many different program management tools to reduce the inherent risks of the acquisition process. DOD’s weapon acquisition policies seek to reduce the risk of obtaining a poor quality product by establishing a disciplined multiphased process that (1) translates mission needs into stable and affordable programs; (2) acquires quality products; and (3) provides a program management structure that has clear lines of responsibility, authority, and accountability. As each weapon system progresses through the phases, it is subject to comprehensive programmatic reviews. During the reviews, an assessment is made of the program’s accomplishments to date, plans for the next phase, and acquisition strategies for the remainder of the program. Additionally, the program risks and risk management planning are evaluated. In theory, for each of the acquisition phases, program-specific results are required before a program is permitted to proceed to the next phase. For example, a program may be required to demonstrate the maturity of a manufacturing process before being permitted to start production. Techniques available to manage risk include the use of technology demonstrations and prototyping to test hardware, software, manufacturing processes, and/or critical subsystems. Another technique is to test and evaluate the weapon system or its components to determine system maturity and identify technical risks. Finally, DOD establishes quality assurance programs to provide confidence that a weapon system will conform to the technical requirements and provide satisfactory performance. A warranty is one more management tool at DOD’s disposal to assure quality as the weapon system begins to be put into use. Given the extensive quality assurance efforts made over the whole development and production cycle, however, a warranty may actually be insurance against the failure of the quality assurance process. 
Finally, warranties are supposed to serve as an incentive to manufacturers to improve quality. Conceptually, all warranties motivate a manufacturer to improve product quality because the goal is to maximize profits by not having to perform warranty service. However, when a commercial manufacturer provides a warranty, it generally knows the projected failure rates of the product and the probable repair costs. The costs for these failures are included in the cost of the product. Efforts to keep commercial prices competitive probably undercut any incentive to make additional profit by pricing the warranty higher than its expected cost. In the commercial market, a warranty may also signal product quality to the buyer, because the commercial buyer generally is not familiar with how the product was made and does not know how the product will perform. Therefore, the warranty becomes a marketing tool to help sell the product by convincing the buyer it is a superior product. In weapon acquisition, the contractor generally does not face direct price competition. Absent such competition, nothing restrains the contractor from pricing the warranty to fully cover the estimated cost of the system failure risks being warranted. Further, the marketing aspects of a warranty are for the most part irrelevant to DOD. The warranty’s indication of manufacturer confidence in product quality is not needed because, as shown in the previous section, DOD is not a typical consumer. DOD is knowledgeable about how the weapons it obtains are made and in some cases helped to design the system. Generally, DOD knows how the product will perform and uses other management techniques to maintain product quality. 
In addition, the majority of officials in the weapon system program offices we visited stated that either (1) the warranties had not induced the contractors to take actions to improve the quality of their warranted products or (2) if there were quality and reliability improvements due to the warranty, those improvements were marginal and unmeasurable. For example, the F-15 warranty manager stated that the contractor has not designed components of the weapon system differently nor is the contractor building the weapon system differently because of the warranty. The F-110 engine warranty manager also said that the warranty has not helped to improve the reliability of the engine. In a report issued in August 1992, the U.S. Army Materiel Systems Analysis Activity reached a similar conclusion regarding different warranties on several systems. The report stated: “It is extremely unlikely that hardware improvements that were performed under the warranty would not have been performed had there not been a warranty. Therefore, it is unlikely that any reliability growth from these improvements could be attributed to the warranty.” The report stated that no models were available that could be used to measure reliability improvements resulting from a warranty. With or without a warranty, a contractor is obligated to produce a product that complies with the terms delineated in the contract, including all design, manufacturing, and performance specifications. During DOD’s inspection and quality assurance processes, the government needs to identify defects or deficiencies and notify the contractor of problems at the earliest reasonable time. If DOD did not obtain a warranty, the contractor could be released from any further obligation to correct problems once the government accepted the product. A warranty extends beyond acceptance the period during which the government can identify defects and require the contractor to correct them at no charge. 
Prior to acceptance, if defects are discovered, the government, even without a warranty, can (1) order the contractor to correct the defects at no additional charge, (2) reject the nonconforming product, (3) terminate the contract for default, or (4) seek a price reduction. A warranty obligates the contractor to correct defects, even if the government did not identify them before acceptance. This can reduce disputes because the warranty eliminates the need to prove where and when a defect came into existence. For example, in the case of a failure-free warranty, the government merely needs to demonstrate that an item does not work. A warranty also supplements inspection by providing the opportunity to observe the product’s performance during a period of use when additional problems may become apparent. However, a properly structured test program could identify such problems early in the acquisition process, when it is less costly to address deficiencies. In a contract without a warranty, the government’s acceptance of the product generally ends the contractor’s obligation. Even without a warranty, however, the government can revoke its acceptance and hold the contractor financially accountable if latent defects are discovered. In cases of latent defects, the government must demonstrate that the defects existed at the time of delivery, but could not have been discovered by a reasonable inspection. 
While warranties may have value to a consumer in the commercial world, obtaining a warranty for a weapon system may be a flawed concept because (1) the government does not need the insurance coverage provided by a warranty and cannot share the expense of the warranty with other customers; (2) warranties are an expensive way to assure the quality of a weapon system; (3) DOD’s quality assurance activities should provide much greater assurance of compliance with contract specifications than a warranty does; and (4) warranties may not cause contractors to improve the quality of the weapon systems they produce. Further, weapon system warranties provide very limited benefits to the government. The only measurable benefit of a warranty is the ability to have the contractor correct defects for some negotiated period after acceptance of the product, and, as discussed in the prior chapter, this comes at a high cost. We believe warranties should be used judiciously and only in cases where their cost-effectiveness can be clearly demonstrated. We recommend that the Secretary of Defense establish an expedited waiver process that limits the disincentives inherent in the current process. We also recommend that the Secretary of Defense revise DOD’s acquisition policies to adequately manage those warranties that the military services determine should be obtained. Consideration should be given to (1) requiring that all weapon system warranties be separately priced in order to allow meaningful cost-benefit analyses; (2) improving cost-benefit analyses by more realistically reflecting the likelihood of claim submission, performing present value analyses, and including the government’s administrative costs; and (3) ensuring that the services enforce the regulations requiring post-award assessments of weapon system warranties so that the services will know why these warranties were or were not beneficial to the government. 
We also recommend that the Secretary of Defense direct the Secretaries of the Air Force and the Navy to revise their regulations to require a final payoff assessment for weapon system warranties as the basis for purchasing more beneficial follow-on warranties and building institutional knowledge for procuring and administering effective warranties. The administrative problems that we have identified appear to be unintended consequences of the warranty law resulting from the de facto mandatory nature of warranties. Attempts to correct the problems administratively have not been very successful. Since DOD continues to have problems administering weapon system warranties and the warranties provide minimal benefits for the costs incurred, Congress should repeal 10 U.S.C. 2403. Were the warranty requirement repealed, DOD and the services would still have the management flexibility to obtain warranties for major weapon systems only when deemed appropriate. As was done prior to the warranty law, DOD and the services would rely on the FAR and their own policies to determine when it is appropriate to obtain a weapon system warranty. The decision should be documented as part of the system acquisition strategy. In commenting on a draft of this report, DOD stated that it “strongly supports” our recommendation that Congress repeal 10 U.S.C. 2403. Since 1992, DOD has supported congressional repeal of the weapon system warranty law. DOD only partially concurred with the recommendations to the Secretary of Defense, stating that the solution to the problems we cited is repeal of the law. DOD indicated that it will ask the military departments to review their warranty waiver process. DOD noted, however, that in order to remove the disincentives and streamline the waiver process it needs relief from the congressional notification requirement and the warranty waiver approval level. 
DOD stated that it did not see the need to separately price warranties because it has insight into warranty costs through cost reporting and can project warranty costs from actual claim data. Although DOD currently has insight into warranty costs, we found that this information is often not used in the cost-benefit analyses. Therefore, we believe that separately pricing the warranty would permit DOD to perform better warranty cost-benefit analyses. DOD stated that all the military departments have in place regulations that require post-award assessments, but acknowledged that the military departments were not fully complying with existing regulations. DOD stated that it would reiterate the importance of such assessments in a memorandum to the military departments. Our review indicated, however, that only the Army’s regulation specifically requires a final payoff assessment to determine the economic benefit derived from a warranty. We believe that the Air Force and the Navy regulations should be revised to explicitly require final payoff assessments. DOD’s comments are reprinted in their entirety in appendix II.
GAO reviewed the Department of Defense's (DOD) use of major weapon system warranties, focusing on whether these warranties: (1) provide expected benefits to the government; and (2) are compatible with the weapon systems acquisition process. GAO found that: (1) DOD receives about $1 in direct benefit for every $19 paid to a contractor for a warranty; (2) DOD program officials rarely seek to waive the warranty requirement because waivers require the approval of an Assistant Secretary of Defense and congressional defense authorization and appropriations committees; (3) despite DOD regulations that require a cost-benefit analysis to determine if the warranty is cost-effective, some cost-benefit analyses are inadequate; (4) the military services are not conducting post-award assessments to determine whether warranty costs are commensurate with the benefits received and to identify advantageous and disadvantageous warranty provisions for future contracts; (5) the government has traditionally self-insured because its large resources make protection against catastrophic loss unnecessary, and it is often the sole buyer for a product and cannot share the insurance costs with other buyers; (6) because a contractor cannot allocate the cost of insuring against the risk of failure among multiple buyers, DOD ends up bearing the entire estimated cost; (7) DOD officials said that warranties do not motivate contractors to improve the quality of their products; and (8) warranties only extend the period that DOD can determine that a product does not conform to contract specifications and requirements and require the contractor to make repairs.
Seaports are critical gateways for the movement of international commerce. More than 95 percent of our non-North American foreign trade (and 100 percent of certain commodities, such as foreign oil, on which we are heavily dependent) arrives by ship. In 2001, approximately 5,400 ships carrying multinational crews and cargoes from around the globe made more than 60,000 U.S. port calls. More than 6 million containers (suitable for truck-trailers) enter the country annually. Particularly with “just-in-time” deliveries of goods, the expeditious flow of commerce through these ports is so essential that the Coast Guard Commandant stated after September 11, “even slowing the flow long enough to inspect either all or a statistically significant random selection of imports would be economically intolerable.” This tremendous flow of goods creates many kinds of vulnerability. Drugs and illegal aliens are routinely smuggled into this country, not only in small boats but also hidden among otherwise legitimate cargoes on large commercial ships. These same pathways are available for exploitation by a terrorist organization or any nation or person wishing to attack us surreptitiously. Protecting against these vulnerabilities is made more difficult by the tremendous variety of U.S. ports. Some are multibillion-dollar enterprises, while others have very limited facilities and very little traffic. Cargo operations are similarly varied, including containers, liquid bulk (such as petroleum), dry bulk (such as grain), and iron ore or steel. Amidst this variety is one relatively consistent complication: most seaports are located in or near major metropolitan areas, where an attack or incident would put more people at risk. The federal government has jurisdiction over harbors and interstate and foreign commerce, but state and local governments are the main port regulators. 
The entities that coordinate port operations, generally called port authorities, differ considerably from each other in their structure. Some are integral administrative arms of state or local governments; others are autonomous or semi-autonomous self-sustaining public corporations. At least two—The Port Authority of New York and New Jersey and the Delaware River Port Authority—involve two states each. Port authorities also have varying funding mechanisms. Some have the ability to levy taxes, in some cases with voter approval required and in others not. Some have the ability to issue general obligation bonds, and some can issue revenue bonds. Some ports receive funding directly from the general funds of the governments of which they are a part, and some receive state funding support through trust funds or loan guarantees. A terrorist act involving chemical, biological, radiological, or nuclear weapons at one of these seaports could result in extensive loss of lives, property, and business; affect the operations of harbors and the transportation infrastructure (bridges, railroads, and highways) within the port limits; cause extensive environmental damage; and disrupt the free flow of trade. Port security measures are aimed at minimizing the exploitation or disruption of maritime trade and the underlying infrastructure and processes that support it. The Brookings Institution reported in 2002 that a weapon of mass destruction shipped by container or mail could cause damage and disruption costing the economy as much as $1 trillion. Port vulnerabilities stem from inadequate security measures as well as from the challenge of monitoring the vast and rapidly increasing volume of cargo, persons, and vessels passing through the ports. 
Port security is a complex issue that involves numerous key actors, including federal, state, and local law enforcement and inspection agencies; port authorities; private sector businesses; and organized labor and other port employees. The routine border control activities of certain federal agencies, most notably the Coast Guard, Customs Service, and INS, seek to ensure that the flow of cargo, vessels, and persons through seaports complies with all applicable U.S. criminal and civil laws. Also, the Coast Guard, the Federal Bureau of Investigation, the Transportation Security Administration (TSA), and the Department of Defense (DOD) seek to ensure that critical seaport infrastructure is safeguarded from major terrorist attack. While no two ports in the United States are exactly alike, many share certain characteristics that make them vulnerable to terrorist attacks or to exploitation as shipping conduits by terrorists. These characteristics pertain to both their physical layout and their function. For example: Many ports are extensive in size and accessible by water and land. Their accessibility makes it difficult to apply the kinds of security measures that, for example, can be more readily applied at airports. Most ports are located in or near major metropolitan areas; their activities, functions, and facilities, such as petroleum tank farms and other potentially hazardous material storage facilities, are often intertwined with the infrastructure of urban life, such as roads, bridges, and factories. The sheer amount of material being transported through ports provides a ready avenue for the introduction of many different types of threats. The combination of many different transportation modes (e.g., rail and roads) and the concentration of passengers, high-value cargo, and hazardous materials make ports potential targets. The Port of Tampa illustrates many of these vulnerability characteristics. 
The port is large and sprawling, with port-owned facilities interspersed among private facilities along the waterfront, increasing the difficulty of access control. It is Florida’s busiest port in terms of raw tonnage of cargo, and the cargoes themselves include about half of Florida’s volume of hazardous materials, such as anhydrous ammonia, liquid petroleum gas, and sulfur. The port’s varied business—bulk freighters and tankers, container ships, cruise ships, fishing vessels, and ship repair and servicing—brings many people onto the port to work daily. For example, in orange juice traffic alone, as many as 2,000 truck drivers might be involved in offloading ships. The Tampa port’s proximity to substantial numbers of people and facilities is another reason for concern. It is located close to downtown Tampa’s economic core, making attacks on hazardous materials facilities potentially of greater consequence than for more isolated ports. A number of busy public roads pass through the port. In addition, located nearby are facilities such as MacDill Air Force Base (the location of the U.S. Central Command, which is leading the fighting in Afghanistan) and the Crystal River nuclear power plant, both of which could draw the attention of terrorists. Since September 11, the various stakeholders involved in ports have undertaken extensive initiatives to begin strengthening their security against potential terrorist threats. As might be expected given the national security aspects of the September 11 attacks, these activities have been most extensive at the federal level. However, states, port authorities, local agencies, and private companies have also been involved. The efforts extend across a broad spectrum of ports and port activities, but the levels of effort vary from location to location. While many federal agencies are involved in aspects of port security, three play roles that are particularly key—the Coast Guard, Customs Service, and INS. 
The Coast Guard, which has overall federal responsibility for many aspects of port security, has been particularly active. After September 11, the Coast Guard responded by refocusing its efforts and repositioning vessels, aircraft, and personnel not only to provide security, but also to increase visibility in key maritime locations. Some of its important actions included the following: Conducting initial risk assessments of ports. These limited risk assessments, done by Coast Guard marine safety personnel at individual ports, identified high-risk infrastructure and facilities within specific areas of operation. The assessments helped determine how the Coast Guard’s small boats would be used for harbor security patrols. The Port of Tampa received one of these assessments, and the Coast Guard increased the frequency of harbor patrols in Tampa. Redeploying assets. The Coast Guard recalled all cutters that were conducting offshore law enforcement patrols for drug, immigration, and fisheries enforcement and repositioned them at entrances to such ports as Boston, Los Angeles, Miami, New York, and San Francisco. Many of these cutters are now being returned to other missions, although some continue to be involved in security-related activities. Strengthening surveillance of passenger-related operations and other high-interest vessels. The Coast Guard established new guidelines for developing security plans and implementing security measures for passenger vessels and passenger terminals, including access controls to passenger terminals and security zones around passenger ships. In Tampa and elsewhere, the Coast Guard established security zones around moored cruise ships and other high-interest vessels, such as naval vessels and tank ships carrying liquefied petroleum gas. The Coast Guard also boarded or escorted many of those vessels to ensure their safe entry into the ports. 
In some areas, such as San Francisco Bay, the Coast Guard also established waterside security zones adjacent to large airports located near the water. Laying the groundwork for more comprehensive security planning. The Coast Guard began a process for comprehensively assessing the security conditions of 55 U.S. ports over a 3-year period. The agency has a contract with a private firm, TRW Systems, to conduct detailed vulnerability assessments of these ports. The first four assessments are expected to begin in mid-August 2002, following initial work to develop a methodology and identify security standards and best practices that can be used for evaluating the security environment of ports. Tampa is expected to be among the first eight ports assessed under this process. Driving maritime security worldwide. The Coast Guard is working through the International Maritime Organization to improve maritime security worldwide. It has proposed accelerated implementation of electronic ship identification systems, ship and port facility security plans, and the undertaking of port security assessments. The proposals have been approved in a security working group and will be before the entire organization in December 2002. According to the U.S. Customs Service, it has several initiatives under way in the United States and elsewhere to help ensure the security of cargo entering through U.S. ports. These initiatives include the following: Inspecting containers and other cargoes. Beginning in the summer of 2002, Customs plans to deploy 20 new mobile gamma ray imaging devices at U.S. ports to help inspectors examine the contents of cargo containers and vehicles. Customs is also adapting its computer-based system for targeting containers for inspection. 
The system, originally designed for the agency’s counter-narcotics efforts, flags suspect shipments for inspection on the basis of an analysis of shipping, intelligence, and law enforcement data, which are also checked against criteria derived from inspectors’ expertise. These new efforts would adjust the system to better target terrorist threats as well. Prescreening cargo. In its efforts to increase security, Customs has entered into an agreement to station inspectors at three Canadian ports to prescreen cargo bound for the United States. The agency has since reached similar agreements with the Netherlands, Belgium, and France to place U.S. inspectors at key ports and initiated similar negotiations with other foreign governments in Europe and Asia. Working with the global trade community. Customs is also engaging the trade community in a partnership program to protect U.S. borders and international commerce from acts of terrorism. In this recent initiative, U.S. importers—and ultimately carriers and other businesses—enter into voluntary agreements with Customs to enhance the security of their global supply chains and those of their business partners. In return, Customs will agree to expedite the clearance of the members’ cargo at U.S. ports of entry. INS is also working on a number of efforts to increase border security to prevent terrorists or other undesirable aliens from entering the United States. INS proposes to spend nearly $3 billion on border enforcement in fiscal year 2003—about 75 percent of its total enforcement budget of $4.1 billion. A substantial number of INS’s actions relate to creating an entry and exit system to identify persons posing security threats. INS is working on a system to create records for aliens arriving in the United States and match them with those aliens’ departure records. The Immigration and Naturalization Service Data Management Improvement Act of 2000 requires the U.S. 
Attorney General to implement such a system at airports and seaports by the end of 2003, at the 50 land border ports with the greatest numbers of arriving and departing aliens by the end of 2004, and at all ports by the end of 2005. The USA Patriot Act, passed in October 2001, further instructs the U.S. Attorney General and the Secretary of State to focus on two new elements in designing this system—tamper-resistant documents that are machine-readable at ports of entry and the use of biometric technology, such as fingerprint and retinal scanning. Another act passed by Congress goes further by making the use of biometrics a requirement in the new entry and exit system. A potentially more active agency in the future is the new TSA, which has been directed to protect all transportation systems and establish needed standards. To date, however, TSA’s involvement in improving port security has been limited. TSA officials report that they are working with the Coast Guard, Customs, and other public and private stakeholders to enhance all aspects of maritime security, such as developing security standards, developing and promulgating regulations to implement the standards, and monitoring the execution of the regulations. TSA, along with the Maritime Administration and the Coast Guard, is administering the federal grant program to enhance port security. TSA officials also report that they plan to establish a credentialing system for transportation workers. The Congress is currently considering additional legislation to further enhance seaport security. Federal port security legislation is expected to emerge from conference committee as members reconcile S. 1214 and H.R. 3983. Key provisions of these two bills include requiring vulnerability assessments at major U.S. seaports and developing comprehensive security plans for all waterfront facilities. 
Other provisions in one or both bills include establishing local port security committees, assessing antiterrorism measures at foreign ports, conducting antiterrorism drills, improving training for maritime security professionals, making federal grants for security infrastructure improvements, preparing a national maritime transportation security plan, credentialing transportation workers, and controlling access to sensitive areas at ports. The Coast Guard and other agencies have already started work on some of the provisions of the bills in anticipation of possible enactment. Some funding has already been made available for enhanced port security. As part of an earlier DOD supplemental budget appropriation for fiscal year 2002, the Congress appropriated $93.3 million to TSA for port security grants. Three DOT agencies—the Maritime Administration, the Coast Guard, and TSA— screened grant applications and recently awarded grants to 51 U.S. ports for security enhancements and assessments. Tampa received $3.5 million to (1) improve access control, which Tampa Port Authority officials believe will substantially eliminate access to the port by unauthorized persons or criminal elements and (2) install camera surveillance to enforce security measures and to detect intrusions. More recently, Congress passed legislation authorizing an additional $125 million for port security grants, including $20 million for port incident training and exercises. The federal government has jurisdiction over navigable waters (including harbors) and interstate and foreign commerce and is leading the way for the nation’s ongoing response to terrorism; however, state and local governments are the main regulators of seaports. Private sector terminal operators, shipping companies, labor unions, and other commercial maritime interests all have a stake in port security. 
Our discussions with public and private sector officials in several ports indicate that although many actions have been taken to enhance security, there is little uniformity in actions taken thus far. Florida has been a leader in state-initiated actions to enhance port security. In 2001—and prior to September 11—Florida became the first state to establish security standards for ports under its jurisdiction and to require these ports to maintain approved security plans that comply with these standards. According to Florida state officials, other states have considered similar legislation. However, according to an American Association of Port Authorities official, Florida is the only state thus far to enact such standards. Although other states have not created formal requirements as Florida has done, there is evidence that many ports have taken various actions on their own to address security concerns in the wake of September 11. State and local port administrators we spoke with at such locations as the South Carolina State Ports Authority and the Port Authority of New York and New Jersey, for example, said they had conducted security assessments of their ports and made some improvements to their perimeter security and access control. At the eight ports where our work has been concentrated thus far, officials reported expending a total of more than $20 million to enhance security since September 11. Likewise, private companies said they have taken some actions, although they have varied from location to location. For example, one shipping company official said that the company had performed a security assessment of its own facility; another facility operator indicated that it had assessed its own security needs and added access controls and perimeter security. In addition, private sector officials at the port of Charleston, South Carolina, told us that some facility operators had done more than others to improve their security. 
The Coast Guard’s Captain of the Port in Charleston agreed with their assessment. He said that one petroleum company had tight security, including access control with a sign-in at the gate and visitor’s badge and identification checks for everyone entering the facility. Another petroleum facility required all visitors to watch a safety and security video, while a third had done so little that the Captain characterized security there as inadequate. Several challenges need to be addressed to translate the above initiatives into the kind of enhanced security system that the Congress and other policymakers have envisioned. A significant organizational change appears likely to occur with congressional action to establish a new Department of Homeland Security (DHS), which will integrate many of the federal entities involved in protecting the nation’s borders and ports. The Comptroller General has recently testified that we believe there is likely to be considerable benefit over time from restructuring some of the homeland security functions, including reducing risk and improving the economy, efficiency, and effectiveness of these consolidated agencies and programs. Despite the hopeful promise of this significant initiative, the underlying challenges of successfully implementing measures to improve the security of the nation’s ports remain. These challenges include implementation of a set of standards that define what safeguards a port should have in place, uncertainty about the amount and sources of funds needed to adequately address identified needs, and difficulties in establishing effective coordination among the many public and private entities that have a stake in port security. One major challenge involves developing a complete set of standards for the level of security that needs to be present in the nation’s ports. 
Adequate standards, consistently applied, are important because lax security at even a handful of ports could make them attractive targets for terrorists interested in smuggling dangerous cargo, damaging port infrastructure, or otherwise disrupting the flow of goods. In the past, the level of security has largely been a local issue, and practices have varied greatly. For example, at one port we visited most port facilities were completely open, with few fences and many open gates. In contrast, another port had completely sealed all entrances to the port, and everyone attempting to gain access to port property had to show identification and state their port business before access to the port was granted. Practices also vary greatly among facilities at a single port. At Tampa, for example, a set of state standards applies to petroleum and anhydrous ammonia tanks on port property; but security levels at similar facilities on private land are left to the discretion of private companies. Development of a set of national standards that would apply to all ports and all public and private facilities is well under way. In preparing to assess security conditions at 55 U.S. ports, the Coast Guard’s contractor has been developing a set of standards since May 2002. The Coast Guard standards being developed cover such things as preventing unauthorized persons from accessing sensitive areas, detecting and intercepting intrusions, checking backgrounds of those whose jobs require access to port facilities, and screening travelers and other visitors to port facilities. These standards are performance-based, in that they describe the desired outcome and leave the ports considerable discretion about how to accomplish the task. For example, the standards call for all employees and passengers to be screened for dangerous items or contraband but do not specify the method that must be used for these screenings. 
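The performance-based approach just described can be sketched in code. This is a purely illustrative sketch, not the Coast Guard's actual system: the standard texts, the port plan, and all function names are invented for the example. It shows the key property of a performance standard, namely that compliance is judged by whether each required outcome is addressed, not by which method the port chose to meet it.

```python
# Hypothetical illustration of performance-based standards: each standard
# names only a required outcome; the port supplies its own method.
from dataclasses import dataclass


@dataclass
class Standard:
    outcome: str  # the required result; deliberately no prescribed method


@dataclass
class PortPlan:
    port: str
    measures: dict  # maps each addressed outcome -> the method the port chose


# Invented examples loosely modeled on the outcomes described in the text.
STANDARDS = [
    Standard("screen all employees and passengers for dangerous items"),
    Standard("prevent unauthorized access to sensitive areas"),
]


def compliance_gaps(plan: PortPlan, standards=STANDARDS):
    """Return the required outcomes the plan does not address at all.
    The chosen method is never evaluated here -- only outcome coverage."""
    return [s.outcome for s in standards if s.outcome not in plan.measures]


# A hypothetical plan that meets one outcome (by a method of its own choosing)
# but has not yet addressed the other.
plan = PortPlan("Example Port", {
    "screen all employees and passengers for dangerous items":
        "x-ray portals plus canine teams",
})
print(compliance_gaps(plan))
```

Two ports could satisfy the same standard with entirely different methods (x-ray portals at one, hand searches at another) and both would show no gap, which is the flexibility the Coast Guard cites as the advantage over a "cookie-cutter" rule.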
The Coast Guard believes that using performance standards will provide ports with the needed flexibility to deal with varying conditions and situations in each location rather than requiring a “cookie-cutter” approach that may not be as effective in some locations as it would be in others. Developing and gaining overall acceptance of these standards is difficult enough, but implementing them seems likely to be far tougher. Implementation includes resolving thorny situations in which security concerns may collide with economic or other goals. Again, Tampa offers a good example. Some of the port’s major employers consist of ship repair companies that hire hundreds of workers for short-term projects as the need arises. Historically, according to port authority officials, these workers have included persons with criminal records. However, new state requirements for background checks, as part of issuing credentials, could deny such persons needed access to restricted areas of the port. From a security standpoint, excluding such persons may be advisable; but from an economic standpoint, a company may have difficulty filling jobs if it cannot include such persons in the labor pool. Around the country, ports will face many such issues, ranging from these credentialing questions to deciding where employees and visitors can park their cars. To the degree that some stakeholders believe that the security actions are unnecessary or conflict with other goals and interests, achieving consensus about what to do will be difficult. Another reason that implementation poses a challenge is that there is little precedent for how to enforce the standards. The Coast Guard believes it has authority under current law and regulations to require security upgrades, at both public and private facilities. Coast Guard officials have also told us that they may write regulations to address the weaknesses found during the ongoing vulnerability assessment process. 
However, the size, complexity, and diversity of port operations do not lend themselves to an enforcement approach such as the one the United States adopted for airports in the wake of September 11, when airports were shut down temporarily until they could demonstrate compliance with a new set of security procedures. In the case of ports, compliance could take much longer, require greater compromises on the part of stakeholders, and raise immediate issues about how compliance will be paid for—and who will bear the costs. Many of the planned security improvements at seaports will require costly outlays for infrastructure, technology, and personnel. Even before September 11, the Interagency Commission on Crime and Security in U.S. Seaports estimated the costs for upgrading security infrastructure at U.S. ports at $10 million to $50 million per port. Officials at the Port of Tampa estimated their cost for bringing the port’s security into compliance with state standards at $17 million—with an additional $5 million each year for security personnel and other recurring costs. Deciding how to pay for these additional outlays carries its own set of challenges. Because security at the ports is a concern shared among federal, state, and local governments, as well as among private commercial interests, the issue of who should pay to finance antiterrorism activities may be difficult to resolve. Given the importance of seaports to our nation’s economic infrastructure and the importance of preventing dangerous persons or goods from entering our borders, some have argued that protective measures for ports should be financed at the federal level. Port and private sector officials we spoke with said that combating federal crime, including terrorism, is the federal government’s responsibility and that if security is needed, the federal government should provide it. 
On the other hand, many of the economic development benefits that ports bring, such as employment and tax revenue, remain within the state or the local area. In addition, commercial interests and other private users of ports could directly benefit from security measures because steps designed to thwart terrorists could also prevent others from stealing goods or causing other kinds of economic damage. The federal government has already stepped in with additional funding, but demand has far outstripped the additional amounts made available. For example, when the Congress appropriated $93.3 million to help ports with their security needs, the grant applications received by TSA totaled $697 million—many multiples of the amount available (even including the additional $125 million just appropriated for port security needs). However, it is not clear that $697 million is an accurate estimate of the need because, according to Coast Guard and Maritime Administration officials, applications from private industry may have been limited because of the brief application period. In Tampa, while officials believe that they need $17 million for security upgrades, they submitted an application for about $8 million in federal funds and received $3.5 million. In the current environment, ports may have to try to tap multiple sources of funding. Tampa officials told us that they plan to use funds from a variety of state, local, and federal sources to finance their required security improvements. These include such sources as federal grants, state transportation funds, local tax and bond revenues, and operating revenues from port tenants. In Florida, one major source for security money has been the diversion of state funds formerly earmarked for economic development projects. According to Florida officials, in 2002, for example, Florida ports have spent virtually all of the $30 million provided by the state for economic development on security-related projects. 
Ports throughout the nation may have varying abilities to tap similar sources of funding. In South Carolina, for example, where port officials identified $12.2 million in needed enhancements and received $1.9 million in TSA grants, officials said no state funding was available. By contrast, nearby ports in North Carolina, Georgia, and Virginia do have access to at least some state-subsidized funding. South Carolina port officials also reported that they had financed $755,000 in security upgrades with operating revenue, such as earnings from shippers’ rental of port-owned equipment, but they said operating revenues were insufficient to pay for much of the needed improvements. These budget demands place pressure on the federal government to make the best decisions about how to use the funding it makes available. Governments also have a variety of policy tools, including grants, regulations, tax incentives, and information-sharing mechanisms, with which to motivate or mandate lower levels of government or the private sector to help address security concerns; each tool has different advantages and drawbacks, for example, in achieving results or promoting accountability. Security legislation currently under consideration by the Congress includes, for example, federal loan guarantees as another funding approach in addition to direct grants. Finally, once adequate security measures are in place, there are still formidable challenges to making them work. As we have reported, one challenge to achieving national preparedness and response goals hinges on the federal government’s ability to form effective partnerships among many entities. If such partnerships are not in place—and equally important, if they do not work effectively—those who are ultimately in charge cannot gain the resources, expertise, and cooperation of the people who must implement security measures. One purpose in creating the proposed DHS is to enhance such partnerships at the federal level. 
Part of this challenge involves making certain that all the right people are involved. At the ports we reviewed, the extent to which this had been done varied. The primary means of coordination at many ports are port security committees, which are led by the Coast Guard; the committees offer a promising forum for federal, state, and local government and private stakeholders to share information and make decisions collaboratively. For example, a Captain of the Port told us that coordination and cooperation among port stakeholders at a port in his area of responsibility are excellent and that monthly meetings are held with representation from law enforcement, the port authority, shipping lines, shipping agents, and the maritime business community. However, in another port, officials told us that their port security committees did not always include representatives from port stakeholders who were able to speak for and make decisions on behalf of their organization. An incident that occurred shortly before our review at the Port of Honolulu illustrates the importance of ensuring that security measures are carried out and that they produce the desired results. The Port had a security plan that called for notifying the Coast Guard and local law enforcement authorities about serious incidents. One such incident took place in April 2002, when, as cargo was being loaded onto a cruise ship, specially trained dogs reacted to possible explosives in one of the loads, and the identified pallet was set aside. Despite the notification policy, personnel working for the shipping agent and the private company providing security at the dock failed to notify either local law enforcement officials or the Coast Guard about the incident. A few hours after the incident took place, Coast Guard personnel conducting a foot patrol found the pallet and inquired about it, and, when told about the dogs’ reaction, they immediately notified local emergency response agencies. 
Once again, however, the procedure was less than successful because the various organizations were all using radios that operated on different frequencies, making coordination between agencies much more difficult. Fortunately, the Honolulu incident did not result in any injuries or loss, and Coast Guard officials said that it illustrates the importance of practicing and testing security measures. They also said that for procedures to be effective when needed, they must be practiced and the exercises critiqued so that the procedures become refined and second nature to all parties. According to a Coast Guard official, since the April incident, a second incident occurred in which a possible explosive was detected. This time, all the proper procedures were followed and all the necessary parties were contacted. One aspect of coordination and cooperation that was lacking in the standard security measures we observed is the sharing of key intelligence about such issues as threats and law enforcement actions. No standard protocol exists for such an information exchange between the federal government and the state and local agencies that need to react to it. In addition, no formal mechanism exists at the ports we visited for the coordination of threat information. State and local officials told us that for their governments to act as partners with the federal government in homeland security, of which port security is a critical part, they need better access to threat information. We identified a broad range of barriers that must be overcome to meet this challenge. For example, one barrier involves security clearances. Officials at the National Emergency Management Association (NEMA), the organization that represents state and local emergency management personnel, told us that personnel in the agencies they represent have difficulty in obtaining critical intelligence information. 
Although state or local officials may hold security clearances issued by the Federal Emergency Management Agency, other federal agencies, such as the Federal Bureau of Investigation, do not generally recognize these security clearances. Similarly, officials from the National Governors Association told us that because most state governors do not have a security clearance, they cannot receive any classified threat information. This could affect their ability to effectively use the National Guard or state police to prevent and respond to a terrorist attack, as well as hamper their emergency preparedness capability. The importance of information-sharing on an ongoing basis can be seen in an example in which three agencies, each holding its own piece of the puzzle, at first failed to connect their information but eventually uncovered a scheme under which port operations were being used to illegally obtain visas to enter the United States. The scheme, which was conducted in Haiti, was discovered only after a number of persons entered the United States illegally. Under this scheme, people would apply at the U.S. Consulate in Haiti for entrance visas on the pretext that they had been hired to work on ships that were about to call at the Port of Miami. However, the ships were no longer in service. The Coast Guard knew that these ships were no longer in service, but this information was not known by the State Department (which issued the visas) or INS (which admitted the people into the United States). A Coast Guard official at the Miami Marine Safety Office estimated that hundreds of people entered the country illegally in 2002. Once the scheme was discovered, Coast Guard personnel contacted certain American embassies to inform them of vessels that had been taken out of active service or lost at sea and instituted procedures to ensure that prospective crew members were joining a legitimate vessel. 
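The cross-agency check the Coast Guard instituted can be illustrated with a short sketch. This is a hypothetical illustration, not the agencies' actual procedure or data: the vessel names, statuses, and function name are all invented. The point it captures is that a crew-member visa decision consults the Coast Guard's vessel-status records, so an application naming an out-of-service, lost, or unknown ship is refused rather than assumed legitimate.

```python
# Hypothetical sketch of the vessel-status cross-check described above.
# The Coast Guard's piece of the puzzle: which vessels are actually in service.
ACTIVE = "active"
OUT_OF_SERVICE = "out of service"
LOST_AT_SEA = "lost at sea"

# Invented records standing in for Coast Guard vessel-status data.
vessel_status = {
    "MV Example Star": ACTIVE,
    "MV Example Moon": OUT_OF_SERVICE,  # the kind of ship used in the scheme
    "MV Example Sun": LOST_AT_SEA,
}


def crew_visa_check(claimed_vessel: str) -> bool:
    """Approve a crew-member visa application only if the vessel the
    applicant claims to be joining exists in the records and is active.
    Unknown vessels are rejected, not assumed legitimate."""
    return vessel_status.get(claimed_vessel) == ACTIVE


print(crew_visa_check("MV Example Star"))  # legitimate, in-service vessel
print(crew_visa_check("MV Example Moon"))  # out of service: visa refused
```

In the Haiti scheme, the consulate and INS acted without this piece of information; sharing it turns a pretext that once worked into one the check rejects immediately.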
The breadth of the challenge of improved coordination and collaboration is evident in the sheer number of players involved, even if the legislation creating the proposed DHS is enacted. Coordination challenges will remain among the 22 federal entities that would be brought together in the proposed DHS; between these diverse elements of DHS and the many entities with homeland security functions still outside DHS; and between the full range of federal entities and the myriad of state, local, and private stakeholders. In summary, Mr. Chairman, making America’s ports more secure is not a short-term or easy project. There are many challenges that must be overcome. The ports we visited and the responsible federal, state, and local entities have made a good start, but they have a long way to go. While there is widespread support for making the nation safe from terrorism, ports are likely to epitomize a continuing tension between the desire for safety and security and the need for expeditious, open flow of goods both into and out of the country. This completes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For information about this testimony, please contact JayEtta Z. Hecker, Director, Physical Infrastructure Issues, at (202) 512-2834. Individuals making key contributions to this testimony included Randy Williamson, Steven Calvo, Jonathan Bachman, Jeff Rueckhaus, and Stan Stenersen. To learn of the vulnerabilities present at ports, the initiatives undertaken since September 11 to mitigate them, and the challenges that could impede further progress, we judgmentally selected 10 ports—8 of which we visited—to provide a geographically diverse sample and, in many cases, include ports where special attention had been devoted to security issues. For example, we visited the ports in Tampa, Miami, and Ft. 
Lauderdale (Port Everglades) because they—like all of Florida’s deepwater ports—are required to implement state-mandated security standards, and because they handle large numbers of cruise passengers or large quantities of containerized or bulk cargoes. While in Florida, we also met with state officials from the Office of Drug Control, which developed the port security standards and the legislation codifying them, and from the Department of Law Enforcement, charged with overseeing the implementation of the state standards. In addition, we visited ports in Charleston, South Carolina, and Honolulu, Hawaii, which had been the subject of detailed vulnerability studies by the Defense Threat Reduction Agency (DTRA), in order to determine their progress in implementing the security enhancements recommended by DTRA. For further geographical representation, we visited the ports in Oakland, California; Tacoma, Washington; and Boston, Massachusetts, and held telephone discussions with officials from the Port Authority of New York and New Jersey and with the Coast Guard in Guam. At each port visit, we toured the port on land and from the water in order to view the enhancements made since September 11 and the outstanding security needs. We also interviewed officials from the Coast Guard and other public and private sector port stakeholders, such as port authorities, state transportation departments, marine shipping companies, shipping agents, marine pilots, and private terminal operators. To determine federal, state, local, and private initiatives to enhance port security and the implementation challenges, we had several conversations with officials from Coast Guard headquarters, DTRA, the Maritime Administration, the American Association of Port Authorities, and the private contractor recently hired by the Coast Guard to conduct comprehensive vulnerability assessments at 55 U.S. ports. 
These discussions included issues related to port security assessments—both completed and planned—communication and coordination with port stakeholders, federal funding of port security enhancements, and other issues. In addition, we analyzed administrative data from the federally funded TSA Port Security Grant Program for additional information on the security needs of ports and the ports’ progress since September 11 in enhancing their security.

Homeland Security: Critical Design and Implementation Issues (GAO-02-957T, July 17, 2002).
Homeland Security: Title III of the Homeland Security Act of 2002 (GAO-02-927T, July 9, 2002).
Homeland Security: Intergovernmental Coordination and Partnerships Will Be Critical to Success (GAO-02-899T, July 1, 2002).
Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting (GAO-02-893T, June 28, 2002).
Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success (GAO-02-886T, June 25, 2002).
Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains (GAO-02-610, June 7, 2002).
National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy (GAO-02-811T, June 7, 2002).
Homeland Security: Responsibility and Accountability for Achieving National Goals (GAO-02-627T, April 11, 2002).
National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security (GAO-02-621T, April 11, 2002).
Homeland Security: Progress Made; More Direction and Partnership Sought (GAO-02-490T, March 12, 2002).
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs (GAO-02-160T, November 7, 2001).
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts (GAO-02-208T, October 31, 2001).
Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, October 12, 2001).
Homeland Security: A Framework for Addressing the Nation’s Issues (GAO-01-1158T, September 21, 2001).
Although most of the attention following the September 11 terrorist attacks focused on airport security, an increasing emphasis has since been placed on ports. Ports are inherently vulnerable to terrorist attacks because of their size, generally open accessibility by water and land, metropolitan area location, the amount of material being transported through ports, and the ready transportation links to many locations within the country's borders. Since September 11, federal, state, and local authorities, and private sector stakeholders have addressed vulnerabilities in the security of the nation's ports. The Coast Guard has acted as a focal point for assessing and addressing security concerns, anticipating many of the requirements that Congress and the administration are contemplating or have already put into place. Although the proposal to consolidate the federal agencies responsible for border security may offer some long-term benefits, overcoming three challenges will be key to successfully enhancing security at the nation's ports: standards, funding, and collaboration.
The 1994 Uruguay Round agreements created the WTO dispute settlement system. The new system replaced the one under the General Agreement on Tariffs and Trade (GATT), the predecessor to the WTO. The Uruguay Round created a stronger dispute settlement system that, unlike the system under the GATT, discourages stalemates by not allowing parties to block decisions. In addition, the new system established a standing Appellate Body, with the aim of making decisions more stable and predictable. The WTO dispute settlement system operates in four major phases: consultation, panel review, Appellate Body review (when a party appeals the panel ruling), and implementation of the ruling. To initiate, or file, a dispute, a WTO member requests consultations with the defending member. If the parties do not settle the case during consultations, the complainant may then request that a panel be established. Nonpermanent, three-person panels issue formal decisions, or reports; for cases that are appealed, three members of a permanent, seven-member Appellate Body—composed of individuals with recognized standing in the field of law and international trade—review panel findings. The Dispute Settlement Body, which is composed of representatives of all WTO members, approves all final reports, and only a consensus of the members can block decisions. Thus, no individual member can block a decision. When a WTO member challenges a trade remedy measure, the panels and the Appellate Body apply standards of review, outlined in certain WTO agreements, to evaluate members’ factual and legal determinations supporting these measures. In the United States, the Department of Commerce and the International Trade Commission (ITC) investigate whether the United States should impose antidumping or countervailing duties to offset unfair foreign trade practices. The ITC also investigates whether the conditions exist for the United States to invoke safeguards in response to import surges. 
From 1995 through 2002, WTO members brought 198 formal dispute settlement cases against other members. One-third (64 cases) involved members’ trade remedies, and the proportion of trade remedy cases among all filings generally increased over the period. Among WTO members, the United States has been by far the most frequent defendant in trade remedy cases but relatively less active in filing complaints. Overall, however, WTO members have challenged a relatively small share of the trade measures that their fellow members imposed, although the proportion of U.S. trade measures challenged was larger. Overall, about one-third (64) of all WTO cases involved members’ trade remedies. From 1995 to 2000, an increasing proportion of the cases filed pertained to trade remedy measures and laws, as shown in figure 1. In 2001 and 2002, this trend shifted somewhat. In comparing WTO members’ participation in the trade remedy cases, the United States has by far been the most frequent defendant but has been less active as a complainant. As shown in figure 2, the United States was a defendant in 30 (47 percent) of the 64 trade remedy cases, a majority of which were filed since January 2000. The next most frequent defendants were Argentina, which defended 6 cases, and the EU, a defendant in 5 cases. On the other hand, the United States was less active than other WTO members in filing trade remedy cases. As figure 2 also shows, the EU was the most frequent complainant in the 64 trade remedy cases, filing 16 complaints. Six WTO members each filed more complaints than the United States. U.S. agency officials said that it was not surprising that the United States had been a defendant more often than a complainant in WTO disputes since (1) the United States has the world’s biggest economy and most desirable market and (2) U.S. laws and procedures are more detailed and transparent than those of other members that are large users of trade remedies. 
These officials also pointed to the easy availability in the United States of trade lawyers, who could assist in bringing trade remedy actions, as another factor. Although members notified the WTO that they imposed 1,405 trade remedy measures from 1995 through 2002, only a small percentage of these measures were challenged in the dispute settlement system. Specifically, WTO members challenged only 63 (4 percent) of the 1,405 measures, but nearly one-half of these challenges involved U.S. trade measures. Over the same period, as shown in figure 3, the United States imposed the most trade remedy measures (239) and had the largest number and share (29, or 12 percent) of its measures challenged by other WTO members. On the other hand, India, the next-largest user of trade remedy measures, had none of its 226 measures challenged. WTO members challenged 4 (2 percent) of the EU’s 182 trade remedy measures and 7 (6 percent) of Argentina’s 127 trade remedy measures. While the 25 WTO trade remedy rulings completed from 1995 through 2002 generally rejected domestic agency determinations supporting trade measures, the rulings upheld a vast majority of the trade remedy laws that were challenged. The WTO rejected at least half of the domestic agency determinations in most of the 21 cases dealing with such determinations. The WTO also rejected roughly the same proportion of U.S. and non-U.S. domestic determinations. The 21 rulings addressed issues ranging from whether domestic agencies adequately justified imposing a trade remedy measure to whether WTO members followed proper procedures in initiating the disputes. Regarding WTO rulings on members’ laws, only U.S. laws were challenged during the period. The WTO upheld more than three-quarters of the U.S. laws challenged in 9 cases involving 13 challenges. The WTO made findings on a total of 175 domestic agency determinations in 21 of the 25 trade remedy cases completed through 2002. 
As shown in figure 4, in 17 of the 21 cases the panels rejected 50 percent or more of the domestic agency’s determinations—rejecting all determinations in 5 cases. In all 21 cases, the WTO found at least one aspect of a measure to be inconsistent with WTO requirements. When comparing rulings among WTO members on domestic determinations, the United States and other WTO members fared similarly. Overall, as shown in figure 5, the WTO rejected almost the same proportion of the U.S.’s and other WTO members’ domestic determinations—57 percent and 56 percent, respectively. Although to date WTO members have challenged only U.S. laws, the WTO upheld a large majority of these laws. As shown in table 1, in the 13 instances (in 9 cases) in which WTO members directly challenged U.S. laws, the WTO upheld U.S. laws in 11 challenges and rejected U.S. laws in 2 challenges. Addressing why only U.S. trade remedy laws were challenged, a U.S. agency official said that U.S. laws tend to be more vulnerable because they are more detailed than those of other members, and their language is not the same as the language in the WTO agreements. In contrast, according to the official, some WTO members essentially take the language in the relevant WTO agreement and make it their law. The 25 WTO trade remedy rulings completed from 1995 through 2002 did not result in many changes to WTO members’ laws, regulations, or practices. However, the rulings more often resulted in the onetime revision to, or removal of, trade remedy measures. The rulings affected a number of U.S. laws, regulations, practices, and measures, but for other WTO members, no laws or regulations were affected, and only one practice was subject to change. Furthermore, fewer foreign trade measures were subject to removal or revision. Nonetheless, U.S. officials told us that the rulings to date had not significantly impaired their ability to impose trade remedies. 
However, they told us they were concerned about the potential for rulings to have a greater adverse impact in the future. In addition, U.S. agencies said that, with few exceptions, the rulings did not question U.S. methodologies for determining whether to impose remedies but have required them to provide fuller explanations and justifications for their decisions. WTO rulings resulted in a small number of changes to members’ laws, regulations, and practices, with all but one of those changes involving U.S. trade remedies. In the 14 completed trade remedy cases in which the United States was the defendant, two U.S. laws, one regulation, and three practices were changed or are subject to change, as shown in table 2. In the 11 cases involving other WTO members, only one practice was subject to change. Specifically, the two U.S. laws subject to change are a section of the Antidumping Act of 1916 and a section of the Tariff Act of 1930 involving calculation of the “all others” rate. In the 1916 Antidumping Act case, the WTO found the U.S. law to be in violation of GATT 1994 and the WTO Antidumping Agreement because it authorized imposing fines, imprisonment, and recovery of damages in response to the dumping of products in the U.S. market—remedies that are not provided for in those agreements. Legislation to repeal the 1916 Act has been introduced in both the U.S. Senate and the House of Representatives. The proposed change to the Tariff Act of 1930 involves making calculation of the “all others” rate consistent with the WTO Antidumping Agreement. The WTO granted the United States until the end of December 2003 to comply, but so far Congress has not addressed this change. The one change to a U.S. regulation stemmed from a case involving U.S. antidumping duties imposed on imports of Korean dynamic random access memory semiconductors (DRAMS). 
To implement the ruling, the United States replaced its regulatory standard for revoking an antidumping order—that dumping was “not likely” to occur—with the standard in the WTO Antidumping Agreement—that “continued imposition of the antidumping duty is necessary to offset dumping.” The three changes to U.S. practices involved a revision of the “arm’s-length” methodology in antidumping cases and two privatization methodologies that the Commerce Department used in countervailing duty cases to calculate the extent to which the benefit of past subsidies is passed on to private purchasers of state-owned enterprises. The United States revised its “arm’s-length” methodology to conform to the WTO Antidumping Agreement by expanding the scope of sales to an affiliated business that could be considered to be made in the ordinary course of trade. Commerce revised its countervailing duty methodology to conform to the Appellate Body’s first privatization decision, but the Appellate Body later ruled that the revised methodology was also inconsistent with the Subsidies and Countervailing Measures Agreement. Commerce revised its methodology a second time to reflect the Appellate Body’s finding that an arm’s-length, fair market value sale of a subsidized, state-owned entity to a private buyer creates a presumption that the privatized entity no longer benefits from past subsidies. Aside from the changes to U.S. laws, regulations, and practices, one case resulted in a change to an EU practice. In that case, the WTO ruled that the EU’s practice of “zeroing” was not permitted under the WTO Antidumping Agreement. Zeroing in that case concerned the EU’s changing negative dumping margins to zero when comparing dumping margins of different models of like products—for example, comparing dumping margins of high-end satin sheets with low-end polyester/cotton blend sheets. 
In contrast to the relatively few changes in members’ laws, regulations, and practices, most of the rulings in the 25 completed trade remedy cases involved a case-specific removal or revision of a WTO member’s trade remedy measure. More U.S. measures were affected than those of all other members. In the 14 completed cases brought against the United States, 21 U.S. trade measures were subject to revision or removal, while the 11 completed cases against other countries resulted in 7 trade measures being subject to revision or removal, as shown in table 2. Specifically, the United States reduced antidumping margins on measures in response to 3 WTO rulings, removed countervailing duty measures in 1 case as a result of domestic litigation, and is revising countervailing duty measures in 2 other cases. And in 3 cases, the United States removed, or allowed to expire, safeguard measures that the Appellate Body found inconsistent with the WTO Safeguards Agreement. By contrast, other WTO members removed antidumping measures in 3 cases and are due to remove or revise antidumping measures in 2 cases. In addition, other members removed safeguard measures as a result of 2 WTO rulings. While U.S. officials told us that WTO trade remedy rulings had not yet significantly impaired the U.S.’s fundamental right and ability to use its trade remedies, they are concerned about the rulings’ potential to do so in the future. For example, Commerce Department officials said that implementing the second Appellate Body ruling on privatization may have a substantial impact on similar proceedings in the future as well as existing countervailing duty orders. In addition, U.S. officials expressed concern about the potential negative ramifications of the WTO ruling in the EU bed linen case. First, U.S. 
officials said that although the United States did not change its “zeroing” practice as a result of the ruling against the EU, they noted that the ruling could affect a current Canadian dispute against the United States involving U.S. zeroing practices. Furthermore, the EU has recently challenged 21 Commerce Department antidumping determinations with regard to the U.S. zeroing practice. The EU alleged that U.S. application of its zeroing practice is inconsistent with the WTO Antidumping Agreement and GATT 1994. The EU also asserted that U.S. laws and regulations providing for this zeroing practice appear to be inconsistent with those agreements. As shown by this challenge, U.S. officials believe that when the WTO strikes down a practice, there is significant potential for WTO members to challenge similar practices of other members. Accordingly, these officials said they are monitoring WTO rulings and recommendations in cases not involving the United States in order to prepare for similar, potential challenges against the United States. In the safeguards area, U.S. officials indicated that some WTO rulings were confusing and extremely difficult to implement, particularly regarding certain aspects of causation—the extent to which increases in imports cause serious injury, or threaten serious injury, to domestic industry. U.S. officials also said that they have had to increase the level of detail they provide in explaining their analyses and how they apply their methodologies in safeguard investigations. For example, they cited safeguard rulings dealing with “nonattribution,” an aspect of causation requiring that injury to domestic industry caused by factors other than increased imports not be attributed to increased imports. U.S. officials said that these rulings could be viewed as calling for domestic agencies to quantify the amount of injury due to increased imports versus the amount due to other factors—a task they consider to be difficult, if not impossible. 
Moreover, the officials said they would now have to expend more resources in conducting safeguard investigations. WTO panels use two standards of review in evaluating the factual and legal determinations of WTO members’ domestic agencies in trade remedy cases. Article 11 of the WTO Dispute Settlement Understanding applies to all cases brought under the WTO dispute settlement system and calls for an objective assessment of domestic agency determinations. The Appellate Body has stated that in applying article 11, panels should not conduct a new review of domestic agency fact-finding nor totally defer to domestic agency determinations. Article 17.6 of the Antidumping Agreement applies only to antidumping cases and is more specific and deferential than article 11. Appellate Body guidance on article 17.6 calls for panels first to apply established international rules of treaty interpretation to interpreting provisions of the Antidumping Agreement before deciding whether to uphold a domestic agency’s interpretation. In the relatively few instances in which the Appellate Body has considered standard of review issues, it has found that panels have generally interpreted and applied both standards of review correctly. Finally, panel and Appellate Body decisions generally discuss the standards of review, but the extent of the discussion varies by trade remedy area, case, and issue. The standard of review that WTO panels and the Appellate Body apply in WTO dispute settlement cases refers to how they evaluate and defer to the factual and legal determinations of domestic agencies of WTO members. The two principal standards of review that WTO panels and the Appellate Body use to evaluate these determinations are article 11 of the WTO Dispute Settlement Understanding and article 17.6 of the WTO Antidumping Agreement. 
Article 11 applies to cases brought under all the WTO agreements that are covered by the dispute settlement system and supplements article 17.6 in antidumping cases. Article 17.6 only applies to cases brought under the Antidumping Agreement, which is the only WTO agreement that has a specific standard of review. Article 11 obligates a panel to make an “objective assessment of the matter before it, including an objective assessment of the facts of the case and the applicability of and conformity with the relevant” WTO agreement. The Appellate Body has interpreted this requirement to mean that panels should neither conduct a new review of domestic agency fact-finding, often referred to as a “de novo review,” nor totally defer to domestic agency determinations. In rejecting both these extremes, the Appellate Body has found that the panels are poorly suited to engage in new reviews and cannot ensure an objective assessment by totally deferring to domestic agency determinations. What the panels should do in safeguards cases, according to the Appellate Body, is ascertain whether domestic agencies have evaluated all relevant facts and provided an adequate, reasoned, and reasonable explanation about how the facts supported their determinations. Article 17.6 is more specific than article 11 and calls for more deference to domestic agency determinations. Article 17.6 is divided into two subparts—factual and legal—and establishes standards of review for panel evaluations of domestic agency determinations. Under the factual standard of review in article 17.6(i), panels must determine whether domestic agencies have properly established the facts and evaluated them in an unbiased and objective manner. When a panel finds that the domestic agency has performed this task, the panel cannot overturn the domestic agency’s determination even if it might have reached a different conclusion. 
The Appellate Body has stated that the panel’s obligation under the factual standard in article 17.6(i) closely reflects the obligation imposed on panels under article 11. Under the legal standard of review in article 17.6(ii), panels must apply established international rules in interpreting provisions of the WTO Antidumping Agreement. These rules are set forth in articles 31 and 32 of the Vienna Convention on the Law of Treaties and provide a method for interpreting provisions of the Antidumping Agreement. When a panel applies these rules and finds that there is more than one permissible way to interpret a provision of the Antidumping Agreement, the panel must uphold the domestic agency’s determination if it is consistent with one of the permissible interpretations. The Appellate Body’s guidance to panels about how they are to apply this standard is consistent with the sequence implied above. Thus, panels should first use the international rules to interpret the WTO provision in question, and only after completing this task should panels then decide whether to uphold the domestic agency’s legal determination. The Appellate Body has stated that application of the international rules could give rise to at least two permissible interpretations of some provisions of the Antidumping Agreement. WTO members did not often challenge panel interpretations and applications of the standards of review, and most challenges involved article 11. In most instances, the Appellate Body upheld the panels’ treatment of the standards. In the 14 instances in which the Appellate Body specifically ruled on panel interpretations and applications of the standard of review, it found that the panels had correctly addressed the standards in 11 instances—9 involving article 11 and 2 involving article 17.6. As indicated above, panels have the responsibility for applying the standards of review in articles 11 and 17.6 when evaluating determinations of WTO member domestic agencies. 
The Appellate Body’s function is to review how panels have interpreted and applied these standards and to uphold, modify, or reverse panel actions. For the most part, Appellate Body decisions in trade remedy cases have included longer and more detailed discussions of the standard of review than panel reports. Aside from differences between the panels and the Appellate Body, the extent to which standards of review are discussed varies by trade remedy area, case, and issue. Thus, standards of review are discussed, at least to some extent, in all safeguard and antidumping cases involving determinations of domestic agencies but are not mentioned in a number of countervailing duty cases. In many of the safeguard and antidumping cases, the panels discuss article 11 or article 17.6, respectively, at the beginning of the case, indicating that they are the standards of review to be applied in evaluating the domestic agency determinations involved, though the amount of introductory discussion varies from case to case. The standards of review are sometimes also discussed, or alluded to, later in panel and Appellate Body reports in connection with evaluations of particular domestic agency determinations. These allusions to the standards of review involve use of language from the standards themselves or interpretations of the standards rather than any specific mention of them. For example, in the safeguard cases, panels often invoke Appellate Body guidance about what kind of domestic agency explanation is necessary—an “adequate, reasoned, and reasonable explanation”—without mentioning article 11. Similarly, in antidumping cases, panels sometimes refer to the requirement in article 17.6(i) to conduct an “unbiased and objective” evaluation of domestic agency fact-finding without specifically mentioning 17.6(i). Finally, for some issues, panels neither specifically mention nor allude to standard of review provisions. 
How the WTO has interpreted and applied the standard of review in trade remedy cases and how it has resolved important trade remedy issues are highly controversial in the United States. Further, a number of these important trade remedy issues are highly complex, technical, and not easily explained, as evidenced by their lengthy treatment in WTO panel and Appellate Body reports. Accordingly, we decided to interview a wide range of WTO legal experts to obtain their views on these issues. The most common concern identified by the experts with whom we spoke, although a minority view, was about how the WTO was applying article 17.6(ii) in antidumping cases. Notwithstanding this concern, overall a majority of the experts believed that the WTO had not exceeded its authority in applying the standard of review in the trade remedy cases we reviewed. Commenting on more general issues surrounding the WTO trade remedy rulings, almost all of the experts believed that the United States and other WTO members have received the same treatment in trade remedy cases. In addition, a majority of the experts who responded concluded that WTO decisions generally have not added to obligations or diminished rights of WTO members and that it was appropriate for the WTO to interpret vague and ambiguous provisions in WTO agreements, sometimes referred to as “gap filling.” However, a significant minority of experts strongly disagreed with this view about WTO members’ obligations and rights and considered gap filling to be inconsistent with several provisions of the Dispute Settlement Understanding. Regarding specific rulings, a number of experts cited some safeguard rulings as confusing and unclear. In contrast to the majority views expressed above, the U.S. 
agencies most involved in trade remedy activities believed that article 17.6(ii) has been improperly applied in some trade remedy cases, mainly because the WTO has not applied article 17.6(ii) in a way that allows for upholding permissible interpretations adopted by WTO members’ domestic agencies. They also believed that in certain trade remedy cases, the WTO has found obligations and imposed restrictions on WTO members that are not supported by the texts of the WTO trade remedy agreements. A common concern raised by a significant minority of experts with whom we spoke was that the WTO was not properly applying the legal standard of review in article 17.6(ii) of the Antidumping Agreement. Specifically, these experts maintained that Appellate Body guidance calling for panels to first apply international rules in the Vienna Convention on the Law of Treaties to interpret provisions of the Antidumping Agreement before they evaluate the domestic agencies’ legal determinations necessarily leads to only one interpretation. Consequently, panels never reach the point of applying the part of article 17.6(ii) that allows for multiple permissible interpretations and upholding an agency determination that is based on one of these interpretations. In fact, while several experts mentioned specific rulings in which panels or the Appellate Body had upheld domestic agency determinations as permissible, it was unclear whether this was due to these bodies going through the article 17.6(ii) analysis or solely because they agreed with the domestic agency. In this regard, in the trade remedy cases we reviewed, no expert pointed to a clear instance in which a panel first applied the Vienna Convention, found several permissible interpretations, and then upheld the agency determination because it was consistent with one of them. One expert, who was a former U.S. negotiator in the Uruguay Round, stated that U.S. 
negotiators in the round had not fully appreciated how application of the Vienna Convention would limit the possibility of panels or the Appellate Body finding multiple permissible interpretations of the Antidumping Agreement. Some experts also believed that panels and the Appellate Body have not applied the legal standard of review in article 17.6(ii) in the deferential way intended by the United States, as expressed in the U.S. Statement of Administrative Action (SAA) accompanying the U.S. Uruguay Round Agreements Act. The SAA describes article 17.6 as a special standard of review analogous to the deferential standard applied by U.S. courts in reviewing actions by the Commerce Department and the ITC, commonly referred to as the Chevron standard. Thus, from the U.S. perspective, article 17.6 was intended to ensure that WTO panels neither second-guess the factual conclusions of domestic agencies, even when panels might have reached a different conclusion, nor rewrite, under the guise of legal interpretation, the provisions of the Antidumping Agreement. Despite the concerns expressed above, the majority of the experts with whom we spoke indicated that the panels and the Appellate Body generally had not exceeded their authority in applying the standards of review in articles 11 and 17.6 in the trade remedy cases we reviewed. These experts indicated that panels and the Appellate Body had properly applied article 11 in safeguards and countervailing duty cases as well as the factual standard of review in article 17.6(i) in antidumping cases. Several of this group even questioned whether article 11 was intended to be a standard of review provision at all and argued that, even if it was, it did not call for the same level of deference as article 17.6.
Majority support for how panels and the Appellate Body applied the legal standard in article 17.6(ii) included experts who thought the panels and the Appellate Body had generally applied the article correctly and provided the right amount of deference, those who believed the article was not particularly deferential, and those who considered the article to primarily set forth a method for interpreting provisions of the Antidumping Agreement rather than for conferring deference. Finally, a number of experts, including a few with divergent opinions about whether the legal standard in article 17.6(ii) had been properly applied, stated that evaluation of panel and Appellate Body decisions should focus on their substantive rulings and not the technical issue of standard of review. A majority of experts also maintained that the United States was not successful in getting the standard of review it wanted in the Antidumping Agreement and that the SAA only expresses the U.S.’s view about the intent of article 17.6. They pointed out that while the United States was the main proponent for having a strongly deferential standard included in the Antidumping Agreement, numerous WTO members opposed the United States on this issue. Although the experts agreed that the lack of written negotiating history makes it difficult to determine how much deference article 17.6 was intended to provide, a large number believed that the language that was ultimately agreed to did not include the Chevron standard. Experts with markedly divergent views on other issues were in near unanimous agreement that the United States generally was being treated about the same as other WTO members in trade remedy cases. 
Although several experts pointed out that the United States was the most frequent defendant and was losing more often than other WTO members, they believed that the panels and the Appellate Body had ruled against other WTO members with the same frequency and in the same or similar manner as they had against the United States. Several experts also were emphatic in describing the WTO as a plaintiff’s court in trade remedy cases and pointed out that in nearly all trade remedy decisions and all the safeguards decisions we reviewed, respondents were asked to take some action—for example, to ensure that a safeguard measure was applied consistent with the Safeguards Agreement. When asked why respondents usually lose trade remedy cases, some experts cited a WTO free trade bias or bias against trade remedies as the principal reason. Several others said that WTO members only bring trade remedy actions in the WTO that they are confident they can win. As to why the United States was the most frequent defendant in trade remedy cases, several experts mentioned the fact that the United States was the biggest market as well as the biggest user of trade remedies. In addition, several experts believed that some of the Commerce Department’s decisions to impose trade remedy measures were unfounded. A majority of experts who responded to this issue agreed that panels and the Appellate Body generally have not added to the obligations or diminished the rights of the United States and other WTO members in trade remedy cases. They believed panels and the Appellate Body generally had ruled appropriately in these cases, including the rulings on issues that the experts cited most frequently as being important and controversial—zeroing, facts available, nonattribution, unforeseen developments, and privatization.
A number of these experts believed that the panels and the Appellate Body had both the authority and the need to interpret vague or ambiguous provisions, or to fill gaps, in the trade remedy agreements when no provision clearly deals with an issue. A number also cited article 3.2 of the Dispute Settlement Understanding, which calls for dispute settlement to “clarify the . . . provisions of the Agreements,” as support for panel and Appellate Body interpretations of vague or ambiguous provisions. Furthermore, a number stated that it is a common and accepted practice for courts to interpret vague or ambiguous provisions of laws and agreements, or to fill gaps, when the meaning of a legal provision is unclear. A significant minority of experts, however, strongly believed that panel and Appellate Body findings on a number of important issues, including those listed above, had added to obligations or diminished the rights of the United States and other WTO members. For example, some in this group believed that panels or the Appellate Body should have upheld the domestic agency determinations on the antidumping issues of zeroing, facts available, and nonattribution as permissible under the legal standard of review in article 17.6(ii). In addition, they contended that gap filling was prohibited by articles 3.2 and 19.2 of the Dispute Settlement Understanding, both of which preclude the Dispute Settlement Body from adding to obligations or diminishing the rights of WTO members as provided in the WTO agreements covered by dispute settlement. Furthermore, they believed that the WTO had engaged in improper gap filling in its rulings regarding the aforementioned issues, including privatization. They said that WTO provisions on these issues were unclear and that privatization was not specifically referred to in the Subsidies and Countervailing Measures Agreement. 
Finally, some experts concluded that it was improper for the panels and the Appellate Body to rule on issues that the negotiating members had intentionally left unclear. They believed that the proper way to deal with vague and ambiguous language in the WTO agreements was through additional negotiations rather than through panel or Appellate Body rulings. A substantial number of experts stated that WTO rulings on the safeguard issues of causation and unforeseen developments were confusing and difficult to follow. This group included experts with sharply divergent views on other trade remedy issues. Specifically, these experts believed that the lack of clarity in the rulings on the causation issue of nonattribution has made it difficult for domestic agencies to implement the rulings. Some in this group were concerned that the rulings seemed to require a quantitative analysis of each factor causing serious injury to domestic industry to ensure the factors were not being improperly attributed to increased imports, and several questioned whether domestic agencies could perform this kind of analysis. The experts also had concerns about how domestic agencies could implement the Appellate Body rulings on the issue of unforeseen developments. Specifically, they were unsure how WTO members would show that increased imports causing serious injury resulted from developments they had not foreseen when they made tariff concessions or assumed other obligations under GATT. A few experts were surprised that the Appellate Body had resurrected the GATT requirement on unforeseen developments, which they thought had been abandoned and had not been specifically included in the Safeguards Agreement. In its December 2002 report to Congress, the executive branch concluded that, overall, the United States had fared well in WTO dispute settlement, including in a number of trade remedy cases. 
Nevertheless, the report raised concerns about how the WTO had applied standard of review in trade remedy cases and stated that some rulings were troubling in “their failure to recognize that agreement terms may be susceptible of multiple, reasonable interpretations among which WTO members may properly choose.” The report specifically criticized the Appellate Body ruling in United States—Antidumping Measures on Certain Hot-Rolled Steel Products from Japan for how it had applied the legal standard of review in article 17.6(ii). The executive branch report also stated that in certain trade remedy cases, the WTO had found obligations and imposed restrictions on WTO members that were not supported by the texts of the WTO agreements. The report mentioned the rulings on facts available, unforeseen developments, nonattribution, and several others as examples. The report qualified these criticisms by stating that not all of the WTO findings it cited were based on a problematic analytical approach and that, even had the proper approach been used, the WTO would not necessarily have found in favor of the United States. Nevertheless, the report emphasized that the problematic findings were troubling due to their lack of grounding in the texts of the negotiated agreements. During the course of our work, the Commerce Department and ITC officials reiterated these concerns. ITC officials indicated that they do not agree that the WTO has properly applied standard of review in trade remedy cases. Specifically, they stated that the WTO has applied article 17.6(ii) of the Antidumping Agreement in a manner that raises a question about whether the second sentence of the provision, requiring the WTO to uphold domestic agency determinations that rest on permissible interpretations of the Antidumping Agreement, has real meaning. In these officials’ view, the WTO has not allowed for more than one permissible interpretation of the relevant provisions.
In this regard, the United States recently proposed that article 17.6 be considered as a topic for discussion in the Negotiating Group on Rules in the ongoing WTO negotiations. In its submission, the United States stated that panels and the Appellate Body have not accepted WTO members’ reasonable, permissible interpretations of the Antidumping Agreement. ITC officials also stated that in some instances, the Appellate Body had ruled incorrectly on important issues and created new obligations, which do not appear in and are unsupported by the plain language of the relevant agreements. One example involved the Appellate Body findings on the nonattribution provision of the Safeguards Agreement. The ITC also found it particularly significant that the WTO had enunciated systemic requirements for this issue, as well as unforeseen developments, even though they are not specifically covered by U.S. law. We requested comments on a draft of this report from the Secretary of Commerce, the Chairman of the U.S. International Trade Commission, and the U.S. Trade Representative (USTR). The Commerce Department and the ITC provided written comments, which are reprinted in appendixes IV and V. We obtained oral comments from USTR officials, including the Assistant U.S. Trade Representative for Monitoring and Enforcement. The Commerce Department had three areas of concern regarding our report. First, it emphasized the potential future impact of WTO trade remedy rulings on the U.S.’s ability to impose trade remedies, noting that this potential is far more significant than these rulings’ limited impact to date. Commerce cited, in particular, the possible negative ramifications of two WTO rulings. Specifically, it said that the ruling on privatization could impact a significant number of U.S. countervailing duty orders, and that as a result of the EU bed linen ruling, the EU has recently challenged more than 20 U.S. antidumping investigations and reviews. 
As a result of this increased emphasis, we modified the sections of this report that present U.S. agency views on the potential future ramifications of WTO decisions on the U.S.’s ability to impose trade remedies. Second, Commerce raised concerns regarding the composition of the group of legal experts we consulted and our characterization of their views as “majority” and “minority.” However, we believe that our methodology for selecting these experts was sound (see app. I). In addition, we believe that our report sufficiently addresses the concerns of the minority of experts. Nevertheless, we have made modifications to the relevant sections of our report to ensure that majority positions and minority concerns are presented in a balanced manner. Finally, Commerce expressed concern that we did not adequately address the executive branch’s views on the WTO’s application of standard of review and other trade remedy issues. As a result, we modified our report to give more prominent treatment to U.S. agency positions. The ITC had two main areas of concern regarding the report. First, the ITC said that the report understated the effect of WTO rulings on the ability of the United States to impose and maintain trade remedy measures because the full effect of those rulings likely has not yet been realized, citing for example several systemic WTO requirements for safeguard determinations. In response to this comment as well as a similar comment from the Commerce Department, we modified the relevant sections of the report as discussed above and used examples that the ITC cited. Second, the ITC did not agree that WTO panels and the Appellate Body have properly applied the standard of review in article 17.6(ii) of the Antidumping Agreement. In response to this concern, we have incorporated the ITC’s views in our report. In addition, we obtained technical comments from the Commerce Department and the ITC, which we have incorporated into the report as appropriate.
For example, Commerce noted that we had included challenges to WTO members’ sunset reviews in some of our statistics on trade remedy measures. As a result, we eliminated the sunset review challenges from our statistics. USTR provided technical comments such as clarification of certain terminology. For example, USTR noted that the term “domestic determination” usually connotes a final decision by the appropriate agency as to whether dumping has occurred or whether increased imports have caused injury or are threatening injury to domestic industry. Accordingly, we clarified our definition in this report and made other technical changes as appropriate. USTR also noted that U.S. trade remedy measures had been challenged more frequently than those of other WTO members in part because U.S. trade remedy laws and investigations are more transparent. We have added this point to our report. We are sending copies of this report to interested congressional committees, the U.S. Trade Representative, the Secretary of Commerce, and the Chairman of the U.S. International Trade Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VI. The Ranking Minority Member of the Senate Committee on Finance asked us to conduct a review of the World Trade Organization’s (WTO) dispute settlement activity during the past 8 years, focusing on trade remedy disputes. 
Specifically, in this report we (1) identified the major trends in WTO dispute settlement activity concerning trade remedies; (2) analyzed the outcome of WTO rulings in completed trade remedy cases; (3) assessed the major impacts of these rulings on WTO members’ laws, regulations, and practices and on their ability to impose trade remedies; (4) identified the standards of review for trade remedy cases and Appellate Body guidance on how they should be applied; and (5) summarized legal experts’ views and U.S. agencies’ positions on standard of review and other trade remedy issues. To identify the major trends in dispute settlement activity during the last 8 years, we developed a database containing all members’ requests for consultation (complaints) filed from 1995 through 2002. We obtained the data for the database from the WTO Web site, including data on each request for consultation; data on the complainant(s), defendant, and complaint date; and a short title. To determine which disputes related to trade remedies, we examined the short titles of the cases; the initial complaint filed with the WTO; and WTO documents, including the Update of WTO Dispute Settlement Cases, January 2003. Our analysis of trade remedy cases focused exclusively on cases brought under the WTO trade remedy agreements—the Antidumping Agreement, the Agreement on Safeguards, the Subsidies and Countervailing Measures Agreement, and parts of the General Agreement on Tariffs and Trade 1994. To obtain the number 198 for formal dispute settlement cases filed with the WTO from 1995 through 2002, we combined multiple complaints against one WTO member on the same law, measure, or action into one distinct case for the purposes of our analysis. We did this because multiple WTO members can file complaints against one member. For example, 9 WTO members filed complaints regarding 1 U.S. steel safeguard measure imposed in March 2002. 
As a result, the 276 separate complaints filed from 1995 through 2002 resulted in 198 distinct cases. To determine which WTO members imposed the most trade remedy measures from 1995 through 2002, we used WTO data that were based on the notifications filed with the WTO by each member. In response to agency comments, we excluded challenges to WTO members’ sunset reviews from our data on trade remedy measures. For antidumping and countervailing duty measures, we used summary data that the WTO Secretariat compiled. Department of Commerce officials noted that these WTO data differ from Commerce’s data on U.S. antidumping and countervailing measures and recommended that we use Commerce data. However, because the WTO is the only source of comparable data on the use of trade remedy measures by all WTO members, we ultimately used the WTO data. For safeguards, we analyzed the information contained in the annual reports of the WTO Committee on Safeguards. These reports included information on both preliminary and definitive safeguard measures imposed. To analyze the outcome of WTO rulings in the completed trade remedy cases, we compiled statistics on panel and Appellate Body findings about whether domestic agency determinations and members’ laws were found to be consistent or inconsistent with WTO trade remedy provisions. We defined “completed” cases as those cases in which the Dispute Settlement Body had adopted a panel or Appellate Body decision as of December 31, 2002. To analyze WTO findings about domestic determinations, for the most part, we reviewed the concluding findings at the end of the panel and Appellate Body reports. When several findings were included within a single paragraph in the concluding findings, we generally counted each finding separately. In the several instances in which concluding sections of panel reports did not clearly indicate these findings, we obtained our numbers by evaluating the full reports.
For our statistics on findings about domestic agency determinations, we did not distinguish between more important issues—such as the causal relationship between increased imports and injury to domestic industry—and those that seemed less important—for example, notification requirements and certain evidentiary issues. To analyze direct challenges to members’ laws in the completed cases, we analyzed the full panel and Appellate Body reports. To assess the major impacts of the WTO rulings in the completed trade remedy cases on members’ laws, regulations, and practices, and on their ability to impose trade remedies, we identified compliance actions taken, or in the process of being taken, by WTO members as a result of the rulings. First, we consulted the WTO Web site to find any and all official documents filed in the completed trade remedy cases. WTO members and relevant parties in the cases file such documents with the WTO to report actions taken following the rulings and recommendations of adopted panel and Appellate Body reports. Alternatively, some documents indicate only agreements between the relevant parties for compliance actions to be taken, or the status of any ongoing negotiations regarding compliance. For cases where official documentation regarding compliance actions was not found on the WTO Web site, we searched the Dispute Settlement Body archives. We also consulted U.S. agency officials on the one case in which the United States was the complainant. For the cases in which the United States was the defendant, we also consulted officials from the Commerce Department, the U.S. International Trade Commission (ITC), and the U.S. Trade Representative (USTR). These officials provided us the most up-to-date information on the status of bilateral negotiations and U.S. intentions for certain completed cases where compliance information was not yet publicly available. 
In addition, we monitored congressional Web sites to glean information on the status of legislation in cases involving challenges to U.S. laws. Finally, we obtained copies of the changes to one U.S. regulation and two established practices from the Federal Register. For cases not involving the United States, for the most part, we did not consult with foreign government officials. We relied primarily on official documents that WTO members and relevant parties had filed with the WTO to report their compliance actions and on pertinent comments from U.S. agency officials. To identify the WTO standards of review for trade remedy cases, we analyzed the standards and obtained the views of legal experts, including practitioners and academics (see below). To identify how the panels and the Appellate Body were interpreting and applying the standards, we read the panel and Appellate Body reports for the trade remedy cases completed from 1995 through 2002 as well as Appellate Body reports for other relevant WTO dispute settlement cases. In reading these reports, we identified Appellate Body guidance on how the standards should be applied. Finally, we also read the provisions of the Vienna Convention on the Law of Treaties that the Appellate Body had identified as pertinent to how one of the standards should be applied. To obtain and summarize legal experts’ views on WTO standard of review and other trade remedy issues, we conducted structured interviews with 18 legal experts, including practitioners, academics, and advisers on WTO-related trade remedy issues. In addition, we interviewed a current WTO official and a European Union (EU) official; however, in response to agency comments, we reviewed our decision rule on the composition of our expert group and excluded the WTO official and the EU official from our discussion of expert views, since we did not include U.S. agency officials in this group.
To identify the legal experts for our study, we conducted literature searches, read formal publications on WTO standard of review and trade remedies, sought recommendations from other experts and the International Trade Committee of the American Bar Association, and attended seminars on issues surrounding standard of review and trade remedies. Our main criteria for selecting the experts for our study were that they (1) had past experience with WTO trade remedy cases; (2) had been active in writing and/or speaking about issues pertaining to WTO dispute settlement, including standard of review and trade remedies; and (3) constituted a mix of experts representing or affiliated with U.S. domestic interests, foreign interests, or both. We did not choose experts on the basis of their expressed views, because we did not believe that this was methodologically sound. To obtain the views of the experts, we conducted structured interviews to ensure that we asked all of the experts the same questions. We coded the answers to key survey questions to help us analyze the experts’ views and assess the frequency with which particular views were held. To write the case summaries, we consulted the WTO Web site and reviewed the panel and Appellate Body reports for the 25 completed trade remedy cases. We also reviewed the dispute settlement commentaries on the www.WorldTradeLaw.net Web site. We performed our work from September 2002 to July 2003 in accordance with generally accepted government auditing standards. Between the inception of the World Trade Organization (WTO) in 1995 and December 31, 2002, the WTO ruled on 25 cases involving the trade remedies of antidumping, countervailing duties (CVD), and safeguards. Table 3 lists the cases in order of their WTO dispute case number. It is followed by a brief summary of each case that includes information on the case's outcome and major issues. 
In June 1994, Brazil initiated a countervailing duty (CVD) investigation to determine whether imports of desiccated coconut and coconut milk from Côte d’Ivoire, Indonesia, Malaysia, the Philippines, and Sri Lanka had been subsidized. Brazil imposed provisional CVDs on imports of desiccated coconut from all of these countries except Malaysia in March 1995 and final CVDs in August 1995. The Philippines challenged the Brazilian CVDs under various provisions of the General Agreement on Tariffs and Trade 1994 (GATT 1994) and the World Trade Organization (WTO) Agreement on Agriculture. Brazil’s principal argument was that none of the WTO provisions relied upon by the Philippines applies in this case because the Brazilian subsidy investigation was initiated on the basis of an application received prior to the date the WTO Agreement entered into force. The Appellate Body upheld the panel finding that GATT 1994 provisions on CVD investigations did not apply because this dispute involved application of a Brazilian CVD measure based on an investigation initiated prior to January 1, 1995—the date on which the WTO Agreement entered into effect. Accordingly, the Appellate Body upheld the panel’s finding that the dispute was not properly before it. No compliance action was necessary. Mexico challenged both the initiation of Guatemala’s antidumping investigation of imports of grey portland cement from Mexico and various decisions and conduct of the Guatemalan domestic authority during the investigation. Guatemala’s principal claim was that Mexico’s panel request did not identify any of the three measures listed in article 17.4 of the Antidumping Agreement (ADA), and therefore the panel should not hear the claim. The panel found that Guatemala had failed to comply with article 5.3 of the ADA by initiating the antidumping investigation on the basis of insufficient evidence of dumping, injury, and causal link between dumping and injury.
The panel also found that the matters referred to in Mexico’s request for establishment of a panel were properly before it. The Appellate Body reversed the panel and determined that the dispute was not properly before the panel because Mexico’s panel request did not identify the measure it was complaining about. Consequently, it did not consider the panel’s findings on article 5.3. After the Appellate Body effectively dismissed this case, Mexico brought the case again with a new panel request (see our case summary 11 of Guatemala – Definitive Antidumping Measures on Grey Portland Cement from Mexico, DS 156). The new panel considered many of the same issues that were involved in this case.

European Union (EU)

The EU challenged Korea’s imposition of a safeguard measure on imports of skimmed milk powder preparations from the EU. The safeguard measure was in the form of a quantitative restriction on imports of these dairy products. The EU argued that Korea’s safeguard measure was inconsistent with various provisions of the Safeguards Agreement as well as article XIX:1 of GATT 1994. Generally, the EU contended that Korea had not shown that increases in imports resulted from “unforeseen developments,” had not examined all factors in its examination of serious injury, and had not adequately considered the extent of application of the safeguard measure. The Appellate Body upheld several panel findings that Korea had acted inconsistently with the Safeguards Agreement because of its determinations regarding serious injury. The Appellate Body also reversed a panel finding on the issue of “unforeseen developments.” Accordingly, it recommended that Korea bring its safeguard measure into conformity with the Safeguards Agreement. Korea reported to the WTO that it had effectively terminated the safeguard measure on imports of the dairy products on May 20, 2000.
By lifting the safeguard measure, Korea considers that it has implemented the recommendations and rulings of the Dispute Settlement Body (DSB). Korea challenged the U.S.’s failure to revoke an antidumping order on Korean dynamic random access memory semiconductors (DRAMS) of one megabyte or above. Korea contended that the U.S. regulatory standard under which it refused to revoke the antidumping order with respect to two Korean producers violated the ADA. Korea also challenged the Department of Commerce’s rejection of certain cost information and its application of the de minimis standard during the administrative review of the antidumping order. The panel found that the U.S. regulatory standard for revoking an antidumping order was inconsistent with the ADA. However, the panel also upheld several aspects of the U.S.’s application of its antidumping laws. The panel recommended that the DSB request that the United States bring its regulatory standard for revoking an antidumping order, and the results of its third administrative review, into conformity with its obligations under the ADA. The parties did not appeal the panel findings. The United States took several compliance actions as a result of the panel’s findings. The United States deleted the “not likely” criterion from its regulation and replaced it with a requirement that the Secretary of Commerce consider “whether the continued application of the antidumping duty order is otherwise necessary to offset dumping.” Using this modified standard, the United States found that the continued application of the dumping order was necessary to offset dumping and, accordingly, did not revoke the antidumping order. Korea asserted that these actions failed to comply with the DSB’s recommendations and rulings. During the compliance panel proceeding, however, the United States revoked the antidumping order as a result of the U.S. sunset review process, primarily because the petitioner withdrew from the proceeding. 
The United States and Korea then notified the DSB of a mutually agreed-upon solution to the dispute, and the compliance panel proceeding was terminated.

European Union (EU)

The EU challenged Argentina’s imposition of safeguards on imports of EU footwear. The safeguard measure took the form of minimum specific duties on these imports. For several years prior to this EU challenge, Argentina had maintained various measures regarding imports of footwear and other clothing and textiles. The EU contended that the safeguard measure violated article XIX:1(a) of GATT 1994 and various provisions of the Safeguards Agreement. The Appellate Body upheld panel findings that Argentina’s safeguard investigation and determinations of increased imports, serious injury, and causation were inconsistent with articles 2 and 4 of the Safeguards Agreement, and thus there was no legal basis for applying safeguards. As a result, it recommended that the DSB request that Argentina bring its safeguard measures into conformity with its obligations under the Safeguards Agreement. Argentina indicated to the WTO in February 2000 that it intended to implement the DSB’s rulings and recommendations.

Poland challenged Thailand’s imposition of antidumping duties on imports of certain Polish steel products. The final antidumping duty was a percentage of a determined value of these products. Poland contended that Thailand’s injury and dumping determinations were inconsistent with various provisions of the ADA. The Appellate Body affirmed the panel’s findings that Thailand had violated the ADA with regard to Thailand’s findings about injury to the domestic industry and the causal relationship between dumped imports and injury to the domestic industry. Although the Appellate Body also upheld the panel’s application of the standard of review in article 17.6(ii) of the ADA, it reversed a panel interpretation of article 17.6(i).
As a result of these rulings, the Appellate Body recommended that the DSB request that Thailand bring its antidumping measure into conformity with its obligations under the ADA. Thailand reexamined aspects of the injury determination that the panel and Appellate Body had found to be inconsistent with the ADA and found that the antidumping measure should be maintained. Subsequently, in December 2001, Thailand informed the WTO that it had fully implemented the DSB’s recommendations. In January 2002, Poland and Thailand announced they had reached agreement that this case should no longer be on the DSB’s agenda.

The United States challenged Mexico’s imposition of antidumping duties on imports of two grades of high-fructose corn syrup (HFCS) from the United States. The final antidumping measure imposed duties of up to $175.50 per metric ton of imported HFCS and ordered the collection of duties retroactive to the imposition of provisional duties. The United States contended that both the initiation of the antidumping investigation and the determination of threat of injury were inconsistent with the ADA. Although the panel upheld the way in which Mexico initiated its antidumping investigation, it concluded that Mexico’s imposition of the antidumping measure was inconsistent with various provisions of the ADA. As a result, the panel recommended that the DSB request that Mexico bring its antidumping measure into conformity with its obligations under the ADA. The panel findings were not appealed. Mexico revised its antidumping determination following the panel report. However, in a subsequent proceeding, Mexico again concluded that the imports of HFCS constituted a threat of material injury to the domestic sugar industry. As a result, the United States requested a compliance review under article 21.5 of the DSU. In the article 21.5 proceeding, the Appellate Body upheld panel findings that Mexico’s revised determination was inconsistent with various provisions of the ADA.
According to U.S. officials, Mexico revoked the antidumping measure in May 2002.

Japan and the EU separately challenged section 801 of the Revenue Act of 1916 (1916 Act) as being inconsistent with article VI of GATT 1994 and various provisions of the ADA. Section 801 of the 1916 Act allows for private claims against, and criminal prosecutions of, parties that import or assist in importing goods into the United States at a price substantially less than actual market value or wholesale price. Japan’s and the EU’s challenges were to the law itself rather than to its implementation. The Appellate Body affirmed the panel conclusions that antidumping legislation, including the 1916 Act, can be directly challenged, absent any particular application. It also upheld the panel findings that the 1916 Act itself was inconsistent with article VI of GATT 1994 and various provisions of the ADA. Accordingly, the Appellate Body recommended that the United States bring the 1916 Act into conformity with its obligations under these agreements. The United States continues to work to enact legislation to implement the WTO ruling. Although a number of bills have been introduced in the Congress calling for repeal of section 801 of the 1916 Act, to date no legislation has been passed. As of July 15, 2003, the latest bills were H.R. 1073, introduced in the House of Representatives on March 4, 2003; S. 1080, introduced in the Senate on May 19, 2003; and S. 1155, introduced in the Senate on May 23, 2003. The bills are somewhat different in that the repeals under H.R. 1073 and S. 1155 would not affect pending cases, whereas the S. 1080 repeal would apply to them.

European Union (EU) (DS 138)

The United States imposed CVDs on imports of certain hot-rolled lead and bismuth carbon steel products originating in the United Kingdom, as a result of alleged subsidies the British Government granted to British Steel Corporation, a state-owned company, between 1977 and 1986.
The British Government began the privatization of British Steel in 1986 and completed it in 1988. The Commerce Department found the sale to be at arm’s length for fair market value and consistent with commercial considerations. Notwithstanding these factors, the Commerce Department imposed CVDs on these United Kingdom imports, initially in 1993 and in subsequent annual reviews, on the grounds that a certain proportion of the subsidies granted to British Steel had passed through to the new entities. The EU claimed that the U.S. methodology in calculating the amount of these subsidies was inconsistent with several provisions of the WTO SCM Agreement. The Appellate Body upheld the panel finding that the financial contributions provided to British Steel did not confer a benefit on the new owners. In doing so, the Appellate Body also upheld a panel finding that faulted the Commerce Department’s methodology in presuming that a benefit had been provided to the new owners. Accordingly, it found that the U.S. CVDs were inconsistent with the SCM Agreement and recommended that the DSB request that the United States bring its measures into conformity with its obligations under that agreement. The panel suggested that the United States take all appropriate steps, including revision of its administrative practices, to prevent a violation of the SCM Agreement, but the Appellate Body did not make this specific recommendation. Prior to the issuance of the Appellate Body report, the Commerce Department revoked the CVD measure in response to a request from the U.S. industry. However, the Commerce Department changed its methodology as a result of related domestic litigation.

European Union (EU)

India challenged the EU’s imposition of antidumping duties on imports of various types of cotton bed linens from India.
Due to the high number of domestic producers involved in its investigation, the EU established a sample of domestic producers consisting of 17 of the 35 companies identified as the EU industry. The dumping duties that were imposed differed in amount depending on the exporter in question. India argued that the imposition of antidumping duties was inconsistent with various provisions of the ADA. One of the principal issues involved the EU’s practice of zeroing in calculating antidumping margins. The Appellate Body affirmed the panel’s finding that the EU’s practice of zeroing was inconsistent with the ADA. The Appellate Body also reversed several panel findings and concluded that the EU had acted inconsistently with the ADA in calculating amounts for administrative, selling, and general costs and profits in its investigation. As a result, the Appellate Body recommended that the DSB request that the EU bring its antidumping measure into conformity with its obligations under the ADA. After the DSB adopted the Appellate Body report, the EU established lower dumping margins for Indian imports of bed linens. Although it also concluded that dumped imports from India were still causing material injury to the EU industry, the EU suspended application of the duties for these imports. In a subsequent proceeding, the EU determined that there was a causal link between dumped imports from India and material injury to the EU industry, but the EU continued to suspend imposition of the antidumping duties. Because India believed that the EU had not complied with the recommendations of the DSB, it brought a proceeding under article 21.5 of the DSU contesting compliance. Although the panel in the article 21.5 proceeding determined that the EU had implemented the recommendation of the DSB, the Appellate Body reversed and found that the EU was still acting inconsistently with the ADA. 
Accordingly, it recommended that the DSB request that the EU bring its antidumping measure into conformity with that agreement.

In 1999, Mexico challenged Guatemala’s imposition of an antidumping measure on imports of portland cement from Mexico. The measure was in the form of an antidumping duty of 89.54 percent that was imposed on these imports. In its challenge, Mexico contended that the initiation and conduct of the antidumping investigation and the imposition of the measure violated article VI of GATT 1994 and various provisions of the ADA. Mexico’s challenge was a follow-up to an earlier Mexican challenge to Guatemala’s imposition of antidumping duties on imports of the same product (see case summary 2). Although the panel in that case ruled that Guatemala had acted inconsistently with several provisions of the ADA and recommended that Guatemala revoke the dumping order, the Appellate Body reversed the panel and found that the dispute was not properly before the panel. In this case, the panel found that Guatemala did not properly determine that there was sufficient evidence to justify initiation of the antidumping investigation. It also found that Guatemala did not properly determine that the imports under investigation were being dumped, that the domestic producer of cement in Guatemala was being injured, or that the imports were the cause of the injury. Accordingly, it concluded that Guatemala had acted inconsistently with various provisions of the ADA. Under the authority provided in article 19.1 of the DSU, the panel recommended that Guatemala revoke its antidumping measure on these imports. However, the panel rejected a Mexican request that the panel recommend that Guatemala refund previously collected antidumping duties. The panel findings were not appealed. In December 2000, Guatemala informed the WTO that it had removed the antidumping measures in question and complied with the DSB’s recommendations.
European Union (EU)

The EU challenged a United States safeguard measure imposed on imports of wheat gluten from the EU. The safeguard measure consisted of a quantitative restriction on these imports for 3 years. The United States excluded products from Canada, a U.S. NAFTA partner, and from certain other WTO members from the application of the safeguard. The EU contended that the safeguard measure violated provisions of the Safeguards Agreement and GATT 1994. The EU complaints were directed at the U.S.’s serious injury determination, its causation analysis, and its findings about the relationship between the members included in its investigation and those covered by the safeguard measure. The Appellate Body found that the U.S.’s safeguard measure was applied inconsistently with the Safeguards Agreement and GATT 1994 and recommended that the DSB request that the United States bring the measure into conformity with those agreements. Although the Appellate Body upheld part of the panel findings on serious injury, it reversed the panel findings on another serious injury issue and on an important aspect of the panel’s causation analysis. In addition, the Appellate Body agreed with the panel that the United States inappropriately excluded imports from Canada from its safeguard measure after including such imports in its injury investigation. The safeguard measure expired in June 2001.

Australia and New Zealand challenged a U.S. safeguard measure imposed on imports of fresh, chilled, and frozen lamb meat from New Zealand and Australia. The measure was in the form of a tariff rate quota that was to span 3 years. The safeguard measure did not apply to imports from Canada, Mexico, certain other U.S. trading partners, and developing countries. Australia and New Zealand contended that the safeguard measure violated various provisions of the Safeguards Agreement and GATT 1994. Their complaints were directed at U.S.
findings about the definition of the domestic lamb meat industry, the existence of serious injury, and the causal relationship between increased imports and injury to the domestic lamb meat industry. The Appellate Body found that the United States safeguard measure was applied inconsistently with the Safeguards Agreement and GATT 1994 and recommended that the DSB request that the United States bring its measure into conformity with those agreements. Although the Appellate Body upheld a number of important panel findings—including those involving the definition of the domestic lamb meat industry, serious injury, and a part of the panel’s causation analysis—it reversed the panel’s interpretation of the causation requirements in the Safeguards Agreement. The Appellate Body also concluded that the panel incorrectly applied the standard of review in article 11 of the DSU in evaluating the U.S.’s determination about the existence of a threat of serious injury. In August 2001, the United States decided to end the application of the safeguard measure on imports of lamb meat, effective in November 2001.

Korea challenged several aspects of the U.S. antidumping investigation and measures on imports of stainless steel plate in coils (plate) and stainless steel sheet and strip (sheet) from Korea. Specifically, Korea challenged the U.S. treatment of currency conversions and of sales to U.S. companies that failed to pay for the imports due to bankruptcy. Korea also challenged the U.S. calculation of the dumping margin. The panel found several aspects of the U.S. investigation to be inconsistent with the ADA. It found that the currency conversions in the sheet investigation were inconsistent with the ADA, though it also found that the conversions in the plate investigation were consistent with the ADA. The panel also found the U.S.
treatment of sales for which payment was never received and its use of multiple averaging periods in its calculation of dumping margins were inconsistent with the ADA. Accordingly, the panel recommended that the United States bring its antidumping duties on Korean steel plate and sheet into compliance with the ADA. The panel findings were not appealed. As of April 2003, the antidumping orders were still in effect. According to officials from the Commerce Department, the United States made some revisions in its calculation of dumping margins in this case.

Japan challenged the U.S.’s imposition of antidumping duties on imports of hot-rolled steel products from Japan. Japan claimed that certain provisions of U.S. antidumping laws, regulations, and administrative procedures were inconsistent with the ADA. For example, Japan challenged the U.S.’s application of “facts available” and adverse facts as inconsistent with its ADA obligations. Japan also challenged the U.S.’s statutory method for calculating an “all others” rate as inconsistent with the ADA. The Appellate Body upheld panel findings of U.S. violations relating to the use of facts available, adverse facts, calculation of the “all others” rate, and application of the arm’s-length test. The Appellate Body also reversed the panel finding on the issue of nonattribution without specifically finding against the United States on that issue. Although the Appellate Body upheld a panel finding that United States law on captive production was consistent with the ADA, it reversed the panel’s finding that the United States had applied the law properly. As a result of the findings against the United States, the Appellate Body recommended that the DSB request that the United States bring its measures into conformity with the ADA. The Appellate Body also made important statements about the standard of review in article 17.6 of the ADA.
In November 2002, the United States completed a new antidumping determination that implemented the recommendations and rulings of the DSB. As a result of the changes made to the dumping margin calculations, the dumping margins for all three companies and all others were reduced. The United States also revised its rules regarding its arm’s-length test to determine if sales are “in the ordinary course of trade.” The United States continues to work to implement the recommendations and rulings regarding the U.S. antidumping statutory provision on the “all others rate.” The United States and Japan agreed to extend the deadline for implementation to December 2003, or until the end of the first session of the next Congress, whichever is earlier.

European Union (EU)

The EU challenged Argentina’s imposition of antidumping measures on imports of ceramic floor tiles from Italy. The antidumping measures took the form of specific antidumping duties that were based on the difference between the actual import price and a designated minimum export price, whenever the former was lower than the latter. The EU claimed that the antidumping measures were inconsistent with various provisions of the ADA. Among other things, the EU maintained that Argentina disregarded important information provided by exporters, failed to allow for differences in physical characteristics between models of tiles exported to Argentina and those sold in Italy, and did not inform Italian exporters of important facts that formed the basis for the decision to apply antidumping measures. The panel found that Argentina acted inconsistently with various provisions of the ADA and upheld most of the EU claims. As a result, the panel recommended that Argentina bring its antidumping measures into conformity with its obligations under the ADA. The panel findings were not appealed. In May 2002, Argentina informed the DSB that on April 24, 2002, it had revoked the antidumping measure at issue in this case.
Canada directly challenged a number of U.S. legal measures that it argued required the United States to treat export restraints as financial contributions, and thus potential subsidies, in violation of the SCM Agreement. Canada argued that export restraints could result in providing subsidies to other products that used or incorporated the restricted product when the domestic price of the restricted product was affected by the restraint. Canada’s challenge was only to U.S. legal measures and not to a particular instance in which an export restraint had been the subject of a CVD investigation. The panel found against Canada and concluded that U.S. CVD law is not inconsistent with the SCM Agreement; U.S. law does not require that export restraints be treated as financial contributions and thus subsidies. In addition, the panel suggested that three of the legal measures Canada contested could not be challenged independently of the relevant U.S. statute. To facilitate its analysis of the challenge to the U.S. legal measures, the panel first concluded that export restraints, as defined in the dispute, do not constitute financial contributions within the meaning of the SCM Agreement. The panel findings were not appealed. No compliance action was necessary.

(DS 202)

Korea challenged the U.S. imposition of a safeguard measure on imports of certain line pipe from Korea. The safeguard measure that was imposed was in the form of a duty increase for 3 years. The measure applied to imports from all WTO members except Canada and Mexico. Korea maintained that parts of the U.S. investigation as well as the safeguard measure itself violated provisions of the Safeguards Agreement and GATT 1994. The panel and the Appellate Body found several aspects of the U.S. safeguard measure to be inconsistent with provisions of the Safeguards Agreement and GATT 1994. This included U.S. determinations about causation.
The Appellate Body also reversed several panel findings about exclusion of certain WTO members from the safeguard measure and the extent of application of the measure, which resulted in findings against the United States. The Appellate Body also reversed the panel on one of its injury findings, which resulted in upholding a United States determination. As a result of the findings against the United States, the Appellate Body recommended that the DSB request that the United States bring its measure into conformity with the Safeguards Agreement and GATT 1994. In July 2002, the United States and Korea agreed on several steps to implement the recommendations of the DSB. They agreed that the United States would increase the amount of imports exempt from additional tariffs, beginning in September 2002 and ending in March 2003. The measure then expired in March 2003.

(DS 206)

India challenged several aspects of the U.S. antidumping investigation for imports of steel plate from India. Specifically, India challenged the U.S. rejection of certain sales information and its reliance on facts available in its investigation. India further challenged U.S. statutory provisions governing the use of “facts available” and the U.S. treatment of India as a developing country. The panel upheld the U.S. statutory provisions governing the use of “facts available,” but found that the United States had not provided a legally sufficient justification for rejecting some sales information during its investigation. Accordingly, the panel recommended that the DSB request that the United States bring its antidumping measure into conformity with its obligations under the ADA. The panel also found that the U.S. “practice” governing total facts available is not a “measure” that can violate the ADA. The panel findings were not appealed.
In February 2003, the United States informed the DSB that it had implemented the WTO’s ruling by issuing a second determination regarding antidumping duties imposed on imports of steel plate from India. Also in February 2003, the United States and India came to an agreement regarding the procedure to be followed if India believes that the United States has not fully complied with the findings and recommendations of the DSB.

Argentina made two distinct challenges to Chilean restrictions on imports of Argentine wheat, wheat flour, sugar, and edible vegetable oils. Specifically, Argentina challenged both Chile’s price band system, which Chile applied to calculate tariff rates on these imports, and its imposition of safeguard measures on these imports. In certain situations, the use of Chile’s price band system resulted in tariff rates higher than the bound tariff rate in Chile’s WTO schedule. Chile also used its price band system to calculate the safeguard measures it imposed on the Argentine imports. Argentina claimed (1) that Chile’s price band system violated GATT 1994 and the WTO Agreement on Agriculture and (2) that Chilean safeguards violated various provisions of the Safeguards Agreement as well as GATT 1994. Argentina’s safeguards challenges were directed at how Chile evaluated increases in imports, the causal connection between imports and injury to Chile’s domestic industry, and the scope of the safeguard measures. With respect to the safeguards issues, the panel determined that Chile had violated various provisions of the Safeguards Agreement and GATT 1994. Nevertheless, the panel did not make a recommendation regarding removal of the safeguard measures because they had been removed before the panel published its report. Although the panel findings on safeguards were not appealed, the Appellate Body upheld panel findings that Chile’s price band system was inconsistent with GATT 1994 and the Agreement on Agriculture.
As a result, the Appellate Body recommended that the DSB request that Chile bring its price band system into conformity with its obligations under the Agreement on Agriculture. No action was required with regard to the safeguard measures. Chile’s compliance with regard to its price band system involves the WTO Agreement on Agriculture and is due by December 23, 2003.

Turkey challenged Egypt’s imposition of antidumping duties on imports of steel rebar from Turkey. The antidumping duties imposed ranged from about 23 percent to 61 percent, depending on the exporter. Turkey contended that Egypt’s determinations of injury and dumping and the causal relationship between the dumped imports and injury to the domestic industry were inconsistent with the ADA. A number of Turkey’s claims involved questionnaires that the Egyptian investigating authority sent to respondent companies requesting information about sales prices and the cost of producing rebar. Although the panel upheld 19 determinations of the Egyptian investigating authority, it found that Egypt had violated articles 3.4 and 6.8 of the ADA. Accordingly, the panel recommended that Egypt bring its definitive antidumping measure on imports of steel rebar from Turkey into compliance with the ADA. The panel findings were not appealed. In November 2002, Egypt and Turkey informed the WTO that they had agreed Egypt would implement the DSB’s recommendations and rulings by July 31, 2003. In May 2003, Egypt reported to the WTO that it was reexamining the dumping calculations of two Turkish companies, and the general injury assessment, in light of this case.

European Union (EU) (DS 212)

The EU challenged U.S. CVDs resulting from 12 investigations on imports of certain EU steel products. The steel products subject to these proceedings were formerly produced by state-owned enterprises that had been privatized in arm’s-length transactions for fair market value.
The EU complained that the two methodologies the United States used to determine whether past subsidies continued to benefit the privatized company violated the SCM Agreement. In addition, the EU claimed that a provision of U.S. countervailing law—section 771(5)(F) of the Tariff Act of 1930—was, on its face, inconsistent with that agreement. The panel found that where a privatization is at arm’s length and for fair market value, the benefit from a prior subsidy to a state-owned enterprise is not passed on to the privatized entity. The Appellate Body affirmed the panel’s finding that the Commerce Department’s privatization methodologies were inconsistent with the SCM Agreement but disagreed with the panel reasoning that a fair market value sale of a government entity necessarily extinguishes prior subsidy benefits. The Appellate Body reversed the panel and found that section 771(5)(F) of the Tariff Act of 1930 was consistent with the SCM Agreement. On June 23, 2003, the Commerce Department published in the Federal Register its final modification to its privatization practice in order to comply with the WTO’s rulings and recommendations. The parties have agreed that the United States will use the new methodology in the 12 disputed investigations and reviews by November 8, 2003, and in future cases. In addition, Commerce is evaluating how many other CVD orders might be affected by this new methodology.

European Union (EU)

The EU challenged provisions of U.S. countervailing law and regulations as well as application of the law and regulations to a sunset review of a CVD order on certain imports of carbon steel from Germany.
The EU argued that, among other things, the United States had acted inconsistently with the SCM Agreement by automatically self-initiating the sunset review, by failing to apply a 1 percent de minimis standard of subsidization set forth in the SCM Agreement, and by applying an improper standard to determine whether a continuation or recurrence of subsidization was likely. The Appellate Body upheld the panel findings that U.S. laws—regarding (1) the automatic self-initiation of sunset reviews and (2) the obligation in the SCM Agreement to determine the likelihood of continuation or recurrence of subsidization in sunset reviews—were consistent with the SCM Agreement. Nevertheless, with regard to the de minimis standard, the Appellate Body reversed the panel and found that the 1 percent de minimis standard applied only to initial CVD investigations and not to sunset reviews of CVD orders. Accordingly, it found that U.S. law setting forth a de minimis subsidization threshold for sunset reviews below that set forth for original investigations, as well as its application, was consistent with the SCM Agreement. In an issue that was not appealed, the panel found that the United States had acted inconsistently with the SCM Agreement in the sunset review by failing to properly determine the likelihood of the continuation or recurrence of subsidization. On the basis of this finding, the Appellate Body recommended that the United States bring its CVD measure into conformity with the SCM Agreement. The United States has agreed to implement the panel’s finding on the likelihood of continuation or recurrence of subsidization. Commerce Department officials said that implementation would require the agency to conduct a new sunset analysis with respect to this particular German steel order, but would not require a regulatory change.

(DS 221)

Canada directly challenged section 129(c)(1) of the U.S.
Uruguay Round Agreements Act (URAA), claiming that it was inconsistent with provisions of a number of WTO agreements. Canada specifically argued that section 129(c)(1) of the URAA has the effect of requiring the United States to act inconsistently with, or precludes the United States from complying with, various agreements. The panel found that Canada had failed to establish that section 129(c)(1) is inconsistent with WTO rules. The panel findings were not appealed. No compliance action was necessary.

Canada challenged the U.S. imposition of provisional CVD measures on certain softwood lumber imports from Canada. Canada also claimed that the U.S. law and regulations concerning expedited and administrative reviews of CVD orders were, in several respects, inconsistent with the SCM Agreement and Article VI of GATT 1994. Although the panel upheld the United States on several issues, including the direct challenges to U.S. law, it found that the methodology the Commerce Department used to determine the subsidy benefit was inconsistent with the SCM Agreement. The panel also found that the Commerce Department’s retroactive application of the provisional measure was inconsistent with the SCM Agreement. Accordingly, it recommended that the DSB request that the United States bring its provisional measure into conformity with its obligations under that agreement. The panel findings were not appealed. In November 2002, the United States notified the DSB that the CVD measures challenged by Canada were no longer in effect and that the provisional cash deposits had been refunded. Canada, however, argued that Commerce’s final determination was substantially unchanged and subsequently brought another WTO complaint challenging that determination. The WTO panel’s decision in that case is due to be made public around the time this report is issued.

The following are GAO’s comments on the Department of Commerce’s letter dated July 14, 2003.

1.
Our report presents data on changes to WTO members’ laws, regulations, and practices that have resulted from WTO rulings through December 2002. The data clearly indicate there have been few changes in WTO members’ laws, regulations, and practices to date. 2. In response to the Commerce Department’s (and the ITC’s) comment(s), we modified our characterization of U.S. agency views on the impact of WTO rulings on the U.S.’s ability to impose trade remedies. The sections of this report that provide U.S. agencies’ viewpoints now reflect the agencies’ increased emphasis on the potential future ramifications of WTO decisions indicated by the Commerce Department (and ITC). 3. The Commerce Department states that our report’s presentation implies that the impact of the WTO dispute settlement system on members’ ability to impose trade remedies must be small based on statistical information we present. However, our report simply provides data on the number of WTO members’ measures that were notified to the WTO from 1995 through 2002 and the number that were challenged. Moreover, we have modified the report to reflect agency concerns about the impact of the dispute settlement system on members’ ability to impose trade remedies. 4. While our report provides aggregate data on the number of trade remedy measures imposed by all WTO members from 1995 to 2002, it was beyond the scope of our review to analyze trends in the growth of these measures for individual WTO members and reasons for the challenges to these measures. 5. While the Commerce Department raised concerns regarding the composition of the group of legal experts we consulted, we believe that our methodology for selecting these experts as outlined in appendix I is sound. As noted, we selected individuals who were identified as leading experts on WTO dispute settlement. 
These individuals—academics, practitioners, and advisors on WTO-related trade remedy issues—have been active in writing and/or speaking about issues pertaining to WTO dispute settlement. Moreover, the Commerce Department’s assertion that we only included three experts representing domestic petitioners’ interests is incorrect. Although we did not choose experts on the basis of their expressed views because we believe that approach would have been methodologically flawed, our information indicates that of the nine practitioners we interviewed, three represent domestic petitioners, three represent foreign respondents, and three represent both. Nevertheless, in responding to agency comments, we reviewed our decision rule on the composition of the group of experts we consulted. Subsequently, we excluded the views of the current WTO official and the EU representative from our discussion of expert views since we did not include current U.S. officials in this group. However, we briefly noted the views of the current WTO and EU officials. 6. While we believe that our report sufficiently emphasizes the concerns of the minority of experts regarding standards of review and the other trade remedy issues discussed in this report, we have made modifications to the relevant sections of our report to ensure that majority positions and minority concerns are presented in a balanced manner. 7. See comment 2. 8. In response to the Commerce Department’s (and the ITC’s) comment(s), we added a section to our report presenting U.S. agencies’ positions on WTO dispute settlement issues, including the executive branch’s position as outlined in its December 2002 report to Congress. 9. In response to the Commerce Department’s comments, we have added material to our report that discusses relevant aspects of the recent U.S. submission to the WTO Negotiating Group on Rules. The following are GAO’s comments on the U.S. International Trade Commission’s letter dated July 14, 2003. 1.
In response to the ITC’s (and Commerce’s) comment(s), we modified our characterization of U.S. agency views on the impact of WTO rulings on the U.S.’s ability to impose trade remedies. The sections of this report that provide U.S. agencies’ viewpoints now reflect the agencies’ increased emphasis on the potential future ramifications of WTO decisions. 2. In response to the ITC’s comments, we have added some discussion of the safeguards issues that the ITC raises in the report’s section on expert views and U.S. agencies’ positions. 3. In response to the ITC’s comments, we have added some discussion of their views on article 17.6(ii) in the report’s section on expert views and U.S. agencies’ positions. 4. See comment 1. In addition to those named above, Jason Bair, Josey Ballenger, Sharron Candon, Martin De Alteriis, Rona Mendelsohn, Mary Moutsos, Mark Speight, and Laura Turman made key contributions to this report.
World Trade Organization (WTO) members rely on trade remedies in the form of duties or other import restrictions to protect their industries from injury due to unfair foreign trade practices or unexpected import surges. There is congressional concern that the WTO, created in 1995 to administer trade rules, is interfering with this ability. There is also congressional concern that the WTO is not treating the United States fairly in resolving trade remedy disputes. A congressional requester asked GAO to identify trends in WTO trade remedy disputes since 1995, including the outcomes of these disputes and how they affected members' ability to impose trade remedies. The requester also asked GAO to discuss the standards of review that the WTO applies when ruling on trade remedy disputes and to present U.S. agencies' and legal experts' views on the WTO's application of these standards and related trade remedy issues. In their comments on a draft of this report, the Department of Commerce and the U.S. International Trade Commission stated that the report needed to put more emphasis on U.S. agencies' concerns about the potential adverse impact of WTO rulings on the U.S.'s use of trade remedies. The U.S. Trade Representative provided only technical comments on the report. GAO modified the report as appropriate. About a third of the cases filed in the WTO dispute settlement system from 1995 through 2002 challenged members' trade remedies, with the ratio of such cases increasing over time. Although a relatively small proportion of WTO members' trade remedy measures were challenged in the WTO, the United States faced substantially more challenges than other WTO members. The WTO generally rejected members' decisions to impose trade remedies in the 25 trade remedy disputes resolved from 1995 through 2002. However, GAO found that the WTO ruled for and against the U.S. and other members in roughly the same ratios. 
Overall, WTO rulings resulted in few changes to members' laws, regulations, and practices but had a relatively greater impact on those of the United States. While U.S. agencies stated that WTO rulings have not yet significantly impaired their ability to impose trade remedies, they had concerns about the potential future adverse impact of WTO rulings. Of the legal experts GAO consulted, a majority concluded that the WTO has properly applied standards of review and correctly ruled on major trade remedy issues. However, a significant minority strongly disagreed with these conclusions. U.S. agencies also said that the WTO has not always properly applied the standards and has, in some cases, imposed obligations on members that are not found in WTO agreements. Nonetheless, the experts almost unanimously agreed that the WTO was not treating the United States any differently than other members.
Responsibility for designing and carrying out federal export promotion programs is widely dispersed. Numerous federal agencies have offices across the country and overseas and operate a wide variety of programs that are intended, at least in part, to assist U.S. companies in entering foreign markets or expanding their presence abroad. For example, agencies provide companies with information on market opportunities and help them connect with potential buyers abroad, provide access to export financing, and negotiate with other countries to lower trade barriers. The dispersion of export promotion activities among numerous agencies led us to observe in a 1992 report that “funding for … agencies involved in export promotion is not made on the basis of an explicit government-wide strategy or set of priorities. Without an overall rationale it is unclear whether export promotion resources are being channeled into areas with the greatest potential return.” In 1992, Congress passed the Export Enhancement Act of 1992, which directed the President to establish the TPCC. The TPCC is chaired by the Secretary of Commerce, and its day-to-day operations are carried out by a secretariat that is housed in Commerce’s International Trade Administration. The TPCC has 20 members, including 7 core members. Oversight of these agencies is dispersed across many congressional committees. Table 1 identifies the authorizing and appropriating subcommittees with jurisdiction over the seven core TPCC agencies. We have reviewed the TPCC’s operations on several occasions since its creation in 1992. We have found that the TPCC and its member agencies have improved coordination in several areas, but we also found shortcomings in the committee’s response to the budget-related portions of its mandate. In 2002, we observed that the Secretary of Commerce, as the chair of the TPCC, made recommendations to the President, through OMB, on selected export promotion budget matters on multiple occasions.
However, with no authority to reallocate resources among member agencies and occasional agency resistance to its guidance, the TPCC provided limited direction over the use of export promotion resources in support of its strategies. We also noted that the TPCC had not used its National Export Strategies to examine how agencies’ resources aligned with their goals, and we recommended that the TPCC consistently do so. The TPCC agreed with our findings and recommendation. However, in 2006 we determined that the committee had not implemented our recommendation; we found that the committee’s annual strategies did not review agencies’ allocation of resources in relation to identified priorities. In 2009, we observed that the TPCC’s most recently published National Export Strategy continued to lack an overall review of agency resource allocations relative to government-wide priorities. Export promotion has recently been emphasized as a high priority for the federal government. In his 2010 Executive Order announcing the NEI, the President emphasized that creating jobs and sustainable economic growth in the United States was his top priority, and that increasing exports was a critical component of those efforts. He also laid out eight priority areas to be addressed through the NEI. OMB subsequently identified the NEI’s goal of doubling U.S. exports as one of 14 interim crosscutting priority goals under the GPRA Modernization Act. Additionally, as part of his 2013 and 2014 budget proposals, the President proposed consolidating six departments and agencies involved in export promotion into one new cabinet-level department. In his directives regarding the NEI, the President established a new body, the Export Promotion Cabinet, to develop and implement the initiative. The Export Promotion Cabinet is coordinated by a White House official, has most of the same member agencies as the TPCC, and is to coordinate its efforts with the TPCC. 
Among other things, the President tasked the Export Promotion Cabinet to work with the TPCC to determine how resources should be allocated. In particular, a February 2012 Presidential Memorandum instructed the Export Promotion Cabinet, in consultation with the TPCC, to evaluate the current allocation of federal government resources, make recommendations to the Director of OMB for their more effective allocation, and propose a unified federal trade budget, consistent with the administration’s priorities, to the Director of OMB as part of the annual process for developing the President’s budget. The Export Enhancement Act states that the TPCC’s strategies should establish a set of priorities for federal export promotion activities and propose a unified federal trade promotion budget that supports the plan. Additionally, we have previously reported that one of the six characteristics of an effective interagency national strategy is that it identifies the resources needed to carry out the strategy. Specifically, an effective national strategy should address what it will cost, the sources and types of resources and investments needed, and where resources and investments should be targeted based on balancing risk reductions with costs. The most recent National Export Strategies, published in 2011 and 2012, outline federal priorities for export promotion, but provide little information on member agencies’ resources for carrying out these priorities. Both strategies outline progress made toward the eight NEI priorities and identify specific areas federal agencies will focus on in the coming year. In fact, the 2011 strategy includes the NEI recommendation to “increase the budget for trade promotion infrastructure” as one of five critical recommendations on which TPCC agencies would focus. 
However, these strategies do not provide summary information on the total resources available for export promotion and do not discuss how resources are currently allocated across priorities. Without this information, decision makers lack a clear understanding of the total federal resources being dedicated to export promotion activities, and it is not possible to assess the appropriate levels or allocations of export promotion resources. The 2011 and 2012 strategies contain very limited discussions on agencies’ export promotion resources, consisting only of a few bullets that broadly discuss agencies’ budget requests. For example, figure 1 reproduces in its entirety the section in the 2012 report titled “The Administration’s FY2013 Trade Promotion Budget.” The section includes three bullets relating to agencies’ requested export promotion budgets for 2013, but provides no context on the total federal export promotion budget or on the budgets of the individual agencies it discusses. The first bullet, for example, notes that the President’s budget proposed $30.3 million in additional funding for the U.S. and Foreign Commercial Service’s overseas export promotion activities. However, it does not indicate what the Commercial Service’s baseline budget is, whether the increase supports specific priorities laid out in the strategy, or whether resources could be shifted from existing Commerce activities, or from other agencies, to meet these needs. The remaining bullet points do not tie specific funding requests to individual agencies. The second bullet states that the fiscal year 2013 President’s budget seeks “support” for SBA’s Office of International Trade without stating what amount of funding, if any, SBA is requesting. The final bullet point simply states that five other core TPCC agencies seek a total increase of $19 million over 2012 funding levels. 
Despite the current emphasis on export promotion as a high-priority goal, the level of detail on agencies’ budgets presented in the TPCC’s National Export Strategies has decreased. During much of the 1990s, the TPCC provided trade promotion budget information by agency and by activity, noting as it did so that presenting meaningful information across agencies was difficult because of the variety of programs involved. The strategies provided in-depth tables on how agency resources were allocated; the 1997 report, for example, included 44 pages of material on this topic. After 2000, the TPCC stopped reporting budget information in such depth. The National Export Strategies from 2002 through 2008 provided only a summary budget table that presented information on each agency’s total budget authority for export promotion activities. As already noted, the most recent reports have eliminated these summary budget tables. Figure 2 compares the budget information presented by the TPCC in 1996, 2004, and 2012. TPCC secretariat officials acknowledged that the amount of budget information presented in the National Export Strategies has declined and that the TPCC members currently place little emphasis on displaying or discussing agencies’ resources. They noted that changes in the political and budget environment over time have affected the TPCC’s processes. First, TPCC secretariat officials said that in the early 2000s, the TPCC shifted its focus away from resources in favor of efforts to improve the management of existing programs. For example, in 2003, a TPCC secretariat memo to member agencies stated that, given the budget environment, agencies should assume their budgets would be flat. The TPCC recommended that agencies look for opportunities to leverage resources through coordination or by sharing costs.
Because the TPCC anticipated that members’ appropriations would not be increasing, secretariat officials stated that the TPCC largely stopped talking about or examining resources. Officials further noted that, while the NEI has generated enthusiasm for export promotion, the TPCC’s current focus remains on better managing and coordinating existing resources. Second, TPCC secretariat officials stated that because final appropriations have not been passed until later in the fiscal year, it has been more difficult to collect up-to-date budget data. Finally, though GPRA sought to improve agency management and reporting processes, TPCC secretariat officials indicated that member agencies’ increasing efforts to comply with the law in 1999 hindered the committee’s ability to do crosscutting analyses. Officials found that agencies focused on their own specific core priorities and on developing agency-specific performance plans, which complicated the TPCC’s ability to obtain and track export promotion budgets. The TPCC periodically collects summary data on agencies’ total budget authority for export promotion activities with OMB’s assistance. According to OMB staff, OMB asks agencies’ budget offices to self-identify their activities that relate to export promotion and compile a summary budget number. OMB resource management offices typically review the numbers provided by the agencies to ensure they are reasonable. Table 2 below reproduces the last table publicly released by the TPCC in its 2008 National Export Strategy, including its footnotes. According to OMB staff, OMB only compiles this information when requested by the TPCC, and the committee last requested this data in the spring of 2011. Because the TPCC opted not to make these data public in that year’s National Export Strategy, OMB staff did not fully review them. Therefore, OMB staff requested that we not publish the data collected in 2011.
We nevertheless examined the more recent information the TPCC provided us, which included actual budget data for the same member agencies as shown in table 2 from fiscal years 1994 through 2010 and agencies’ requested budget for fiscal years 2011 and 2012. The TPCC used the same process to collect data in 2011 that it used for the 2008 National Export Strategy. Therefore, our discussion below, which identifies several significant issues impacting the reliability and usefulness of the data, focuses on the 2011 update but also generally applies to the data presented in table 2. According to TPCC secretariat officials, the committee has initiated efforts to further update this information, but officials have not indicated whether they plan to make it public as part of a future National Export Strategy. The data the TPCC collects are not useful for assessing the allocation of export promotion resources. To be useful for assessing how agencies’ resources are allocated, data should, among other things, be consistent and sufficiently comprehensive for the intended purpose. Moreover, collaborating agencies would need to use compatible methods to track funding. Additionally, we have reported on the importance of agencies providing appropriate levels of detail in budgeting documents. For example, prior to the creation of the Department of Homeland Security, we noted that crosscutting funding data provided in an OMB annual report on combating terrorism had limited utility for decision makers, in part because it did not include data on obligations or on duplication in programs for combating terrorism. We identified several issues with the TPCC’s most recent data, from 2011, and determined that the data are neither consistent across agencies nor comprehensive enough to indicate how resources are allocated across priorities or the overall cost of carrying out the National Export Strategy. 
Agencies use different definitions: According to TPCC secretariat and OMB staff, each agency independently defines export promotion and self-identifies the activities to include in its export promotion budget. The TPCC’s data include few explanatory notes about how each agency’s budget was computed, making it difficult to compare numbers across agencies or understand what activities are included for each agency. In fact, TPCC secretariat officials were not always certain what each agency’s number represented. Because agencies use different definitions, there is no assurance that TPCC’s data treat similar activities consistently. For example, SBA, OPIC, and Ex-Im all provide some form of export financing, but the TPCC’s data for these agencies represent three different aspects of their budgets. SBA’s data show the administrative expenses for its Office of International Trade, which is responsible for its export loan programs. OPIC’s data capture the agency’s total impact on the federal budget but do not provide any indication of the costs of operating its financing programs. Ex-Im’s data show the appropriations for its Office of Inspector General, but do not include any information on the costs of operating its financing programs or the agency’s total impact on the federal budget. The reasons for including or excluding agencies are not always clear: An example of the lack of clarity in how the TPCC treats member agencies is that its summary budget table does not include USAID, noting that it does not do so because the agency’s activities support trade promotion indirectly. However, the TPCC’s data include OPIC, which also focuses on international development and only indirectly supports exports. Moreover, the TPCC’s table continues to include other agencies, such as the Department of the Treasury, which do not directly fund trade promotion activities. 
Nonetheless, as we noted in 2006, portions of several National Export Strategies continued to highlight export promotion programs involving USAID. According to TPCC secretariat officials, member agencies decide whether or not they have export promotion programs and whether to provide resource data. The data are not detailed enough to align with priorities: The TPCC’s summary budget table presents data at a very high level, with one number for each agency, and provides no information on specific activities or programs. Without greater detail, it is not possible to understand whether or how agency resources are aligned with the priorities laid out in the National Export Strategy and National Export Initiative. Some TPCC member agencies conduct activities in more than one priority area. For example, among other activities, Commerce supports U.S. business in conducting trade missions and also works to reduce barriers to trade, both of which are priority areas in the National Export Initiative. Among its many activities, USDA supports the goals of increasing exports by small and medium-sized enterprises and increasing export credit available to U.S. businesses. Because it only presents information at a high level, the TPCC’s table does not allow users to understand how federal resources are being allocated across these, or other, priority areas. The data are not current: The TPCC’s data are not comprehensive because they do not include current information about agencies’ resources. The TPCC last updated its information in April 2011 and that summary budget table reflected agency budget requests for fiscal year 2012. The President released his fiscal year 2013 budget request in February 2012. Nonetheless, the latest data collected by the TPCC do not reflect fiscal year 2013 requests, nor do they show actual data for 2011, or estimates for 2012. 
Moreover, because the TPCC opted not to include the data in its National Export Strategy, OMB staff never fully vetted the data collected in 2011. Therefore, the most recent fully vetted data on federal export promotion resources are from 2008. Budget authority data do not fully reflect the costs of all agencies’ programs: Finally, the TPCC’s use of total budget authority data provides an incomplete picture of the costs of some agencies’ programs. For example, OPIC is self-funded through receipts collected on its financing activities and has a net negative budget authority, meaning it returns money to the U.S. government. However, it does receive annual instructions from Congress on the amount of money it can spend on administrative and program expenses for its financing programs. While the TPCC’s use of total budget authority data may accurately represent one aspect of an agency’s impact on the overall federal budget allocated for export promotion, it is not sufficiently detailed to fully understand the agency’s contributions toward export promotion. For example, the TPCC’s number does not indicate the costs associated with operating OPIC’s financing programs or how much financing its budget supports. Without consistent and comprehensive information on export promotion resources, the TPCC cannot accurately assess the levels and allocation of resources among agencies. Thus, decision makers in Congress and the administration do not have full information about the U.S. government’s investment in export promotion and cannot determine whether resources are being allocated to the highest priority areas. Further, without information on export promotion resources, neither the TPCC nor the Export Promotion Cabinet can make informed recommendations about their appropriate allocation across agencies. Additionally, the Export Enhancement Act requires the TPCC to identify overlap and duplication among export promotion programs.
However, as we have reported, it is difficult to gauge the magnitude of the federal commitment to a particular area of activity or assess the extent to which federal programs are duplicative without a clear understanding of the costs of implementing those programs and the activities they support. According to TPCC secretariat officials, the TPCC does not provide any guidance to agency officials on what budget information should be reported or how agencies should determine which activities should be included as export promotion. In the past, the TPCC provided guidance on the information member agencies should submit on their export promotion budgets. We reported that the data presented by the TPCC fostered a better understanding of historic and potential expenditures. The lack of clear TPCC guidance makes it difficult for agencies to provide, and for the committee to collect, comparable budget information. Without clear guidance, TPCC agencies use different definitions for export promotion in compiling budget information. Many agencies’ programs have multiple objectives, some of which are directly related to export promotion and some of which are not. For example, USDA’s export promotion programs also fulfill domestic agricultural objectives. According to OMB staff, this makes it challenging to clearly determine what activities should be considered export promotion. OMB staff stated that TPCC secretariat and OMB staff have had some preliminary discussions about developing standardized definitions of what activities should be considered export promotion and how data should be reported. However, these discussions are in the early stages, and the TPCC would need to decide what information it wants to include in the National Export Strategies before moving forward. Similarly, the TPCC does not supply guidance that could help clarify what level of detail agencies should provide to them. 
As the TPCC noted in its 2000 National Export Strategy, its ability to collect and present detailed budget information is limited by agencies’ abilities to generate comparable data within their varied accounting structures. In developing guidance, the TPCC could work with member agencies to determine a reasonable level of detail and identify the limitations of the data. For example, in 2000, the TPCC provided details on agencies’ expenditures in major federal export promotion areas, such as combating foreign export subsidies. However, it included a caveat that detailed budget numbers below the overall agency total can be difficult to validate and should only be used as an indication of the resources available for each area. There are lessons to be learned from other bodies coordinating crosscutting government programs and facing similar challenges. For example, like the TPCC, the Office of National Drug Control Policy (ONDCP) has a statutory requirement to develop a national strategy and propose a consolidated budget to implement that strategy. ONDCP’s process for developing the National Drug Control Strategy and its associated budget is not a perfect comparison for the TPCC because ONDCP has different authorities for reviewing and suggesting changes to member agencies’ budgets. However, its process for collecting and compiling data can highlight the usefulness of providing clear and detailed guidance. ONDCP provides detailed guidance to relevant agencies on how to assemble budget information. Its guidance includes a sample budget table that identifies the level of detail agencies should provide, including a list of the functions, such as corrections or interdiction, agencies should report on. ONDCP’s guidance also defines those functions and identifies which activities should be included in each function.
In 2011, we reported that, while drug control agency officials raised some concerns about ONDCP’s budget process, officials at 4 of 6 agencies stated that it was somewhat or very effective at providing a record of national drug control expenditures, among other things. Clear guidance can help overcome challenges and make the data collected by interagency groups more useful for understanding how resources are currently allocated across agencies and activities, as illustrated by the ONDCP example. The TPCC’s lack of guidance impedes the collection of accurate, comprehensive, and consistent information necessary to understand how resources are allocated among priorities. Without clear guidance, TPCC agencies are using nonstandardized definitions to identify activities that relate to export promotion and are not clear about what level of detail is required. In announcing the National Export Initiative, the President not only reemphasized the importance of exports to the U.S. economy, but specifically highlighted the need to understand and coordinate federal resources for export promotion. However, the TPCC does not provide decision makers—including Congress and the Export Promotion Cabinet—with information that provides a clear understanding of how resources are currently allocated across the country and around the world among its member agencies or across federal export promotion priorities. In fact, the amount of information the TPCC has reported on agencies’ resources has declined. The TPCC has responded to the National Export Initiative by reporting on efforts to address established priorities and working to improve interagency coordination, but the committee currently places almost no emphasis on understanding the federal resources dedicated to implementing the National Export Strategy, as is called for in good practices. 
In the absence of clear guidance, the data the TPCC collects are not comparable across agencies and not comprehensive enough to allow the TPCC to determine how resources are currently allocated in support of priority activities. Furthermore, without better resource data, neither the TPCC nor the Export Promotion Cabinet can make informed recommendations about how federal resources should be allocated. As policymakers review the success of the NEI and consider the President’s request for authority to consolidate trade agencies in a single department, it is important to understand how federal resources are being spent. Without consistent and comprehensive information on export promotion resources—presented transparently through the TPCC’s annual strategies—decision makers in Congress and the administration cannot determine whether the return on the federal investment in export promotion is adequate or make informed decisions about future resource allocations. To improve the consistency, comprehensiveness, and transparency of information provided to Congress and policymakers on the federal investment in export promotion programs, the Secretary of Commerce, as chair of the TPCC, should

1. develop and distribute guidance for member agencies on what information they should provide the TPCC on the resources they spend on export promotion activities, and
2. report in its National Export Strategies on how resources are allocated by agency and aligned with priorities.

We provided drafts of this report to the Secretary of Commerce, as chair of the TPCC, and to OMB. In written comments reprinted in appendix II, the Director of the TPCC Secretariat generally concurred with our recommendations on behalf of the Secretary and stated that they intend to work with TPCC member agencies and the Export Promotion Cabinet to implement them. 
In particular, they plan to create a new TPCC Budget Working Group to establish a robust TPCC role in assessing the appropriate levels and allocation of resources among agencies, as called for in its mandate. TPCC Secretariat officials provided technical comments and suggested corrections and clarifications that we incorporated, when appropriate. Nevertheless, the Director noted the TPCC’s limited authority over budget reporting and resource allocations, including its inability to compel member agencies to provide budget and resource information. He gave examples of some challenges the TPCC faces, including shifts in the political and budgetary landscape and how different Administrations and Congresses have emphasized different priorities over time. However, he said the TPCC Secretariat will work within its existing authorities with TPCC agencies to address our recommendations. We support the establishment of a TPCC Budget Working Group and note that implementing the requirements of the Export Enhancement Act of 1992 is the responsibility of the committee, composed of the member agencies, under the leadership of the Chair and with the support of the secretariat. TPCC member discussions that improve the consistency, comprehensiveness, and transparency of information provided to Congress and policymakers can help overcome such challenges, facilitate well-informed resource decisions, and better support the National Export Initiative and the Export Promotion Cabinet. We also requested comments on a draft of this report from OMB. On June 21, OMB’s Office of General Counsel provided us with comments via e-mail. OMB noted that, while export promotion budgetary data have not been presented in a public document since the 2008 National Export Strategy, OMB annually compiles and reviews current and proposed resources across TPCC agencies that are devoted to export promotion and trade activities, as part of the development of the President’s budget. 
OMB further stated that it uses these data to ensure prudent government-wide allocation of export promotion-related resources and strong support for the President’s export promotion agenda, but that because these data are internal, pre-decisional, and deliberative, OMB does not share the cross-agency table outside of OMB, nor does it publish this information as part of the President’s budget or related materials. However, OMB commented that it consults with a number of officials, including the Assistant to the President and Deputy National Security Advisor for International Economics, as head of the Export Promotion Cabinet, when recommending export promotion-related resources in the President’s budget. We acknowledge that OMB conducts a review as part of the annual agency budget formulation process. However, this activity is distinct from the TPCC’s budget-related requirements in the Export Enhancement Act. As OMB notes, its activities are internal and deliberative and not shared outside OMB, including with the TPCC Secretariat or its member agencies. Thus, OMB’s process is not transparent to Congress or to other relevant parties and does not benefit from activities that could improve the consistency or comprehensiveness of this information. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 28 days from the report date. At that time, we will send copies to the Secretary of Commerce (in her capacity as Chairman of the TPCC), as well as the Director of OMB, interested congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-8612 or gianopoulosk@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. This report assesses the extent to which the Trade Promotion Coordinating Committee (TPCC) currently compiles and reports information on how budgetary resources are aligned with established export promotion priorities. To address this objective, we analyzed the laws and presidential directives that define what is required of the TPCC as an interagency coordinating body. These included the Export Enhancement Act of 1992, which directed the President to establish the TPCC; the 1993 Executive Order which established the TPCC in accordance with the 1992 act; the 2010 Executive Order announcing the National Export Initiative (NEI); and a subsequent (2012) Presidential Memorandum providing further instruction on Export Promotion Cabinet and TPCC collaboration to maximize the effectiveness of Federal trade programs. We also reviewed GAO’s guidance regarding data reliability and examined alternate models and good practices for coordinating and managing multi-agency initiatives as described in other GAO reports, including those covering the Government Performance and Results Act (GPRA) of 1993 and the GPRA Modernization Act of 2010. We reviewed the annual “National Export Strategy” reports to Congress that the TPCC has produced since its inception, focusing in particular on those prepared since the NEI was announced in 2010, as well as TPCC memoranda documenting efforts to compile and report budget information and develop a federal trade promotion budget. We also interviewed staff of the TPCC Secretariat, which is housed in the Department of Commerce, and staff of the Office of Management and Budget (OMB). 
To assess the reliability and usefulness of budget data collected by the TPCC, we took a number of steps, including (1) reviewing the data for internal consistency; (2) comparing TPCC’s data table with select agency budget documents, including Congressional Budget Justifications, appropriations bills, and agency financial or annual reports; (3) reviewing past GAO work on the TPCC’s budget; and (4) interviewing knowledgeable TPCC secretariat and OMB staff. Based on this assessment, we identified numerous issues with the TPCC’s data, as discussed in detail in this report. We present the TPCC’s data in the report only to illustrate our assessment of the data. We conducted this performance audit from February 2013 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Adam Cowles, Assistant Director; Michael McAtee, Analyst-in-Charge; Kara Marshall; and Karen Deans made key contributions to this report. Export Promotion: Small Business Administration Needs to Improve Collaboration to Implement Its Expanded Role. GAO-13-217. Washington, D.C.: January 30, 2013. National Export Initiative: U.S. and Foreign Commercial Service Should Improve Performance and Resource Allocation Management. GAO-11-909. Washington, D.C.: September 29, 2011. International Trade: Effective Export Programs Can Help In Achieving U.S. Economic Goals. GAO-09-480T. Washington, D.C.: March 17, 2009. Export Promotion: Trade Promotion Coordinating Committee’s Role Remains Limited. GAO-06-660T. Washington, D.C.: April 26, 2006. Export Promotion: Mixed Progress in Achieving a Governmentwide Strategy. GAO-02-850. 
Washington, D.C.: September 4, 2002. Export Promotion: Federal Agencies’ Activities and Resources in Fiscal Year 1999. GAO/NSIAD-00-118. Washington, D.C.: April 10, 2000. Export Promotion: Issues for Assessing the Governmentwide Strategy. GAO/T-NSIAD-98-105. Washington, D.C.: February 26, 1998. National Export Strategy. GAO/NSIAD-96-132R. Washington, D.C.: March 26, 1996. Export Promotion: Governmentwide Plan Contributes to Improvements. GAO/T-GGD-94-35. Washington, D.C.: October 26, 1993. Export Promotion: Initial Assessment of Governmentwide Strategic Plan. GAO/T-GGD-93-48. Washington, D.C.: September 29, 1993. Export Promotion Strategic Plan: Will It Be a Vehicle for Change? GAO/T-GGD-93-43. Washington, D.C.: July 26, 1993. Export Promotion: Governmentwide Strategy Needed for Federal Programs. GAO/T-GGD-93-7. Washington, D.C.: March 15, 1993. Export Promotion: Federal Programs Lack Organizational and Funding Cohesiveness. GAO/NSIAD-92-49. Washington, D.C.: January 10, 1992.
In 2010, the President launched the NEI with the goal of doubling U.S. exports over 5 years. More than 2 decades ago, Congress directed the President to establish the TPCC to provide a unifying framework for federal efforts in this area. Among other things, Congress directed the TPCC to assess the appropriate levels and allocations of resources and develop a government-wide strategic plan that identifies federal export promotion priorities, reviews current programs in light of these priorities, and proposes to the President a federal trade promotion budget that supports the plan. Congress also required the TPCC to submit annual reports to Congress describing the required strategic plan. This report assesses the extent to which the TPCC compiles and reports information on how federal export promotion resources are aligned with export promotion priorities. GAO reviewed the laws governing the TPCC and good practices for interagency initiatives, analyzed TPCC budget data and documents, and interviewed TPCC secretariat and Office of Management and Budget staff. The interagency Trade Promotion Coordinating Committee (TPCC) neither reports nor compiles information on how federal export promotion resources align with government-wide priorities. As a result, decision makers lack a clear understanding of the total resources dedicated across the country and around the world by TPCC member agencies to priority areas, such as increasing exports by small- and medium-sized businesses. GAO has previously reported that effective national strategies should address costs and has found shortcomings in the committee's response to the budget-related portions of its mandate. While the TPCC's National Export Strategy reports issued since initiation of the National Export Initiative (NEI) outline government-wide priorities and progress in achieving them, they do not discuss how resources are allocated in support of these priorities. 
Despite the current emphasis on export promotion as a high-priority goal, recent strategies have provided less information on budget resources than have previous strategies, as shown below. The TPCC last publicly reported a summary budget table in 2008. TPCC secretariat officials acknowledged that the TPCC agencies currently place little emphasis on displaying or discussing agencies' resources in the National Export Strategy. The TPCC last compiled high-level data on member agencies' budget authority in 2011, but this information is not useful for assessing resource allocations. To be useful, data should, among other things, be consistent and sufficiently complete for the intended purpose. However, the TPCC's data are inconsistent across agencies and not detailed enough to facilitate an understanding or comparison of how resources are allocated among priorities. TPCC agencies do not use a common definition of export promotion, so it is unclear why some agencies are included in the TPCC's data and others are not, and the TPCC's data are not current. Although agency accounting systems and budget processes differ, which presents challenges, clear guidance for agencies on what information they should provide the TPCC could improve the quality of the data. Without better information on agencies' export promotion resources, decision makers cannot determine whether the federal investment in export promotion is being used effectively or make informed decisions about future resource allocations. GAO recommends that TPCC (1) develop and distribute guidance for member agencies on what information they should provide the TPCC on the resources they spend on export promotion activities; and (2) report in its National Export Strategies on how resources are allocated by agency and aligned with the strategy's priorities. The TPCC secretariat agreed with our recommendations and stated it plans to take steps to address them.
To determine whether DHS has developed policies and established a workforce to use other transactions, we analyzed DHS’s organization, and policy and draft guidance for using these authorities. We interviewed DHS contracting officials and representatives from the DOD agencies that DHS has used for contracting support, officials in its S&T Directorate, and contractors to whom it made initial other transactions awards. We collected and reviewed other transactions agreement documents for DHS’s Countermeasures for Man-Portable Air Defense System (Counter-MANPADS) and Chemical and Biological Countermeasures (Chem-Bio) projects, the only two projects with other transactions awards as of the time of our review. We also reviewed other S&T Directorate solicitations that could result in other transactions agreements, but which had not yet resulted in awards as of the completion of our audit work. We analyzed information obtained from our interviews and file reviews using criteria that we found are generally important to federal acquisitions, namely, planning, reviews and approvals, market knowledge, and monitoring of contractor performance. We derived these criteria from our prior reports on other transactions and knowledge-based acquisition principles, DOD’s policies for other transactions, and selected parts of the FAR. To determine how effectively DHS used its other transactions authority to attract nontraditional government contractors, we analyzed DHS’s reported results from using these authorities in the Counter-MANPADS and Chem-Bio programs. We also reviewed other DHS acquisitions that could result in other transactions awards but for which DHS had not yet made awards. DHS relies on contractors to self-certify their status as nontraditional government contractors during agreement negotiation. In analyzing the reported results from DHS’s other transactions awards, we did not independently verify a contractor’s reported status as a nontraditional contractor. 
We also compared DHS’s practices to attract nontraditional government contractors against policies and practices used by DOD. In addition, we interviewed DHS contracting and project management officials, contractors that DHS made other transactions awards to, and representatives from the commercial research and development and technology communities to gain their perspectives on DHS’s use of other transactions to attract nontraditional government contractors. We performed our review from February through October 2004 in accordance with generally accepted government auditing standards. The acquisition function plays a critical role in helping federal agencies fulfill their missions. DHS is expected to spend billions of dollars annually to acquire a broad range of products, technologies, and services from private-sector entities. Other transactions authority is one of the acquisition tools—in addition to standard FAR contracts, grants, and cooperative agreements—available to DHS to help support its mission. Other transactions were created to enhance the federal government’s ability to acquire cutting-edge science and technology. They help agencies accomplish this, in part, through attracting nontraditional contractors from the private sector and other areas that typically have stayed away from pursuing government contracts. There are two types of other transactions authorities—(1) research and (2) prototype. Other transactions for research are used to perform basic, applied, or advanced research. Other transactions for prototypes are used to carry out projects to develop prototypes used to evaluate the technical or manufacturing feasibility of a particular technology, process, or system. A single S&T program could result in multiple awards using other transactions. Because they are exempt from certain statutes, other transactions permit considerable latitude by agencies and contractors in negotiating agreement terms. 
For example, other transactions allow the federal government flexibility in negotiating intellectual property and data rights, which stipulate whether the government or the contractor will own the rights to technology developed under the other transactions agreement. Table 1 shows the statutes that DHS has determined are generally inapplicable to its other transactions agreements. Because other transactions agreements do not have a standard structure based on regulatory guidelines, they can be challenging to create and administer. Experts on other transactions and industry officials who have used these procurement arrangements told us that other transactions agreement terms are significantly different from FAR contracts and more closely resemble procurement agreements between private-sector firms. According to DHS, the unique nature of other transactions agreements means that federal government acquisition staff who work with other transactions agreements should have experience in planning and conducting research and development acquisitions, strong business acumen, and sound judgment to enable them to operate in a relatively unstructured business environment. DHS views the use of other transactions as key to attracting nontraditional government contractors—typically high-technology firms that do not work with the government—that can offer solutions to meet agency needs. As defined by the Homeland Security Act, a nontraditional government contractor is a business unit that has not, for at least a period of 1 year prior to the date of entering into or performing an other transactions agreement, entered into or performed any contract subject to full coverage under the cost accounting standards or any contract in excess of $500,000 to carry out prototype projects or to perform basic, applied, or advanced research projects for a federal agency that is subject to compliance with the FAR. 
The S&T Directorate of DHS supports the agency’s mission by serving as its primary research and development arm. According to a senior DHS Chief Procurement Office official, the S&T Directorate currently is the only DHS organization using the other transactions authority provided in the Homeland Security Act. As of September 2004, other transactions agreements accounted for about $125 million (18 percent) of the S&T Directorate’s fiscal year 2004 total acquisition activity of $715.5 million. The S&T Directorate’s fiscal year 2004 total acquisition activity is depicted in figure 1. After DHS was established in 2003, the department rapidly established the S&T Directorate, which issued several solicitations using other transactions authority. These solicitations used some commonly accepted acquisition practices and knowledge-based acquisition principles. DHS issued a management directive, drafted guidance, and recruited additional program and contracting staff, which now provide a foundation for using other transactions authority; however, refinements in these policies and attention to workforce issues are needed to promote success in the department’s future use of other transactions. DHS’s policy guidance does not specify when audit requirements should be included in its other transactions agreements to help ensure, for example, that payments to contractors are accurate. Also, the department’s guidance does not address training requirements for its contracting and program staff to ensure that staff understand and leverage the use of other transactions. In addition, the limited size and capacity of DHS’s internal contracting workforce to conduct other transactions may hamper DHS’s goal to internally manage its increasing number of mission programs that could use its other transactions authority. DHS was directed by Congress and the executive branch to quickly initiate and execute R&D projects to help strengthen homeland security. 
The S&T Directorate at DHS was largely established to centralize the federal government’s homeland security R&D efforts, a function that was not the responsibility of any of DHS’s legacy agencies. Figure 2 depicts the Directorate’s four offices and their functions. The S&T Directorate initiated various projects to address homeland security concerns, including two prototype projects using other transactions authority. Initiating and executing these first projects took priority over establishing the Directorate’s operating procedures. The S&T Directorate’s need to rapidly initiate and execute projects forced a reliance on other federal agencies’ acquisition offices to award and administer its project agreements. The S&T Directorate hired program managers and staff with R&D expertise from other government agencies and the private sector to manage its other transactions authority and other acquisitions. These initial hires included several former Defense Advanced Research Projects Agency (DARPA) officials experienced in R&D and other transactions authority acquisitions. In the absence of DHS policies and procedures for other transactions, the S&T Directorate relied on these key officials and other staff with R&D expertise in their former organizations to implement its early projects. These experienced staff helped train DHS program and contracting staff in other transactions and supervised and managed the acquisition process. For example, one official drafted a model other transactions agreement and guided program managers and contracting officers through the other transactions process. In addition to these officials, the S&T Directorate obtained portfolio and program managers from other government agencies and federal laboratories to act in key programmatic positions in their areas of expertise. Some of these portfolio and program managers serve on detail from their home agency. 
The S&T Directorate’s workforce strategy is to have its program and technical staff serve term appointments, most of which will not be longer than 4 years, in order to promote the influx of leading-edge science and technology skills to DHS. DHS’s planning and budget documents identified the need to develop countermeasures and detection systems against chemical-biological (Chem-Bio) and radiological-nuclear attacks. Under one area of the Chem-Bio project, being implemented by the S&T Directorate using other transactions, DHS is developing mobile laboratories to be rapidly deployed in the field to detect and analyze chemical warfare agents and toxic industrial chemicals in the environment. Figure 3 depicts a mobile laboratory being developed for DHS. The S&T Directorate also initiated projects to address homeland security needs identified by Congress and the executive branch. One such project is aimed at protecting commercial aircraft against possible terrorist use of shoulder-fired missiles, sometimes referred to as man-portable air defense systems (MANPADS). The Counter-MANPADS other transaction project is a multiyear development and demonstration program that will produce prototype systems to be used on commercial aircraft to defend against shoulder-fired missiles. An illustration of a proposed Counter-MANPADS technology being considered by DHS is depicted in figure 4. The S&T Directorate and Office of the Chief Procurement Officer (CPO) used Federal Acquisition Regulation principles as a framework for other transactions solicitations. 
The Directorate also utilized additional acquisition tools commonly used by DARPA and other agencies, such as:

- broad agency announcements (BAA), which serve as general announcements of the Directorate’s research interest, including general principles for selecting proposals, and solicit the participation of all offerors capable of satisfying the S&T Directorate’s needs;
- a white paper process, under which firms submit to S&T brief synopses of the main concepts of a proposal introducing technology innovations or solutions; and
- payable milestone evaluations, under which the S&T Directorate’s managers measure the progress of its projects at key points before making payments to contractors.

The S&T Directorate modeled its acquisition process after DARPA’s to solicit proposals from as many industry sources as possible to meet its research needs and hosted technical workshops and bidders’ conferences for its early solicitations to help convey its technical needs to industry. An overview of the S&T Directorate’s generally used acquisition process for other transactions is in figure 5. The Homeland Security Advanced Research Projects Agency (HSARPA) and Office of Systems Engineering and Development (SED) hosted technical workshops prior to publishing some of their early solicitations to obtain information from the industry on what technical requirements were feasible to include in the solicitation. Following the issuance of the solicitations, HSARPA and SED held bidders’ conferences to answer industry questions about the solicitations. The S&T Directorate used a white paper review stage in its early solicitations, including solicitations for the Counter-MANPADS and Chem-Bio programs. According to DHS’s Chem-Bio solicitation, the use of the white paper approach allows DHS to provide firms with feedback on their proposed technologies without the firms having to incur the expense and time of writing complete proposals. 
For the Chem-Bio project, HSARPA received over 500 white papers from industry. S&T officials told us they provided each contractor that submitted a white paper for this project with feedback, giving the agency’s views on the merits of the proposed technology. HSARPA officials told us that the white paper process helps ensure that the office gets the best proposals and represents an inexpensive way for nontraditional firms to pursue business with DHS. To rapidly execute its projects, including other transactions agreements, the S&T Directorate used other federal agencies to award and administer its contracts to fill DHS’s contracting workforce gaps. DHS has interagency agreements with these agencies for their contracting services. For example, HSARPA is using the U.S. Army Medical Research Acquisition Activity, based in Ft. Detrick, Maryland, which performs acquisition services for the Army, to award other transactions instruments in support of its Chem-Bio project. In addition, DHS is using a contractor who is an expert in other transactions and R&D procurement to help draft its other transactions policy guidance and also provide assistance to administer several of its other transactions projects. The S&T Directorate incorporated some knowledge-based acquisition approaches throughout its acquisition process for using its other transaction authorities. We previously reported that an agency’s use of a knowledge-based acquisition model is key to delivering products on time and within budget. By using a knowledge-based approach, an agency can be reasonably certain about the progress of its project at critical junctures during development, which helps to ensure that a project does not go forward before the agency is sure that the project is meeting its needs. 
For example, some of the knowledge-based approaches being used by the S&T Directorate and CPO to manage their Counter-MANPADS and Chem-Bio other transaction projects are as follows:

- Integrated Product Teams (IPTs). Using IPTs to bring together in a single organization the different functions needed to ensure a project’s success is a knowledge-based acquisition best practice. The S&T Directorate formed IPTs that combine the expertise of representatives from each of its four offices to analyze customer requirements and make planning and budget decisions for the portfolio.
- Contractor Payable Milestone Evaluations. The S&T Directorate’s program managers measure the progress of its projects at key points before making payments to contractors. These milestones are usually associated with contractors satisfying certain performance criteria—commonly referred to as “exit criteria.” Examples of SED’s four payable milestones for Phase I and six payable milestones for Phase II of the Counter-MANPADS project are shown in figure 6.
- Design Reviews. HSARPA and SED program managers also use design review decision points to ensure the contractor’s product development is meeting program expectations and to determine if the product is ready to proceed to the next stage of development. (See figure 6 for the design review points in Phase I of the Counter-MANPADS project.)

In 2002 we identified key success factors for DHS to effectively create its organization, including creating strong systems and controls for acquisition and related business processes. The development of formal policies and procedures for DHS’s authority to use other transactions is guided by statute and DOD’s experiences and practices in using the other transactions authority. DOD’s extensive experiences with and policies for using other transactions provide a useful framework for the effective management of projects using other transactions. 
For example, DOD uses a guidebook for other transactions prototype projects, which provides detailed policies and procedures in areas such as criteria for using other transactions, acquisition planning, agreement execution, and reporting requirements. In 2004 DHS prepared several policy and draft guidance documents, which should help provide DHS with a structure for using its other transactions authority. In October 2004, DHS issued an other transactions management directive, which provides DHS’s policy for the use of other transactions for research and for prototype projects. The policy is generally consistent with DOD’s policy. The management directive prescribes the responsibilities of key officials in using other transactions, such as the DHS Under Secretary of Management and its Chief Procurement Officer. Specifically, under the management directive, the CPO is responsible for setting policy, conducting oversight, and approving the use of other transactions authority for each project. The management directive also provides general policies and requirements for the documentation of a strategy for using other transactions and provides the purposes and criteria for using research and prototype other transactions. DHS’s explanation of the types of other transactions and criteria for their use, if effectively implemented, should help promote its compliance with the Homeland Security Act by helping to ensure that agency officials adequately assess the utility of other acquisition vehicles—such as FAR contracts, grants, or cooperative agreements—prior to using an other transaction for research. The purposes and criteria for other transactions use as stated by DHS are shown in table 2. DHS is using a contractor experienced with other transactions to assist in the preparation of a guidebook for using other transactions for prototype projects. 
The draft guidebook, which is loosely based on the DOD guide on other transactions for prototype projects, provides a broad framework for DHS to plan and use other transactions. It covers topics such as acquisition planning, market research, acquisition strategy, and agreements analyses requirements. According to a DHS official, its draft guidebook, when completed, is not to be part of the DHS official management directive system. In addition, the contractor drafted a lessons learned report on other transactions to help DHS fully leverage the benefits and minimize any problems associated with using other transactions. DHS’s draft lessons learned report on other transactions summarizes lessons from various sources, such as federal agencies and think tanks with other transactions experience, on topics related to those discussed in the draft guidebook. Figure 7 shows the development of DHS’s other transactions policy. DHS’s management directive and draft guidebook for other transactions do not yet specify roles, responsibilities, and requirements for agency program and contracting officials in two key areas: audit and training. Addressing these areas is important since, according to DHS officials, DHS plans to issue solicitations that could result in other transactions use at an increasing rate. S&T Directorate and CPO officials acknowledged the importance of these areas and told us they intend to address them in the future.

Audit requirements. While DHS’s management directive covers Comptroller General access to contractor records under certain conditions, the directive does not address audits by other entities or specify other circumstances when audits of other transactions agreements may be needed to protect the government’s interest. For example, audits may be needed in certain other transactions agreements to help ensure that payments to contractors are accurate.
DOD’s policy for auditing prototype other transactions projects, by contrast, provides more complete guidance on audits of other transactions agreements. For example, the DOD policy states that contracting officers should include information on the frequency of audits, scope of audits, and the means by which audits are to be performed. DOD’s policy also recognizes the flexibility in negotiating other transactions agreements by allowing the contracting officer, in certain circumstances, to waive the inclusion of audit provisions if it would adversely affect the execution of the agreement. DHS’s management directive, in contrast, does not address these conditions. A DHS official told us that its contracting officers negotiate specific auditing provisions in other transactions agreements with contractors on a case-by-case basis. Also, the DOD other transactions prototype projects policy has provisions for its contracting officers to use the Defense Contract Audit Agency (DCAA) or another independent auditor to audit other transactions agreements. Although DHS has a Memorandum of Understanding with DCAA to provide contract audit services, neither DHS’s other transactions management directive nor its draft guidance contains information on the specific conditions when contracting officers should use DCAA’s or another independent auditor’s services.

Training requirements. DHS’s management directive requires other transactions contracting officers to be senior warranted contracting officers with a Level III acquisition certification who possess a level of experience, responsibility, business acumen, and judgment that enables them to operate in this relatively unstructured business environment. This staffing requirement for other transactions closely mirrors the contracting workforce staffing qualification used by DOD.
DHS’s management directive also requires its contracting staff to possess a special contracting officer certification, which can be achieved only after the staff have received appropriate training in other transactions. However, DHS has not yet developed a training program on other transactions for its contracting officers or its program managers expected to work on other transactions projects. By not establishing other transactions training requirements and schedules for its contracting and program staff to complete them, DHS may not be equipping its staff to fully understand and leverage the benefits of other transactions. We have previously reported on the importance of training and noted that leading organizations usually prioritize key processes, identify staff needing training, and establish requirements to ensure that the appropriate staff are trained. Furthermore, because S&T’s technical program personnel serve on details from other government agencies and have varying levels of experience with other transactions, appropriate training is key to help ensure that such staff uniformly and effectively use other transactions. DHS’s draft lessons learned report on other transactions states that it is critical to train contracting officers on aspects such as (1) the flexibilities associated with other transactions to help ensure the proper and optimal use of the authority, and (2) negotiating intellectual property (IP) rights, which can vary from project to project. The S&T Directorate plans an increasing number of mission programs that could use its other transactions authority, but DHS’s current contracting workforce may not be sufficient to manage this workload. DHS has relied on a small number of key S&T program personnel, who are experienced other transactions practitioners, to develop or approve solicitations. In fiscal year 2004, two of the S&T Directorate’s programs resulted in other transactions awards—Counter-MANPADS and Chem-Bio.
In fiscal year 2005, the S&T Directorate could award other transaction agreements for at least eight additional programs, which could significantly increase its contracting workload because some programs could include multiple other transactions awards. (One S&T program could result in multiple awards using other transactions, contracts, grants, or cooperative agreements as the acquisition vehicle.) For example, S&T’s ongoing Chem-Bio project has resulted in 17 other transactions awards as of August 2, 2004. Figure 8 depicts the S&T Directorate’s project workload that could involve other transactions and the corresponding CPO in-house contracting support. DHS is currently developing a plan to address contracting workforce issues. Senior DHS officials told us that their strategy is to generally have in-house contracting staff award and administer all of the S&T Directorate’s other transactions and R&D projects by fiscal year 2006. Currently, CPO has dedicated six contracting staff—some of whom are warranted contracting officers dedicated to conducting other transactions—to support S&T acquisitions on a temporary basis. CPO and S&T Directorate officials told us that they intend to increase this staff support to 15 staff by the end of fiscal year 2005. As cited in DOD policy and DHS’s guidance, acquisition staff that award and administer other transactions need special skills and experience in business, market acumen, and knowledge of intellectual property issues. CPO and S&T Directorate officials told us that contracting officers with these skills and experience are difficult to find in the current acquisition workforce. In addition, they noted lengthy delays in DHS’s ability to process needed security clearances for these staff, which caused some contracting officer candidates to accept positions elsewhere. 
DHS’s challenges in developing its acquisition workforce are similar to other federal agencies’ experiences in managing attrition and retirements affecting their acquisition workforces. As a result, DHS will continue to rely on other agencies for contracting support until the end of fiscal year 2006. For example, for its Chem-Bio other transactions project, the S&T Directorate is using DOD’s U.S. Army Medical Research Acquisition Activity for contracting support. According to DHS’s S&T Directorate and CPO officials, the offices are in the process of drafting a Memorandum of Understanding regarding the contracting personnel that CPO will dedicate to support the S&T Directorate’s projects. DHS included nontraditional government contractors in its two initial other transactions projects. But DHS is not capturing knowledge learned from these acquisitions that could be used to plan and execute future projects. The S&T Directorate has conducted outreach to engage nontraditional government contractors in its early projects, including briefing industry associations, setting up a Web site to facilitate contractor teaming, and conducting project-specific workshops. However, the S&T Directorate does not systematically capture and use knowledge learned from its acquisition activities for use by program staff. The S&T Directorate’s Counter-MANPADS and Chem-Bio projects included nontraditional government contractors in all of the initial awards at the prime and subcontractor levels. For example, in February 2004 DHS made three Phase I awards for the Counter-MANPADS project to contractor teams led by BAE Systems, Northrop Grumman, and United Airlines (a nontraditional contractor). BAE Systems and Northrop Grumman, which are traditional contractors, included nontraditional contractors on their teams.
Nontraditional government contractors serve significant roles in the Counter-MANPADS and Chem-Bio projects, such as leading the aircraft integration team incorporating the countermeasure technology into commercial aircraft in the Counter-MANPADS project. Table 3 shows the composition of the Counter-MANPADS project contractor teams. An intent of Congress in granting other transactions authority to DHS was to attract firms that traditionally have not worked with the federal government. The use of other transactions may help attract high-tech commercial firms that have shied away from doing business with the government because of the requirements mandated by the laws and regulations that apply to traditional procurement contracts. According to DHS officials, early DHS other transactions award recipients, and industry association officials, two primary barriers to nontraditional contractors pursuing government contracts are:

Intellectual Property (IP) Rights. IP rights refer to access to information or data used in the performance of work under a contract. We previously reported on contractors’ reluctance to pursue government R&D funding because the FAR’s IP provisions could give the government rights to certain information and data, which could decrease their businesses’ competitive advantage. For example, a nontraditional contractor without prior federal R&D contracting experience under the FAR who won one of DHS’s early other transactions awards told us that the flexibility to negotiate IP rights was critical to its participation because it allowed the contractor to negotiate IP rights favorable to its company.

Cost Accounting Standards (CAS). CAS are the federal government’s accounting requirements for the measurement, assignment, and allocation of costs to contracts.
According to contractors and procurement experts outside the government that we interviewed, nontraditional firms generally do not operate accounting systems in compliance with the federal government’s CAS, and developing such systems can be cost prohibitive. For example, a nontraditional contractor who won an initial DHS other transactions award told us developing a CAS-compliant accounting system would have required the establishment of a subsidiary firm to perform its accounting functions. DHS’s Science and Technology Directorate used extensive outreach to attract nontraditional contractors to participate in its projects. It briefed industry groups, conducted project-specific workshops, and used Web sites to publicize the agency’s needs. In the fall of 2003, shortly after the S&T Directorate was established, its HSARPA sponsored separate 1-day briefings to business and academia to help engage the private sector in R&D to satisfy DHS’s needs. These sessions were designed to gather input on best practices to optimize the solicitation, procurement, and program execution aspects of its projects. For example, at these sessions DHS officials presented information on its organization and approach to program management, such as the roles and responsibilities of agency officials and managers; investment and research priorities; available solicitation methods, such as requests for proposals, broad agency announcements, and research announcements; and possible procurement vehicles, including FAR contracts, grants, cooperative agreements, and other transactions. The S&T Directorate supplemented these sessions by conducting project- specific industry workshops and other outreach events. For example, in October 2003, the S&T Directorate held an industry day session for its Counter-MANPADS project. 
The session provided participants with background on the project, the structure of the DHS organization that would manage it, the program’s goals and schedule, and an overview of other transactions for prototypes. DHS presented detailed information on the nature and requirements of other transactions agreements, firms that may qualify as a nontraditional contractor, and laws that would not apply to other transactions. In addition, the S&T Directorate gave an overview of the other transactions solicitation process to be used for the project, which covered topics such as the white paper process, oral presentations, and the proposed other transactions agreement. DHS attracted almost 200 participants to this event—approximately 85 percent of whom were from industry. Also, in September 2003, DHS held a bidders conference for its Chem-Bio project where it described its technical requirements and the solicitation process for this project. According to an agency official, the conference gave DHS the opportunity to obtain input from the private sector on the technical aspects of its solicitation and to answer participants’ questions about the solicitation. Similarly, DHS held technical workshops for projects that may result in other transactions awards, such as those intended to counter threats from truck, suicide, and public transportation bombs and to design cyber security systems. DHS also created and used Web sites to publicize its activities and procurement needs. For example, DHS created the “DHS—Open for Business” site, which centralizes information on its contracts, grants, small business opportunities, and R&D efforts. According to DHS, this site is intended to complement governmentwide portals such as Federal Business Opportunities, known as FedBizOpps. In addition, HSARPA created a solicitation and teaming portal Web site to help attract firms (www.hsarpabaa.com). 
On this site, HSARPA announces its current project solicitations and offers a teaming portal where contractors can learn about possible partners to bid on DHS work. This site also contains links to other DHS programs to facilitate industry participation in its projects, such as its Small Business Innovation Research program, which DHS established in December 2003 to increase the participation of innovative and creative small businesses in its R&D programs. Also, the site has a mailing list function where contractors can register to receive e-mail notices of upcoming HSARPA solicitations. We found that industry’s views vary on the effectiveness of DHS’s outreach efforts. Some contractors and industry associations we interviewed said these outreach efforts are having a positive impact on the procurement process. For example, an industry association head in the technology field told us that DHS’s use of Broad Agency Announcements and other flexible solicitation methods to publicize its technology and research needs may help to attract nontraditional contractors. Officials from two technology associations told us commercial firms that traditionally do not work with the federal government believe that government officials have preconceived ideas of exactly what technology they need and which contractors they want to work with. However, one of the officials stated that DHS’s use of the BAA process demonstrates to industry that the agency desires to hear all the possible technology solutions that may meet its needs. Other industry officials believed that DHS’s outreach actions could be improved, for example, if DHS took additional actions to inform industry that it has other transactions authority and developed a more user-friendly process to attract broader interest in its projects.
Representatives of a large industry association we interviewed were not aware that DHS possesses other transactions authority and said if this fact were more widely known, it could increase industry’s interest in working with DHS. In addition, representatives of some small companies told us that the fee DHS charges to attend its outreach events could pose a barrier to attending them. Also, several contractors we interviewed told us that DHS’s teaming portal site is a good idea in concept but found it cumbersome to maneuver in the automated system. However, two of the nontraditional contractors we interviewed that received a DHS other transactions award used this site to help identify industry partners for their team. The S&T Directorate’s capacity to build and sustain knowledge for use in its future acquisitions involving other transactions is in the early stages of development. The Directorate has not yet developed policies or procedures to ensure that program and portfolio managers are capturing and assessing critical information and knowledge gained from its acquisition activities, including the use of other transactions, for use in future projects. Knowledge gained from prior other transactions acquisitions on issues ranging from seeking nontraditional government contractors to assessing project outcomes is key to planning future projects. A knowledge base of important lessons learned from outreach to private-sector firms, the acquisition process, and the design and execution of projects can facilitate the work of program and acquisition staff in planning future acquisitions using other transactions authority. DHS’s draft guidebook on other transactions for prototypes acknowledges the importance of documenting knowledge gained during the acquisition process for planning future other transactions acquisitions.
We have also reported on the benefits of agencies using systematic methods to collect, verify, store, and disseminate information for use by their current and future employees. Our previous work has identified the importance of setting goals and identifying performance indicators that will inform federal agencies of whether they have achieved the performance they expected. S&T Directorate officials acknowledge the need to create a “corporate memory” function to provide future staff with access to information and knowledge obtained from its current projects and to incorporate such knowledge into its training efforts. The S&T Directorate’s workforce-staffing strategy necessitates that it have a policy and procedure in place to capture employees’ knowledge. Under its current workforce strategy, the S&T Directorate’s technical staff serves regularly rotating term appointments that typically do not exceed 4 years. This approach, according to S&T Directorate officials, is designed to promote the influx of leading-edge science and technology skills to DHS. S&T Directorate officials recognize that these rotations can place a burden on its contracting staff that plan, conduct, and manage highly specialized other transactions programs by having to continually guide new technical staff on the workings of the process. However, these officials have told us that there is no policy or process yet in place to ensure that the capturing and sharing of such knowledge occur. The S&T Directorate’s current practices for capturing knowledge gained from its acquisition efforts vary. In establishing its structure the S&T Directorate drew its technical staff from a variety of organizations, each of which used different acquisition approaches. Consequently, portfolio managers and program managers we spoke with did not consistently capture knowledge acquired. 
In addition, the S&T Directorate’s efforts to assess the effectiveness of its industry outreach activities involving the use of other transactions authority are not rigorous enough to capture information needed in planning future outreach. By not assessing its activities, S&T cannot be assured that it is reaching the broadest base of firms to provide technological solutions for the S&T Directorate’s needs. Without policies and a supporting process to capture the experiences and knowledge gained from its acquisition efforts, DHS may not capitalize on lessons learned from its early use of other transactions. Given the S&T Directorate’s planned rotations of its key technical staff, building and maintaining institutional knowledge are critical to ensuring that new S&T Directorate staff have the ability to quickly learn about previous other transactions acquisitions when designing future projects. For example, the S&T Directorate invests funding and staff resources to advertise its organization and projects to help attract firms but does not fully assess the effectiveness of these activities for use in planning future projects. Figure 9 depicts the S&T Directorate’s acquisition process and a possible knowledge management function for collecting, storing, and sharing information. Recognizing the flexibility offered by other transactions authority to tap nontraditional sources to meet its needs for new homeland security technologies, DHS moved quickly to use this authority to build its science and technology capabilities. In doing so it signaled its seriousness about using other transactions authority to advance its strategic objectives. However, to sustain the progress made to date DHS needs to take additional actions, such as completing the necessary foundation of policies and procedures, including guidance on audit provisions, and ensuring that it has an adequately trained and staffed acquisition function.
Furthermore, given its strategy of using regularly rotating term appointments in staffing its S&T programs, long-term success will depend on the department’s ability to harness its institutional knowledge on other transactions. DHS’s ability to identify, prioritize, and access the most promising research and technologies in the future will depend, in part, on its ability to capture and make accessible critical knowledge on the agency’s use of other transactions authority to ensure that it is accessing the broadest and most appropriate technologies in the marketplace. By completing its foundation for using other transactions and creating a means for capturing key knowledge and measuring performance, DHS will be better prepared to capitalize on the full potential of the private sector to provide the innovative technology it needs to secure the homeland. To promote the efficient and effective use by DHS of its other transactions authority to meet its mission needs, we have three recommendations for the Secretary of Homeland Security. The Secretary should direct the Under Secretary for Management and the Under Secretary for Science and Technology to (1) establish guidance on when it is appropriate to include audit provisions in other transactions agreements; (2) develop a training program for DHS staff in the use of other transactions to help ensure the appropriate use of this authority; and (3) capture knowledge obtained during the acquisition process for use in planning and implementing future other transactions projects. We provided a draft of this report to DHS for its review and comment. DHS provided written comments generally agreeing with the facts and conclusions expressed in the draft report. DHS agreed with our first two recommendations and noted that it is already working to address them.
Regarding our recommendation that DHS capture knowledge obtained during the acquisition process for use in planning and implementing future projects that could use other transactions, DHS agreed with the utility of retaining such historical information and “lessons learned” about its procurement activities, acquisition planning, execution, and program management activities. DHS stated that while no formal system for assembling such information is in place within the organization, this information is being monitored. However, DHS sought further clarity about the types of information we recommend it retain and to what end it is to be used. Based on our review of DHS’s early use of its other transactions authority, we believe that systematically capturing, analyzing, and making readily available knowledge about using this authority is needed. We recognize that the S&T Directorate’s work and focus cuts across various technology areas, which are continuously evolving, making each solicitation’s requirements unique. We also recognize and appreciate DHS’s concern over the administrative aspects of collecting, maintaining, and monitoring this information over time. We believe, however, that DHS can build upon its current informal system of monitoring acquisition information. Specifically, we think DHS could collect and disseminate information on what has worked and not worked in areas such as outreach efforts. This information could be useful for future other transactions projects. For example, if DHS wants to ensure that its outreach attracts firms who have a recognized core competency desired by S&T, including nontraditional government contractors, it may want to use forms of outreach that have been used successfully in the past. We believe this information could be particularly important given the S&T Directorate’s workforce-staffing strategies, under which its technical staff serves regularly rotating term appointments. 
DHS also provided technical revisions to our draft report, which we incorporated as appropriate. The department’s comments are reprinted in appendix I. We are sending copies of this report to other interested congressional committees; the Secretaries of Homeland Security and Defense; and the Director, Office of Management and Budget. We also will make copies available to others on request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4841, or John K. Needham, Assistant Director, at (202) 512-5274. Other major contributors to this report were Rachel Augustine, Eric Fisher, Alison Heafitz, John Krump, Robert Swierczek, and Anthony J. Wysocki.
The Homeland Security Act of 2002 authorized the Department of Homeland Security (DHS) to establish a pilot program for the use of acquisition agreements known as "other transactions." Because they are exempt from many of the requirements that apply to government contracts, other transactions can be useful in acquiring cutting-edge technologies from entities that traditionally have declined to do business with the government. The act requires GAO to report to Congress on the use of other transactions by DHS. To fulfill this obligation, GAO (1) determined if DHS has developed policies and established a workforce to manage other transactions effectively and (2) evaluated how effectively DHS has used its other transactions authority to attract nontraditional government contractors. The Department of Homeland Security has issued policy and is developing a workforce to implement its other transactions authority, but the department's policies need further development and its contracting workforce needs strengthening to promote the successful use of the authority in the future. Soon after it was established, DHS issued other transactions solicitations using some commonly accepted acquisition practices and knowledge-based acquisition principles. Subsequently, the department issued a management directive and drafted guidance for using other transactions, loosely modeled on the practices of the Department of Defense (DOD), one of several other agencies with other transactions authority and the one with the most experience with using these agreements. Unlike DOD, however, DHS has not specified in its policies or guidance when its contracting staff should consider the use of independent audits to help ensure, for example, that payments to contractors are accurate. Similarly, DHS has not established training requirements to aid staff in understanding and leveraging the benefits of other transactions. 
The DHS contracting workforce is limited in size and capacity, which could impede the department's ability to manage a potential increase in its other transactions workload. DHS is taking steps to enhance the capacity of its contracting workforce. The DHS Science and Technology Directorate included nontraditional government contractors in its first two other transactions projects. The Directorate engaged in extensive outreach efforts, such as conducting briefings on its mission and research needs to industry and academic institutions and using a number of Web-based tools to publicize its solicitations. But DHS has not yet developed mechanisms to capture and assess the knowledge gained about the use of other transactions. As a result, DHS may not be able to leverage information from current projects for use in future solicitations that use other transactions.
The Social Security Disability Insurance (DI) and Supplemental Security Income (SSI) programs are the two largest federal programs providing cash payments to people with long-term disabilities. The DI program, authorized in 1956 under title II of the Social Security Act, provides monthly cash insurance benefits to insured, severely disabled workers. The SSI program, authorized in 1972 under title XVI, provides monthly cash payments to aged, blind, or disabled people whose income and resources fall below a certain threshold. About 2.5 million people apply to the Social Security Administration (SSA) each year for disability benefits. Between 1985 and 1995, the number of DI beneficiaries increased about 53 percent to about 5.0 million, and the number of working-age SSI recipients increased 81 percent to 2.4 million. In 1995, SSA distributed about $61 billion to these and other disability beneficiaries and spent $3 billion on program administration, which accounted for more than half of SSA’s total administrative expenses. Both the DI and SSI programs are administered by SSA and state disability determination services (DDS), which determine benefit eligibility. DDSs award benefits to about 35 percent of applicants. Denied applicants may appeal to an administrative law judge (ALJ) in SSA’s Office of Hearings and Appeals (OHA). About a third of all applicants found not disabled by DDSs appeal to an ALJ, and almost two-thirds of claimants who appeal to an ALJ are subsequently found disabled. Cases appealed to ALJs add considerably to SSA’s administrative expense and increase the time claimants must wait for a decision. The average initial DDS decision in DI cases costs about $540, while a hearing can cost an additional $1,200. In addition, appeals can add an average of 378 days to the length of time that an applicant must wait for a final decision. 
Moreover, because ALJs award a high percentage of appealed cases that have already been denied twice by the DDS, the integrity of the process is called into question. Claimants apply for DI and SSI disability benefits in SSA field offices, which forward these applications, along with any supporting medical evidence, to the appropriate state DDS. A DDS adjudication team, consisting of a disability examiner and a medical or psychological consultant, makes the initial decision on each claim. If the DDS denies a claim, the claimant may ask for reconsideration. For the reconsideration review, a new team of DDS adjudicators makes an independent decision on the basis of its own evaluation of all the evidence, including any new evidence the claimant might submit. If, after reconsideration, a DDS denies benefits, the claimant may pursue several levels of appeal (see table 1.1) and may introduce new evidence at almost every level. First, the claimant has the right to request a hearing before an ALJ. Before the hearing, the ALJ may obtain further medical evidence, for example, from the claimant’s own physician or by hiring a consultative physician to examine the claimant. The hearing before the ALJ is the first time that a claimant has an opportunity for a face-to-face meeting with an adjudicator. SSA hearings are informal and nonadversarial; SSA does not challenge a claimant’s case. The claimant and witnesses—who may include medical or vocational experts—testify at the hearing. The ALJ asks about the issues, receives relevant documents into evidence, and allows the claimant or the claimant’s representative to present arguments and examine witnesses. If necessary, the ALJ may further update the evidence after the hearing. When this is completed, the ALJ assesses the effects of the claimant’s medical impairment on capacity to function at work. 
The ALJ then issues a decision based on his or her assessment of the evidence in the case and is generally authorized to do so without seeking input from a medical professional. If an ALJ denies an appealed claim, the claimant may request that SSA’s Appeals Council review the case. The Appeals Council may deny or dismiss the request, or it may grant the request and either remand the case to the ALJ for further action or issue a new decision. The Appeals Council’s decision, or the decision of the ALJ if the Appeals Council denies or dismisses the request for review, becomes SSA’s final decision. After a claimant has exhausted all SSA administrative remedies, the claimant has further appeal rights within the federal court system, up to and including the Supreme Court. Overall, about 49 percent of all applicants receive benefits, most (71 percent) from initial or reconsideration decisions made at the DDS level. About 22 percent of all applicants appeal their cases to ALJs; about two-thirds of all claimants whose claims are denied at the DDS reconsideration level appeal to an ALJ. Overall, about 29 percent of all awards in 1996 were made on appeal. Figure 1.1 shows an overview of the disability decision-making appeals process. ALJs at SSA conduct de novo (or “afresh”) hearings; in other words, they may consider or develop new evidence, and they are not bound by DDS decisions. In addition, although ALJs are SSA employees and generally subject to the civil service laws, the Administrative Procedure Act (APA) protects their decisional independence by exempting them from certain management controls. For example, ALJ pay is determined by the Office of Personnel Management independently of SSA recommendations or ratings, and ALJs are not subject to statutory performance appraisal requirements. 
Such safeguards help ensure that ALJ judgments are independent and that ALJs would not be paid, promoted, or discharged arbitrarily or for political reasons by an agency. ALJs operate under rules that differ from those of appellate courts. After a DDS denial is appealed, an ALJ at SSA holds a de novo hearing, entitling the claimant to have all factual issues determined anew by the ALJ. In contrast, appellate courts generally review the findings of lower courts and only consider whether those courts made errors of law or procedure. Under the ALJ de novo process, the claimant receives a full in-person hearing from an adjudicator who is fully authorized to hear every aspect of the case. The ALJ hearing is the first time a new claimant is guaranteed the right to testify before an adjudicator. As SSA employees, ALJs make decisions for the Commissioner and are subject to agency rules and regulations that they must apply in holding hearings and making decisions. Review by the Appeals Council ensures that ALJ decisions follow SSA regulations and rulings. If the Council concludes that the ALJ has not followed agency rules and regulations, the Council can reverse the ALJ decision on its own or send the case back to the ALJ for further action. Although the ALJ’s review and analysis of an appealed denial must include the case file materials developed by the DDS, the ALJ makes new factual determinations. For example, even though a DDS concludes that an individual can perform work, the ALJ is free to conclude that the individual cannot. The differences between DDS and ALJ results are a long-standing problem contributing to the growth in OHA backlogs and increased case-processing time, according to our 1996 report on SSA’s efforts to reduce backlogs in appealed decisions. Our review of over 40 internal and external studies of the disability determination and appeals process, several of which were completed more than 20 years ago, led us to this conclusion. 
In the early 1990s, as part of its efforts to develop a number of strategic priority goals, SSA reviewed many of the same studies and identified inconsistent decisions as a critical issue affecting SSA’s ability to improve its service to the public. Inconsistent decisions have been evident in program data for many years. For example, since 1986, DDS award rates have ranged from 31 to 43 percent, whereas ALJ award rates have ranged from 60 to 75 percent. As shown in figure 1.2, ALJ awards, as a percentage of total awards, have ranged from 17 percent in 1986 to 29 percent in 1996. Concerns about comparatively high ALJ award rates are not new. Although many hypotheses for inconsistent decisions have been discussed, explanations for the high rate of ALJ awards have been inadequate or unavailable. In early 1979, congressional hearings focused on high ALJ award rates, and, in 1980, the Congress passed legislation aimed at promoting greater consistency and accuracy of ALJ decision-making. This legislation required SSA to establish a system of reviewing ALJ decisions to ensure that they comply with laws, regulations, and SSA rulings. In January 1982, SSA submitted to the Congress the results of a study on progress made in reviewing ALJ decisions, including the possible causes for ALJ reversals. Soon after SSA started to perform the quality reviews required by legislation, the Association of ALJs filed suit in federal court. The lawsuit challenged SSA’s plans to target these reviews to judges with high award rates on the grounds that such reviews threatened ALJs’ decision-making independence. The court never ruled on this issue because SSA decided to rescind targeted reviews. ALJ award rates fell temporarily from 62 percent in 1981 to 55 percent in 1983 when SSA was performing its targeted reviews, although other factors could explain the decline. 
When targeted reviews ended in 1984, however, ALJ award rates started to increase again and have remained at high levels ever since. Not only do award rates between DDSs and ALJs differ, but the rates also differ by impairment type and other factors. For example, although DDS award rates vary by impairment, ALJ award rates are high regardless of the type of impairment. As shown in table 1.2, DDS award rates ranged from 11 percent for back impairments to 54 percent for mental retardation. In contrast, ALJ award rates averaged 77 percent for all impairment types with a smaller variation among impairment types. When age is considered in addition to impairment type, decisions can vary even more widely. Table 1.3 illustrates, for example, how widely DDSs and ALJs can diverge when age is considered in back impairment cases. SSA has long known about its inconsistent decisions and the problems they pose for the disability programs and the agency. SSA has studied the problem and taken several steps to address factors known to contribute to inconsistency between DDS and ALJ adjudicators. In May 1992, SSA’s Commissioner approved a study of the appeals process, later called the Disability Hearings Quality Review Process (DHQRP). This study analyzed the reasons for high ALJ award rates. SSA has issued two reports based on this study, which is ongoing. Realizing that the inconsistency between DDS and ALJ decisions and the length and complexity of the decision-making process compromised the integrity of disability determinations, SSA began redesigning the process in 1993. In late 1994, it released its Plan for a New Disability Claim Process—commonly referred to as the “redesign plan”—which represents the agency’s long-term strategy for addressing the systemic problems contributing to inefficiencies in its disability processes. 
To direct the redesign effort, SSA created a management team assisted by top SSA management, various task teams, and state and federal employees involved with disability determinations. To address inconsistent decisions as a part of redesign, the agency established a process unification task team. This team included a diverse group of 29 SSA and DDS employees who, in addition to their own expertise, sought information from other sources and reviewed data from SSA’s DHQRP study of the appeals process. In November 1995, the task team issued its final report. SSA established an intercomponent group to develop specific actions to support consistent disability decisions and a senior executive group to enforce needed changes. In July 1996, the SSA Commissioner approved the group’s recommendations for several initiatives designed to reduce inconsistent decisions by DDSs and ALJs. In addition to SSA’s recent efforts to address inconsistent DDS and ALJ decisions, the agency faces significantly increasing workloads at all levels of adjudication. In particular, several congressional mandates will compete for time and resources with process unification efforts. For example, the Social Security Independence and Program Improvements Act of 1994 and the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 require hundreds of thousands more continuing disability reviews (CDR) to ensure that beneficiaries are still eligible for benefits. By law, SSA must conduct CDRs for at least 100,000 more SSI beneficiaries annually through fiscal year 1998. In 1996, the Congress increased CDR requirements for children on SSI, requiring CDRs at least every 3 years for children under age 18 who are likely to improve and for all low birth weight babies in the first year of life. 
In addition, SSA is required to redetermine, using criteria for adults, the eligibility of all 18-year-olds on SSI beginning on their 18th birthdays and to readjudicate 332,000 childhood disability cases by August 1997. Finally, thousands of noncitizens and drug addicts and alcoholics could appeal their benefit terminations, further increasing SSA’s workload. The Government Performance and Results Act (the Results Act) of 1993 requires federal agencies to be more accountable for the results of their efforts and their stewardship of taxpayer dollars. The Results Act shifts the focus of federal agencies from traditional concerns, such as staffing and activity levels, to results. Specifically, the act directs agencies to consult with the Congress and obtain the views of other stakeholders and to clearly define their missions. It also requires them to establish long-term strategic goals as well as annual goals linked to the strategic goals. Agencies must then measure their performance toward these goals and report to the President and the Congress on their progress. The Results Act’s initial implementation involves about 70 pilot tests during fiscal years 1994 through 1996 to provide agencies with experience in meeting its requirements before governmentwide implementation in the fall of 1997. As a pilot agency, SSA submitted its fiscal year 1996 annual performance plan to the Office of Management and Budget in May 1995. Specifically, the plan includes the strategic goals of (1) rebuilding confidence in Social Security, (2) providing world-class service, and (3) creating a supportive environment for SSA employees. It also includes a broad range of measures for disability and appeals-related performance outputs and outcomes. In 1995 testimony before the Subcommittee on Social Security, House Committee on Ways and Means, we reported on the timeliness and consistency of DDS and ALJ disability determinations. 
After our testimony, the Chairman asked us to examine the differences between DDS and ALJ decisions in more detail. Specifically, we agreed to (1) ascertain the factors contributing to inconsistent decisions by DDSs and ALJs and (2) identify SSA’s efforts to address inconsistent decisions. We reported our preliminary findings in testimony earlier this year. To respond to the first objective, we divided the possible contributing factors into three types: (1) factors related to differences in residual functional capacity (RFC) assessments made by DDSs and ALJs, (2) procedural factors that contribute to differences in decisions, and (3) use of quality reviews to manage the process. In conducting our review, we examined existing studies, SSA’s regulations and program operations memoranda, and court cases related to the disability programs. We also obtained and analyzed program and statistical data; see appendix I for details. In addition, we interviewed DDS and SSA officials, including ALJs and OHA staff. We also attended SSA’s nationwide process unification training. We performed our review at SSA headquarters in Baltimore, Maryland; OHA headquarters in Falls Church, Virginia; and at SSA and DDS offices in Atlanta, Boston, and Denver. We conducted our review between October 1995 and June 1997 in accordance with generally accepted government auditing standards except that we did not verify agency data. SSA requires that DDS and ALJ adjudicators follow a standard approach—called the sequential evaluation process—for making disability determinations. Although standard, the process requires adjudicators to make several complex judgments. For example, if adjudicators cannot allow the claim on the basis of medical evidence only, they must make judgments on whether claimants can perform prior or other work available in the national economy despite their disabling conditions. 
Such determinations may involve not only residual functional capacity (RFC) assessments, but consideration of these assessments along with the claimant’s age, education, and skill levels. To reduce the amount of judgment involved, SSA has developed medical-vocational rules. In general, the older, less educated, and less skilled the claimant, the more likely these rules will direct the adjudicator to award benefits. For claimants with functional and vocational profiles that do not fit the rules, however, adjudicator decision-making is less prescribed. In addition, before making any decision, adjudicators must decide how much weight to give to various sources of evidence and evaluate the reasonableness and consistency of any allegations the claimant makes about pain or other symptoms. To determine whether applicants meet the Social Security Act’s definition of disability, SSA regulations provide DDS and ALJ adjudicators with a sequential evaluation process (see table 2.1). Although the process provides a standard approach, determining disability requires a number of complex judgments. For people 18 or older, the act defines disability under the DI and SSI programs as the inability to engage in substantial gainful activity by reason of a severe physical or mental impairment that is medically determinable and has lasted or is expected to last at least 1 year or result in death. Moreover, the impairment must be of such severity that a person not only is unable to do past relevant work, but, considering age, education, and work experience, is also unable to engage in any substantial work available in the national economy. Applicants are denied benefits at step 1 if they are engaged in substantial gainful activity. At step 2, adjudicators further screen applicants by assessing whether they have a severe impairment, defined by the regulations as an impairment that has more than a minimal effect on the applicant’s ability to perform basic work tasks. 
For those whose impairments have more than a minimal effect on ability to work, adjudicators then begin determining whether the applicant’s impairments are severe enough to qualify for disability benefits. In step 3 of the sequential evaluation process, adjudicators compare the applicant’s medical condition with medical criteria found in SSA’s Listing of Impairments—referred to as “the medical listings”—which are published in SSA’s regulations. The listings delineate over 150 categories of medical conditions (physical and mental) that, according to SSA, are presumed to be severe enough to ordinarily prevent an individual from engaging in any gainful activity. For example, corrected vision of 20/200 or less, amputation of both hands, or an intelligence quotient of 59 or less would ordinarily qualify an individual for benefits. An applicant may automatically qualify for benefits if the adjudicator concludes that the laboratory findings, medical signs, and symptoms of one of the applicant’s impairments meet the specific criteria for medical severity cited in the listings for that impairment and the applicant is not engaging in substantial gainful activity. If an applicant’s medical condition does not meet the listed criteria or if the impairment is not listed, then the adjudicator must determine whether the applicant’s impairment is the medical equivalent of one in the listings. The medical severity criteria for listed mental impairments are generally more subjective than those for physical impairments. For most mental impairments in the listings, many of the severity criteria are defined by functional limitations. 
Determining whether a mental impairment meets or equals the listed criteria often requires subjective evaluations about (1) restrictions of daily activities; (2) difficulties in maintaining social functioning; (3) deficiencies in concentration, persistence, or pace that result in failure to complete tasks in a timely manner; and (4) episodes of deterioration in work settings that cause the individual to withdraw or experience exacerbated signs and symptoms. For example, adjudicators must decide whether the impairment has any impact at all on activities of daily living or on social functioning, and, if so, rate the impact as slight, moderate, marked, or extreme. By contrast, the listed criteria for physical impairments generally are more objective, relating to medical diagnosis and prognosis rather than to the assessment of functional limitations, as in the mental listings. Determining whether the medical findings for a physical impairment meet or equal these criteria is a matter of documentation and is often more a question of medical fact than opinion. In some instances, however, the criteria for physical impairments also require that adjudicators assess functional limitations. For example, for applicants with human immunodeficiency virus, adjudicators assess their symptoms or signs, such as fatigue, fever, malaise, weight loss, pain, and night sweats as well as their subsequent effect on activities of daily living and social functioning. For musculoskeletal and other impairments, adjudicators assess the importance of pain in causing functional loss when it is associated with relevant abnormal signs and laboratory findings. Adjudicators must also carefully determine that the reported examination findings are consistent with the applicant’s daily activities. When medical evidence does not show that an applicant’s condition meets or equals the severity criteria in the listings, adjudicators must determine whether the applicant can perform past work. 
To do this, adjudicators use judgment when they assess an applicant’s RFC—that is, what an applicant can still do, despite physical and mental limitations, in a regular full-time work setting. To assess RFC, adjudicators must consider all relevant medical and nonmedical evidence, such as statements of lay witnesses about an individual’s symptoms. In considering medical evidence, adjudicators must evaluate medical source opinions and judge the weight to be given to each opinion. Adjudicators also often evaluate issues involving pain or other symptoms and judge whether the applicant’s impairment could reasonably be expected to produce the applicant’s symptoms. Assessing physical RFC requires adjudicators to judge individuals’ ability to physically exert themselves in activities such as sitting, standing, walking, lifting, carrying, pushing, and pulling. Adjudicators also assess the effect of the individual’s physical impairment on manipulative or postural functions such as reaching, handling, stooping, or crouching. Assessing mental RFC requires adjudicators to judge the individual’s functional abilities such as understanding, remembering, carrying out instructions, and responding appropriately to supervision, coworkers, and work pressures. After assessing an applicant’s RFC, the adjudicator compares it with the demands of the applicant’s prior work. The adjudicator either concludes that the applicant can perform his or her prior work and denies the claim or proceeds to the last step (step 5) in the sequential evaluation process. At step 5, adjudicators evaluate whether applicants unable to perform their previous work can do other jobs that exist in significant numbers in the national economy. If the adjudicator concludes that an applicant can perform other work, the claim is denied. 
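Taken together, steps 1 through 5 form an ordered series of screens, each of which can end the determination. A minimal sketch of that control flow (the field names are illustrative, not SSA's; each boolean stands in for a judgment that, as described above, requires evidence and adjudicator discretion):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    engaging_in_sga: bool          # step 1: substantial gainful activity
    severe_impairment: bool        # step 2: more than minimal effect on basic work tasks
    meets_or_equals_listing: bool  # step 3: Listing of Impairments
    can_perform_past_work: bool    # step 4: RFC vs. demands of prior work
    can_perform_other_work: bool   # step 5: other work in the national economy

def sequential_evaluation(claim: Claim) -> str:
    """Illustrative walk through the five-step sequential evaluation process."""
    if claim.engaging_in_sga:
        return "deny at step 1"
    if not claim.severe_impairment:
        return "deny at step 2"
    if claim.meets_or_equals_listing:
        return "allow at step 3"
    if claim.can_perform_past_work:
        return "deny at step 4"
    if claim.can_perform_other_work:
        return "deny at step 5"
    return "allow at step 5"
```

In practice, steps 4 and 5 are where the RFC assessment and the medical-vocational rules come into play, and where most of the adjudicator judgment discussed in this chapter is exercised.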
Again, adjudicators must apply judgment to determine whether an applicant can perform other work in the national economy, depending on whether the applicant’s limitations are exertional or nonexertional. An applicant has exertional limitations when his or her impairment limits the ability to perform the physical strength demands of work. For this evaluation, SSA places a claimant into one of five categories of physical exertion—sedentary, light, medium, heavy, and very heavy—with sedentary work requiring the least physical exertion of the five levels (see table 2.2). On the basis of an applicant’s RFC, adjudicators must judge which of the five exertional categories is the most physically demanding work the individual can perform. For an applicant whose maximum physical ability matches one of the five exertional categories of work, SSA provides medical-vocational rules that direct the adjudicator’s decision on the basis of the claimant’s age, education, and skill levels of prior work experience. Table 2.3 shows how the medical-vocational rules direct decisions for people aged 50 or older who are limited to sedentary work. In general, the older a person is, the more likely SSA’s medical-vocational rules direct adjudicators to award benefits. For example, under the rules for those whose maximum physical capacity limits them to performing sedentary work, applicants aged 50 or older qualify for benefits under four of the scenarios shown in table 2.3. Those aged 45 through 49, however, qualify under only one scenario; applicants aged 18 through 44 qualify under no scenario (see table 2.4). Although SSA’s medical-vocational rules reduce the degree of judgment that adjudicators must use in many cases, SSA has no rules to direct adjudicators’ decisions for other cases. 
These include cases in which (1) the applicant’s maximum strength capability does not match any of the five exertional levels or (2) the applicant’s primary limitations are nonexertional (or unrelated to the physical strength demands required for sitting, standing, walking, lifting, carrying, pushing, and pulling). In such cases, the medical-vocational rules can provide a guide for evaluating an applicant’s ability to do other work, but the regulations instruct adjudicators to base their decisions on the principles in the appropriate sections of the regulations, giving consideration to the medical-vocational rules for specific case situations. For example, an applicant may be restricted to unskilled sedentary jobs because of a severe cardiovascular impairment. If a permanent injury of the right hand also limits the applicant to only those sedentary jobs that do not require bilateral manual dexterity, then the applicant’s work capacity is limited to less than the full range of sedentary work. The ability to do less than the full range of sedentary work is not one of the five exertional levels defined in SSA’s regulations; therefore, no medical-vocational rules would direct the adjudicator’s decision. On the basis of Department of Labor data, SSA estimates that approximately 200 unskilled occupations exist, each representing many jobs that can be performed by people whose limitations restrict them to the full range of sedentary work. But, if an applicant is limited to less than the full range of sedentary work, the adjudicator must determine the extent to which the exertional and nonexertional limitations reduce the occupational base of jobs, considering the applicant’s age, education, and work experience, including any transferable skills or education providing for direct entry into skilled work. The mere inability to perform all sedentary unskilled jobs is not sufficient basis for a finding of disability. 
The applicant still may be able to do a wide range of unskilled sedentary work. Before making any decision, an adjudicator must assess the amount of weight to give to the various sources of evidence and evaluate the reasonableness and consistency of any allegations from applicants about pain or other symptoms. To provide a basis for determining disability, the adjudicator must gather existing medical evidence, which includes (1) opinions of physicians or psychologists who have had an ongoing treatment relationship with the applicant and (2) reports from hospitals, clinics, and other medical sources that have treated or evaluated the applicant but not on an ongoing basis. In addition, adjudicators may develop new medical evidence obtained from consulting sources. Medical evidence includes (1) medical history; (2) clinical findings, such as the results of physical or mental status examinations; (3) laboratory findings, such as blood pressure and X rays; (4) a statement of the diagnosis of the disease or injury based on its signs and symptoms; and (5) treatment prescribed and prognosis. Medical evidence also includes statements from treating physicians or other medical sources describing work-related activities, such as sitting, standing, walking, and lifting, that the applicant can still do despite his or her impairments. In the case of mental impairments, statements should describe the applicant’s ability to understand, carry out, and remember instructions and respond appropriately to supervision, coworkers, and work pressures. In making a decision, an adjudicator must assess how much weight to give to each medical source’s statement of opinion. Table 2.5 describes the factors to be considered in weighing opinions. Adjudicators also must evaluate whether an applicant’s impairment could reasonably be expected to produce the reported symptoms—such as pain, fatigue, shortness of breath, weakness, and nervousness. 
This requires the adjudicator to assess the extent to which an individual’s symptoms are consistent with (1) the objective medical evidence (medical signs and laboratory findings); (2) evidence, such as statements from the applicant, medical sources, family, friends, or employers about the applicant’s medical history, diagnosis, prescribed treatment, activities of daily living, and efforts to work; (3) information from social welfare agencies, nonmedical sources, and other practitioners, such as chiropractors and audiologists; and (4) any other evidence of the applicant’s impairment’s effect on his or her ability to work. If the adjudicator concludes that the impairment could reasonably be expected to produce the reported symptoms, the adjudicator must then evaluate the intensity and persistence of the symptoms to determine how the symptoms limit the applicant’s ability to work. In making such an evaluation, adjudicators look for objective medical evidence obtained through clinical and laboratory diagnostic techniques, such as evidence of reduced joint motion, muscle spasm, sensory deficit, or motor disruption. However, adjudicators cannot reject an applicant’s statements about the intensity and persistence of pain or other symptoms or about the effect of these symptoms on the ability to work solely because the available objective medical evidence does not substantiate the applicant’s statements. Because symptoms reported by the applicant sometimes suggest a more severe impairment than can be shown by objective medical evidence alone, adjudicators must carefully consider any other information provided by the applicant, treating sources, or other people about the applicant’s pain or other symptoms. 
Following are the factors that adjudicators must consider in assessing pain and other symptoms: activities of daily living; location, duration, frequency, and intensity of the pain or other symptoms; precipitating and aggravating factors; type, dosage, effectiveness, and side effects of any medication the applicant takes or has taken to alleviate pain or other symptoms; treatment, other than medication, the applicant is receiving or has received for relief of pain or other symptoms; any measures the applicant uses or has used to relieve pain or other symptoms, such as lying flat on back, standing for 15 or 20 minutes every hour, and sleeping on a board; and other factors concerning the applicant’s functional limitations and restrictions due to pain or other symptoms. SSA studies show that DDS and ALJ decisions most often differ because adjudicators reach different conclusions about applicants’ ability to function in the workplace. At the DDS and ALJ levels, two different types of professional staff perform residual functional capacity (RFC) assessments. At the DDS, medical staff perform the assessments; at the ALJ level, the ALJ performs them. ALJs may seek the advice of medical experts, but they do so infrequently. Study results also suggest that DDSs and ALJs differ in their assessments of the opinions of applicants’ own physicians. SSA has conducted studies of the differences between DDS and ALJ decisions and has identified key issues. To improve consistency of decisions, the agency has recently published policy clarifications, conducted training for all disability adjudicators, and is now starting to evaluate the impact of this training. SSA also plans to develop a single presentation of policy to be used by both DDSs and ALJs. Differing DDS and ALJ assessments of a claimant’s capacity to function in the workplace are the primary reason for most ALJ awards. 
Under the sequential evaluation process, almost all DDS denial decisions appealed to ALJs include an RFC assessment. On appeal, ALJs also follow the same sequential evaluation process and assess the claimant’s functional ability in most awards they make. Both the ongoing Disability Hearings Quality Review Process (DHQRP) study and a study conducted by SSA in 1982 note the importance of differences in assessing RFC. (See app. II for more details on these studies’ results.) Decisions in cases involving physical impairments clearly reflected differences in assessing RFC. Table 3.1 presents data from SSA’s DHQRP study on physical impairment cases in which ALJs made awards on the basis of RFC assessments. The table compares the ALJ decisions with those of reviewers who used the DDS approach and examined the written evidence available to the ALJ. These data indicate that ALJs are significantly more likely than DDS medical consultants to find that applicants have very limited work capacity. In the view of awarding ALJs, 66 percent of the cases merited a “less than the full range of sedentary work” assessment—a classification that often leads to an award. In contrast, the medical consultants who performed the RFC assessment using the DDS approach found that less than 6 percent of cases merited this classification. The DDS and ALJ adjudicators also differed in the other classifications. In addition, high ALJ award rates for claimants with mental impairments often reflect different assessments of functional limitations. Even ALJ mental impairment awards based on the listings reflect these differences because most such listings require adjudicators to assess functional limitations in addition to determining the claimant’s medical condition. A study known as the Bellmon Report, which controlled for differences in evidence, also found that differing RFCs played a role in differing DDS and ALJ decisions. 
This study found that DDS and ALJ adjudicators reached different results even when presented with the same evidence. As part of the study, two groups of reviewers looked at selected cases. One group reviewed the cases as ALJs would, and the other reviewed the cases as DDSs would. Reviewers using the ALJ approach concluded that 48 percent of the cases should have received awards; reviewers using the DDS approach concluded that only 13 percent of those same cases should have received awards. We identified specific differences in DDSs’ and ALJs’ approaches to their decisions. First, medical staff have different roles at the two levels. In addition, DDSs and ALJs respond differently to (1) the opinions of claimants’ physicians and (2) claimants’ statements about symptoms such as pain. Medical experts play different roles in the DDS and ALJ decision-making approaches. At the DDS, medical or psychological consultants assess applicants’ RFC. In contrast, ALJs may consult with medical experts but have sole authority to make the RFC finding. ALJs sought the advice of medical experts in only 8 percent of cases resulting in awards, according to our analysis. Both the Bellmon and DHQRP studies compared RFC assessments made by SSA medical staff using the DDS approach with those made by awarding ALJs. According to both studies, medical staff tended to find that claimants had higher capacities to function in the workplace than the ALJs found. Under SSA regulations, adjudicators must consider the opinions of treating physicians who have an ongoing treatment relationship with the claimant.
Such an opinion might include, for example, a statement that a claimant “cannot stand or walk for more than two hours total in a day.” In the disability determination, adjudicators must give controlling weight to these treating source opinions provided they are (1) well supported by medically acceptable clinical and laboratory diagnostic techniques and (2) consistent with the other substantial evidence in the record. A treating physician’s statement, however, that a claimant is “disabled” or “unable to work” does not bind adjudicators. Nevertheless, treating physicians’ opinions seem to influence DDSs and ALJs differently. The DHQRP study found that the treating physician’s report was one of the five most frequent reasons for ALJ awards. This suggests that ALJs tended to give controlling weight to the treating physician’s opinion, while DDS adjudicators were more likely to assess that opinion in conjunction with other medical evidence in the case file. A second factor contributing to differing DDS and ALJ decisions is the impact of symptoms (for example, pain, fatigue, or shortness of breath) reported by the claimant but not identifiable in laboratory tests or confirmable by medical observation. As with the opinions of the claimant’s own physician, the assessment of symptoms is important in the disability decision. Adjudicators must assess symptoms by determining (1) whether the medically determinable impairments could reasonably be expected to produce such symptoms and (2) the intensity, persistence, and functionally limiting effects of the symptoms. According to SSA, adjudicators must assess the claimant’s credibility on the basis of the entire case record to make a determination about these symptoms’ effects. DDSs generally make such assessments on the basis of the case file (for example, statements made by applicants on the application or reports from medical sources that record applicants’ comments).
ALJs have additional evidence because they have the opportunity to consider the claimant’s testimony in a hearing. Moreover, claimant credibility has a significant impact on ALJ decisions. The DHQRP study identified the credibility of the claimant and claimants’ allegations about pain as two of the top five reasons for an ALJ allowance decision. The impact of these reasons on DDS decisions is more difficult to assess. However, during the DHQRP study, reviewers using the DDS approach listened to tapes of claimant testimony in a small sample of 50 cases. The study concluded that claimant testimony had no or minimal impact on those adjudicators. SSA adjudicators use two different sets of documents as criteria for disability decisions, which some believe contributes to inconsistent decisions. DDS adjudicators must follow a detailed set of policy guidelines, called the Program Operations Manual System (POMS). The POMS for disability contains detailed interpretations of laws, regulations, and rulings as well as procedural instructions on deciding cases. ALJs, on the other hand, rely directly on the laws, regulations, and Social Security Rulings (SSR) for guidance in making disability decisions. The latter documents are generally shorter and much less prescriptive than the POMS. This difference in policy documents, along with the difference in decisions between the DDSs and ALJs, has led some to believe that there are two standards, or at least two different interpretations of policy. In a 1994 Inspector General survey of DDS and ALJ opinion, over half of those surveyed considered the DDSs’ strict application of the POMS, as opposed to the ALJs’ direct application of disability law and regulations, to have a strong effect on allowance rates.
Similarly, the Bellmon Report stated that, “SSA has long recognized that the standards and procedures governing decisions by DDSs and ALJs are not entirely consistent.” The type and extent of these differences have proven difficult to quantify, however. For example, the Bellmon Report identified significant differences in DDS and ALJ decisions based on impairments considered not severe. The study then identified differences in the regulations and POMS on this issue. The study concluded, however, that the two written standards, “while different, (were) not widely divergent.” As such, it remains unclear whether the differences derive from the standards or from their differing application. Nevertheless, although their relative impact has not been quantified, policy differences cannot be discounted as a potential reason for inconsistent decisions. SSA has taken or planned several initiatives to make disability decisions more consistent. In July 1996, SSA issued nine SSRs to address several of the factors we identified as contributing to inconsistent decisions. For example, one of the new rulings reminds ALJs that they must obtain expert medical opinion in certain types of cases. Another ruling clarifies when adjudicators must give the opinion of a treating physician special consideration. A third ruling states that an RFC of less than the full range of sedentary work is expected to be relatively rare. SSA also plans to issue a regulation to provide additional guidance on assessing RFC for both DDSs and ALJs, specifically clarifying when a less-than-sedentary classification is appropriate. In addition, partly on the basis of the nine rulings, SSA completed nationwide process unification training between July 10, 1996, and February 26, 1997. SSA officials pointed out that this training was the first time that the agency had brought together DDS and ALJ staff to share their views. 
The training represented a major effort—15,000 adjudicators and quality reviewers received 2 full days of training, coordinated by facilitators in SSA headquarters using a broadcast system. SSA has also started to evaluate the impact of the new rulings and training by collecting data from periods before and after their introduction. Furthermore, SSA recently compared the policy language in the POMS with disability law, regulations, and SSRs and concluded that no substantive differences in policy existed. SSA did find some differences in wording and detail, however, that could lead to a perception of differences. To address this matter, SSA plans to develop a single policy presentation to be used by both DDSs and ALJs. To this end, the agency is using exactly the same words in any new regulation, ruling, and POMS publication. It has already done this, for example, for the SSRs on which the process unification training was based. SSA eventually plans to have all adjudication policy in the form of regulations or SSRs so that it is binding on ALJs as well as DDS adjudicators. In the longer term, SSA also plans, under its redesign effort, to develop new, more valid and reliable functional assessment instruments relevant to today’s work environment. Because current differences in RFC assessments are the main reason for inconsistent decisions, however, SSA should proceed cautiously and test any new decision-making methods to determine their effect on consistency as well as on award rates before widespread implementation. ALJs often cannot fully understand how DDS denial decisions have been made because DDS written evaluations provide neither clear explanations nor justifications for the findings and conclusions reached. Therefore, the evaluations often do not lay a solid foundation for subsequent appeals.
For instance, the basis of the DDS’ residual functional capacity (RFC) assessment is often unclear, leaving the ALJ without full understanding of the reasoning that led to the DDS denial. Furthermore, explanations of how the DDS considered evidence that ALJs might later rely on, such as the opinions of the claimants’ own physicians, may often be missing from the case file or not fully developed. As a result, ALJs often cannot rely on the evaluations as developed by the DDSs. SSA has plans to change the process to improve the documentation of DDS evaluations so they can better serve as a foundation for ALJ decisions. These plans include requiring clear DDS explanations of the reasoning used to support reconsideration denials and improving development of evidence at the DDS. SSA also plans to return a selected number of cases involving new evidence from the ALJ level to DDSs for their reconsideration. Together, these procedural changes should help DDS decisions better serve as a foundation for appeals, improving the consistency of DDS and ALJ decisions. As discussed in chapter 3, inconsistent decisions between DDSs and ALJs are due mainly to differences in RFC assessments. Studies show that DDS medical consultants often inadequately explain their conclusions, including those about an applicant’s RFC. Such explanations, if improved, could be more useful in ALJ decision-making. In fact, SSA’s policy is that an ALJ, when making an RFC assessment, must consider the opinion of the DDS medical consultant. To this end, SSA requires DDS medical consultants to record explanations of their reasoning. In particular, the agency asks medical consultants to fully describe how they used the medical evidence to draw their conclusions about an applicant’s RFC. RFC forms and procedures require that medical consultants discuss in writing how the medical evidence in the case file supports or refutes an applicant’s allegations of pain or other symptoms.
Finally, the RFC forms also require medical consultants to explain how conflicts between the treating physician’s opinion and other medical evidence in the case file were resolved. Disability Hearings Quality Review Process (DHQRP) data, however, indicate that existing SSA procedures do not ensure that DDS decisions are well documented. Specifically, procedures require the disability examiner to prepare supplementary explanations when the resolution of key issues is not well documented elsewhere in the case file. The DHQRP study of appealed reconsideration denials found that in about half the cases that hinged on complex issues—such as conflicts with the treating physician’s opinion, assessment of RFC, and weighing of allegations regarding pain or other symptoms—DDS documentation failed to explain how these issues were resolved. The insufficient documentation of the underlying medical analyses limited their usefulness during the appeal process. Although ALJs use the medical evidence assembled by DDSs, they often base their decisions on additional documentary or testimonial evidence. This both contributes to inconsistent decisions and makes it difficult to reconcile those differences. Procedures at the hearings level, such as longer time frames for evidentiary development and permitting the introduction of new information, result in the availability of new documentary evidence for appeal cases. In addition, testimony during the face-to-face hearing and the opportunity it provides for further assessing the claimant’s credibility provide new information not in DDS case files. SSA studies show that in many instances introducing additional documentary evidence at the hearing level results in an ALJ’s awarding benefits. DHQRP data show that about three-quarters of the appealed cases sampled contained new evidence.
The study estimated that 27 percent of the hearing awards hinged on additional evidence, resulting in an assessment of a more severe impairment or a more restrictive RFC. In addition, the Bellmon Report found that when new evidence was removed from the case file, the ALJ award rate decreased from 46 to 31 percent. This study also found that approximately three-quarters of new documentary evidence was medical in nature rather than, for example, statements of friends and associates. One reason that appeals cases have additional evidence is that ALJ procedures allow more time for evidence development. Although SSA regulations stipulate that “every reasonable effort” be made to obtain necessary evidence, DDS guidelines state that evidence should generally be gathered within 30 calendar days. ALJ guidelines, however, provide a time frame for evidence gathering that is almost twice as long and can be extended if necessary. In addition, ALJs responding to an Inspector General (IG) survey believed that DDSs often fail to adequately develop evidence to show the true nature and extent of an applicant’s disability. The ALJs attributed some of this to a lack of adequate resources at the DDSs and pressures to dispose of cases. Also, surveyed ALJs said that DDS problems with developing evidence, particularly medical evidence, contribute to their reversals of DDS denials. In an earlier survey we conducted of DDS administrators, almost two-thirds responded that workload and staffing pressures had affected the accuracy of denial decisions. Seven DDS administrators (14 percent) said the harmful effect on the accuracy of denial decisions was great or very great. Finally, the presence of attorneys or others who represent the claimant’s interests may also result in the presentation of new evidence during an appeal. Because attorneys are generally paid only when decisions favor their clients, they are motivated to find and present additional evidence.
Although few claimants hire attorneys or other representatives at the DDS level, DHQRP data showed that representatives attended 81 percent of ALJ hearings. With few exceptions, ALJ hearings present a claimant’s first opportunity for face-to-face contact with a disability adjudicator. Studies show that face-to-face encounters with claimants appear to account for a significant number of ALJ reversals. Specifically, in the DHQRP study, reviewing ALJs believed that a favorable assessment of the claimant’s credibility was a factor in 34 percent of sampled hearing allowances. Although DDSs and ALJs also assess credibility from case file information, testimony received at a hearing appears to especially influence ALJs when assessing the credibility of a claimant’s subjective allegations, such as the effect of pain on functioning. The IG’s 1994 report showed that nearly 60 percent of ALJs surveyed believed that the claimant’s appearance before an ALJ strongly affects awards; 90 percent believed it has a moderate to strong effect. Furthermore, the Bellmon Report found that the ALJ award rate decreased by about 17 percentage points when evidence from the claimant’s record of testimony was removed from the case file. Because claimants may offer new documentary and testimonial evidence at an ALJ hearing, they can also change their impairment type or add a new, secondary impairment, which also affects the consistency of DDS and ALJ decisions. Moreover, in about 10 percent of cases appealed to the ALJ level, claimants switch the basis of their primary impairment from a physical claim to a mental claim. Under current procedures, the DDS has no routine opportunity to consider these switched claims and incorporate that consideration into its analysis, which would give the ALJ a basis for confirming or rejecting the new impairment claim.
In addition to inadequately explained RFC assessments and new evidence submitted on appeal, we examined other factors that could contribute to inconsistent decisions. We could not attribute any significant effect, however, to other factors, such as the worsening condition of claimants and the lack of government representation at hearings. Because claimants must often wait several months—on average almost a year—for an ALJ hearing, it seems reasonable to conclude that some ALJ awards could be explained by the claimants’ condition deteriorating during that time. Worsening conditions, however, are not a major contributor to ALJ awards, according to our examination of program data. About 93 percent of ALJ awards had onset dates—dates on which the ALJ had determined the individual had become disabled—that preceded the DDS decision, suggesting that the ALJ had decided the individual was already disabled when the DDS denied the case. If worsening conditions were a major factor contributing to ALJs awarding benefits, we might expect to see ALJ-determined onset dates coming after the date of the final DDS denial. Because such onset dates are relatively rare, however, little basis seems to exist for concluding that worsening conditions influence many ALJ awards. Moreover, neither the Bellmon Report nor the DHQRP study discussed worsening conditions as a key factor influencing ALJ awards. SSA officials noted that an ALJ award based on a worsening condition may also have followed a DDS denial based on the assumption that the claimant’s impairment would improve within 12 months (individuals are not considered disabled if their impairment is expected to last less than 1 year). If the expected improvement did not, in fact, occur, then the ALJ award would have correctly been based on the original alleged date of onset. About 10 percent of ALJ awards are made to individuals whose claims the DDS had denied on the basis of the duration requirement, according to our analysis of program data.
This 10 percent, however, represents a maximum because available program data did not allow us to isolate the impact of other factors—such as new information introduced at the ALJ level—that could have been the main reason for the ALJ award. Although the ALJ is expected to consider SSA’s interests during the hearing, the agency is not formally represented. The presence of a government attorney or other advocate to represent SSA at hearings has been discussed over the years as a way of improving the ALJ hearing process. Although claimants have the right to representation, SSA relies on the ALJ to fully document the case, considering the claimant’s as well as the government’s best interests. In the early 1980s, SSA initiated a pilot project at selected hearing offices to test the effect of SSA representation at hearings. At a 1985 congressional hearing, SSA released preliminary information from the pilot suggesting that ALJ awards made in error could be cut by 50 percent if SSA were represented at appeal hearings. Acting under a July 1986 court injunction, however, SSA halted the pilot project. The court concluded that SSA representation, as implemented, violated procedural due process. In May 1987, SSA decided to end the project, stating that the administrative resources committed to it could be better used elsewhere. As a result, the preliminary results were never verified, and a final report was never issued. SSA plans to take several actions so that DDS and ALJ procedures better ensure decision-making consistency, including requiring more detailed DDS rationales, returning selected appealed cases to the DDS for consideration of new evidence introduced at ALJ hearings, and using a “predecision interview” conducted by a disability examiner. To improve explanations of DDS decisions, SSA plans to require more detailed DDS rationales.
New guidelines for all reconsideration denials are to require DDS adjudicators to write rationales explaining how they made their decisions, especially how the medical consultants assessed RFC, treating physician opinion, pain, and other factors. On the basis of feedback from the process unification training, SSA plans further instructions and training for the DDSs on the bases for their decisions and where in the case files this information should go. SSA issued a ruling in July 1996 clarifying that ALJs must consider the findings of fact made by DDS medical and psychological consultants as expert opinion evidence of nonexamining sources, and it plans to issue a regulation to further clarify the weight given by ALJs to the DDS medical consultants’ opinions. To ensure that DDSs have an opportunity to review all relevant evidence before an ALJ hearing, SSA plans to return selected appealed cases to the DDS for consideration of new documentary evidence introduced at ALJ hearings. This would avoid the need for a more costly and time-consuming ALJ decision in cases where the DDS would award benefits. If the DDS cannot allow the returned claim, however, the DDS medical consultant must provide a revised assessment of the case’s medical facts. SSA plans to implement this project in May 1997, at which time it would begin selecting about 100,000 of the roughly 500,000 appealed cases per year for return to the DDSs. Moreover, SSA’s decision to limit such returns to about 100,000 cases may need to be reassessed in light of the possible benefits that could accrue from this initiative. SSA also plans to test the use of a “predecision interview” of the claimant by a disability examiner before a claim is denied. This interview would provide an opportunity for the DDS to routinely obtain and consider testimonial evidence. It would also allow the DDS the chance to better ensure that claimants understand how decisions about their cases are made and what evidence might be relevant.
This could improve the claimants’ ability to provide complete and relevant information and to raise all relevant disability claims earlier in the disability determination process. SSA could use its ongoing quality reviews to focus more closely on differences in DDSs’ and ALJs’ assessments of functional capacity and on their procedures, thereby improving its management of the decision-making process and reducing inconsistent decisions between DDSs and ALJs. Current quality reviews, however, focus on the DDS and ALJ decision-making processes in isolation from one another and do not reconcile differences between them. To better manage the process and reduce inconsistencies, SSA also needs a quality review system that focuses on the overall process and provides feedback to all adjudicators on factors that cause differences in decisions. SSA has data and mechanisms in place that it could use to begin integrating its quality reviews and to provide feedback to DDSs and ALJs. In the longer term, SSA plans to systematically review decision-making at all levels through a new quality review system. SSA has several quality review systems that review DDS and ALJ disability decisions. As shown in table 4.1, each of the reviews has a different purpose. None was developed to identify and remedy the factors that contribute to differences in DDS and ALJ decisions. At the DDS level, staff who report to SSA’s Office of Program and Integrity Reviews (OPIR) perform a quality assurance review to promote the accuracy and consistency of DDS determinations. The review uses continuous random samples of completed award and denial actions. On the basis of errors found during this review, SSA computes accuracy rates for each DDS, which it compares with performance standards. DDSs that fall below standards for two consecutive quarters are subject to increased SSA oversight and may be removed from making disability decisions.
In addition, DDS staff perform a pre-effectuation review, or PER (that is, a review conducted before benefit payments are made), of awards to protect the solvency of the DI trust fund. Under this review, staff review 50 percent of DI awards (not SSI-only cases) to prevent payment of erroneous awards. At the ALJ level, quality review focuses heavily on claims denied by ALJs and appealed to SSA’s Appeals Council. Claimants whose claims are denied by an ALJ and who want to appeal the denial must apply to the Appeals Council before bringing their claim to a federal court. The purpose of this final agency review is to ensure that the case file fully supports the ALJ denial decision before a possible court appeal by the claimant. On the basis of this review, the Appeals Council may, among other things, reverse the denial decision or remand the case to the ALJ for further action. In addition, like the PER at the DDS level, the Appeals Council performs a PER of ALJ awards. Unlike the 50-percent sample used for the DDS-level PER, however, the Appeals Council review samples only a portion of DI-only awards, totaling about 3 percent of all ALJ DI awards to people under age 59. As shown in table 4.1, DDS reviews emphasize awards; the ALJ reviews, however, emphasize denials. This may inappropriately give DDSs an incentive to deny claims and ALJs an incentive to award claims, in both instances to avoid scrutiny by quality reviewers. Available evidence, however, does not support this conclusion. Before SSA instituted the PER of DDS award determinations in fiscal year 1981, national accuracy rates were generally higher for initial denials than for awards. After the PER was instituted, this situation reversed. By 1983, awards were more likely to be accurate than denials. This trend may suggest that instituting the review caused a decline in the accuracy of denials while increasing the accuracy of awards.
Other factors could have influenced these accuracy trends, however, including workload pressures and program changes. In addition, the difference between the denial and award accuracy rates is slight. In fiscal year 1996, the denial accuracy rate was only 2.9 percentage points lower than the award accuracy rate. Moreover, data from the DHQRP study suggest that the evidence supports ALJ awards and denials equally. As part of that study, reviewing ALJs assessed 3,000 ALJ awards and 3,000 denials and found virtually the same support rates for both types of cases: 81 percent of awards and 82 percent of denials were supported by substantial evidence. How DDS and ALJ quality reviews operate reflects the differences in how decisions are made at the two levels. First, quality reviewers use the same decision-making approach as those they are reviewing. Therefore, they sustain the differences in approach discussed earlier rather than reconcile them. For example, the Appeals Council, mirroring the approach of the ALJs, infrequently consults with medical experts. Second, DDS reviews do not examine the possible impact at the ALJ level of weaknesses in evidence or the explanation of the decision. As a result, SSA misses the opportunity to use quality reviews to strengthen procedures so that DDS decisions better serve as a basis for ALJ consideration. The staff and approach used in SSA’s quality reviews of DDS decisions mirror those used in the DDS process. SSA review teams, composed of disability examiners and physician consultants, assess the quality of DDS decisions using the same policies and procedures that DDSs use in making their decisions. For example, when review staff examine a DDS decision, a physician consultant on the team has final authority regarding the correctness of the residual functional capacity (RFC) assessment made by the DDS medical consultant. 
Likewise, SSA’s Office of Hearings and Appeals (OHA) staff perform ALJ reviews in a manner that mirrors the ALJ process. Staff at OHA screen decisions for conformance with the same standards and procedures used by ALJs, then refer cases that merit further review to the Appeals Council, which consists of attorneys. Similar to ALJs, Appeals Council reviewers have sole authority for assessing a claimant’s RFC, and they seek medical input infrequently. The Appeals Council’s medical staff and contract physicians were consulted in about 17 percent of the cases reviewed by the Appeals Council, according to our analysis of available SSA data. In addition, although SSA’s Office of Disability is responsible for promulgating a uniform decision-making policy, management control of reviews is split between OPIR, which reports to the Deputy Commissioner for Finance, Assessment, and Management, and the Appeals Council, which reports through OHA to another Deputy Commissioner. The two review groups have not routinely met to identify and resolve issues related to inconsistent decisions. SSA’s quality reviewers examine the evidence gathered by the DDS to determine whether the end result complies with SSA regulations and guidelines. Although SSA’s reviewers assess the adequacy of the DDS’s explanation of the initial decision, the reviewers consider the DDS to have made an accurate decision whether it is well explained or not. If a DDS medical consultant fails to adequately explain the basis for the RFC assessment—but the decision nonetheless appears correct and based on adequate evidence—the reviewers do not charge the DDS with an error affecting its performance accuracy. This approach focuses on performance accuracy; it does not provide DDSs with routine, systematic feedback on inadequate RFC explanations because SSA does not return cases to DDSs for correction solely because RFC explanations are inadequate.
Instead, if reviewers return a case to a DDS because of other types of errors, such as inadequate evidence to support the decision, the returned case would include comments on inadequate RFC explanations by DDS medical consultants, according to SSA officials. Otherwise, the only way that reviewers might provide feedback on inadequate RFC explanations is during periodic visits to DDSs. Consequently, SSA lacks a routine, systematic mechanism for giving DDSs timely information on the adequacy of their RFC explanations. Likewise, Appeals Council reviews have not emphasized ALJs’ consideration of DDS medical consultants’ opinions. First, the Appeals Council samples few ALJ awards for review. Such reviews could identify differences between the DDS medical consultant’s opinion and the ALJ view. Second, even if the Appeals Council might want to consider the views of DDS medical consultants, the lack of explanation gives the Council little to review. In addition, SSA’s quality reviews of DDSs’ performance accuracy do not focus on weaknesses in DDS evidence gathering from the standpoint of whether the evidence could later contribute to ALJ reversals. Instead, reviewers of DDS decisions focus on whether the evidence in the file supports the DDS’s own decision. They do not consider whether gaps in evidence may become significant in a later appeal. For example, if the file indicates that the claimant has a treating physician, but the treating physician’s report is missing from the file, quality reviewers do not automatically cite this as a performance accuracy error. Instead, they determine whether the totality of evidence in the file supports the DDS’s decision. If the decision is supported adequately—despite the missing evidence—the reviewers do not charge the DDS with a performance accuracy error, though this lack of evidence could become significant at the ALJ level. 
Although the DDS decision may be technically accurate, it may also be vulnerable to reversal on appeal, a factor that the current quality assurance system does not consider in assessing the overall quality of DDS decisions. In keeping with procedures, reviewers of DDS decisions also determine whether the DDS has made a reasonable effort to obtain the evidence. In assessing the reasonableness of the effort, however, the reviewers again do not focus on the potential impact of the missing information if the case were to be appealed. Such a focus would be necessary for both identifying and reconciling differences in decisions. SSA has taken or planned several actions to reduce decisional inconsistency, including addressing factors that we identified as important contributors to the inconsistency. First, the agency has started to systematically gather information on this subject. In 1992, SSA established the Disability Hearings and Quality Review Process (DHQRP), which collects data on ALJ decisions and on the DDS reconsideration denial decisions that preceded them. DHQRP provides a data-driven foundation to identify inconsistency issues and focus on strategies for resolution. According to quality reviewers, SSA has continued this process and anticipates issuing more reports in the future. In addition, SSA is completing work on a notice of proposed rulemaking, with a target issue date of August 1997 for a final regulation to establish the basis for reviewing ALJ awards, which would require ALJs to take corrective action on remand orders from the Appeals Council before benefits are paid. As envisioned, disability examiners and physician consultants as well as reviewing judges will review ALJ awards. In November 1996, SSA began an initial start-up period for this effort and, after the regulation is issued, plans to target about 10,000 cases for review during the first year. 
Unlike existing quality reviews, the new process aims to identify and reconcile factors that contribute to differences between DDS and ALJ decisions. When the reviewers find ALJ awards they believe are unsupported, they send these cases to the Appeals Council. If the Appeals Council disagrees with the conclusions of the quality reviewers, the case is referred to a panel of SSA disability adjudicators from various SSA units. This review process can reveal significant policy issues because the panel will receive cases in which the reviewing Appeals Council judge disagrees with the reviewing examiner and medical consultant. On the basis of issues identified, SSA could issue new or clarified policies or provide adjudicators with additional training. In addition, SSA’s process unification effort calls for returning certain cases to the DDS when new evidence is provided at the hearing level. In the longer term, SSA envisions instituting a new quality review system that will systematically review decision-making at all levels. One focus of the new system is making the right decision the first time. SSA estimates this new system will help reduce the percentage of awards made by ALJs, while increasing the percentage made by DDSs. Under SSA’s model, when this redesign is fully implemented, the percentage of all awards made by ALJs would decline from around 29 to 17 percent, and the percentage made by DDSs would increase from 71 to 83 percent. The agency has not explicitly established this as a goal, however. Inconsistent decisions between DDSs and ALJs are a long-standing problem for SSA management with implications for the fairness, integrity, and costs of both the decision-making process and the program overall. The high rate of awards on appeal raises questions about the fairness of the process because many claimants are awarded benefits only after a lengthy appeal. 
Moreover, persistent inconsistencies between the two levels can undermine confidence in the integrity of the decision-making process. Furthermore, the later the case is finally decided in the appeals process, the more expensive it is to adjudicate. SSA can make more progress than it has in the past by unifying the decision-making process at both the DDS and ALJ levels. At the same time, progress in reducing inconsistent decisions will be limited to some extent by factors inherent in the program. Disability decisions are inherently complex and require adjudicators to exercise judgment on a range of issues. As a result, expectations about the level of agreement possible in such a program should acknowledge this reality. Moreover, the process involves a large number of decisionmakers: more than 15,000 adjudicators, quality reviewers, and others, including over 1,000 ALJs, make these complex decisions nationwide. SSA has developed process unification initiatives that, if implemented, could significantly improve the consistency of decisions. Competing workloads at all levels of adjudication, however, could jeopardize progress in this important area. SSA should capitalize on the momentum it has recently gained and give consistency of decisions the sustained attention it requires as an essential part of redesign. For example, the agency has ongoing data gathering and review mechanisms in place that could produce real progress in this area. SSA has not established explicit outcome-oriented goals or measures, however, to assess its progress in achieving consistent decisions. We believe the strategic planning process required under the Government Performance and Results Act can be a useful vehicle to help focus management attention on the results SSA hopes to achieve through process unification and to monitor its progress toward reaching these results. 
In this context, SSA needs to establish performance goals to measure its progress in shifting the proportion of cases awarded from the ALJ to the DDS level. SSA could then monitor its progress and make corrections if its actions do not achieve the desired results. Using quantifiable performance goals to measure results would place a high priority on this issue and bolster public confidence in SSA’s commitment to achieve more consistency in DDS and ALJ decision-making. Under process unification, SSA plans to ensure that the DDS decisions are better explained and thus more useful to ALJs. Workload pressures at the DDSs, however, may make full and thoughtful explanations of their decisions difficult. SSA will need to consider ways to reduce these pressures if the agency’s plans are to be effective. At the ALJ level, SSA’s plans to return cases to the DDSs are important, given the significance of new evidence as a possible reason for awards. SSA’s decision to limit such returns to about 20 percent of cases, however, could reduce the effectiveness of this initiative. In addition, SSA plans to improve its quality reviews but could move more quickly to implement these plans. SSA has never had a unified system of quality reviews, despite studies documenting inconsistent decisions. Specifically, in 1982, the Bellmon Report identified problems in the consistency of less-than-sedentary residual functional capacity (RFC) assessments, and the Disability Hearings and Quality Review Process (DHQRP) reinforced this finding in 1994. However, SSA has not effectively used its quality reviews to focus on this problem or taken action to resolve it. Similarly, DHQRP identified problems with DDS rationales, but no systematic feedback has been provided on this issue. The DHQRP results give SSA an adequate foundation and an ongoing review mechanism to begin unifying quality reviews between the DDSs and ALJs without further delay. 
SSA could, for example, use the DHQRP findings on less-than-sedentary awards to sharpen and focus current Appeals Council reviews. The agency could also focus on the adequacy of DDS decision explanations in its unified quality review program. We are also concerned that, without adequate planning and evaluation, some redesign initiatives could have unintended consequences. For example, under redesign, SSA intends to develop new, more valid, and reliable functional assessment/evaluation instruments that are relevant to today’s work environment. The agency intends to rely heavily on these instruments in decision-making. But, because differences in RFC assessments are the main reason for ALJ awards, SSA should proceed cautiously. As such, it should test any new decision methods to determine their effects on consistency as well as on award rates before widespread implementation. SSA is beginning to implement initiatives to reduce inconsistent decisions between DDSs and ALJs, realizing that the lengthy and complicated decision-making process and inconsistent decisions between adjudicative levels compromise the integrity of disability determinations. We support these initiatives and recommend that SSA take immediate steps and be accountable for ensuring that they are implemented as quickly as feasible. For example, using available quality assurance systems, SSA should move quickly ahead to improve feedback to adjudicators at all levels. In addition, to better ensure that adjudicators review the same record, the agency should increase the number of cases it plans to return to DDSs when new evidence is submitted on appeal. 
In addition, we recommend that, given the magnitude and seriousness of the problem, the Commissioner, under the Results Act, articulate the process unification results that the agency hopes to achieve and establish a performance goal by which it could measure and report its progress in shifting the proportion of cases awarded from the ALJ to the DDS level. SSA officials generally agreed with the conclusions and recommendations in this report and stated that the report would be useful to SSA in its efforts to reduce inconsistent decisions between DDSs and ALJs. SSA agreed with our recommendation that the agency take immediate steps and be accountable for ensuring that its process unification initiatives are implemented as quickly as feasible. Regarding our other recommendation, SSA said that the goal of making a greater proportion of awards at the DDS level and fewer on appeal was laudable and would promote good customer service. But SSA disagreed about taking steps to be accountable for attaining this goal. Agency officials believed that the natural outcome of SSA’s process unification initiatives would effect an increase in DDS awards and a decrease in ALJ awards. However, because process unification is the linchpin of the disability determination process itself, not just of disability redesign, we continue to believe that SSA needs to establish a performance goal for achieving process unification and that the Results Act is the appropriate mechanism to do this. SSA took exception to our remarks suggesting that its proposal for a new decision methodology could exacerbate inconsistent decisions. We do not agree. Under redesign, SSA plans to reduce medical determinations to a relatively small number of claims, while expanding the functional component of the decision-making process. 
Because it is unlikely that the new decision methodology will eliminate all adjudicator judgment needed in making functional determinations, we continue to believe that SSA should proceed cautiously and test any new decision methods to determine their effects on consistency as well as award rates. In its comments, SSA stated that it is committed to using research results to dictate which, if any, changes will be made in the decision methodology. We support this commitment. The full text of SSA’s comments and our response appear in appendix III. In addition, SSA provided technical comments, which we incorporated in the report as appropriate.
Pursuant to a congressional request, GAO reviewed the Social Security Administration's (SSA) decisionmaking process for disability determinations and efforts to improve the process, focusing on: (1) factors that contribute to differences between disability determination services' (DDS) and administrative law judges' (ALJ) decisions; and (2) SSA's actions to make decisions in initial and appealed cases more consistent. GAO noted that: (1) ALJs made nearly 30 percent of all awards in 1996; (2) because two-thirds of all cases appealed to ALJs have resulted in awards, questions have arisen about the fairness, integrity, and cost of SSA disability programs; (3) differences in assessing applicants' functional capacity and procedural factors, as well as weaknesses in quality assurance, contribute to inconsistent decisions; (4) ALJs are far more likely than DDSs to find claimants unable to work on the basis of their functional capacity; (5) this outcome has occurred even when ALJ and DDS adjudicators review the same evidence for the same case; (6) most notably, DDS adjudicators tend to rely on medical evidence such as the results of laboratory tests, while ALJs tend to rely more on symptoms such as pain and fatigue; (7) DDS and ALJ decisionmaking practices and procedures also contribute to inconsistent results because they limit the usefulness of DDS evaluations as bases for ALJ decisions; (8) in addition, SSA procedures often lead to substantial differences between the evidentiary records examined by DDSs and ALJs; (9) specifically, ALJs may examine new evidence submitted by a claimant and hear a claimant testify; (10) as a result, even with a well-explained DDS decision, ALJs could reach a different decision because the evidence in the case differs from that reviewed by the DDS; (11) SSA has not used its quality review systems to identify and reconcile differences in approach and procedures used by DDSs and ALJs; (12) the quality review systems for the initial level and appeals 
levels of the decisionmaking process merely reflect the differences between the levels; they do not help produce more consistent decisions; (13) although SSA has not managed the decisionmaking process well in the past, its current process unification initiatives, when fully implemented, could significantly help to produce more consistent decisions; (14) competing workload pressures at all adjudication levels could, however, jeopardize SSA's efforts; (15) as a result, SSA, in consultation with the Congress, will need to sort through its many priorities and be more accountable for meeting its deadlines and establishing explicit measures to assess its progress in reducing inconsistency; and (16) this may include, for example, setting a goal under the Government Performance and Results Act to foster consistency in results, setting quantitative measures, and reporting on its progress in shifting the proportion of cases awarded from the ALJ to the DDS level.
Title III of the OAA is intended to help older adults maintain independence in their homes and communities by providing appropriate support services to address the various needs of individuals as they age. While Title III programs are not entitlements, all people age 60 and over, or approximately 54 million individuals in 2008, are eligible for services. The OAA created the foundation for the current aging services network, which comprises 56 state units on aging (state agencies) operated by various state government agencies; 629 local agencies, which, at the discretion of state agencies, may include nonprofit and/or government organizations; 244 tribal and Native American organizations; and 2 organizations serving Native Hawaiians. The state and local agencies are responsible for the planning, development, and coordination of an array of home- and community-based services within each state, though states also provide services to older adults through other funding, such as Medicaid, and through separate programs and departments. Nearly 20,000 local organizations provide services through this network. The OAA authorizes a range of services to help older adults remain in their homes and communities, primarily through Title III Parts B, C, and E (see table 1). Part B provides a variety of support services including transportation for those with and without mobility impairments; home-based services for older adults who have difficulty performing daily activities such as bathing or keeping house; case management services; and adult day care. The goal is to assist older adults in maintaining their independence in the community for as long as possible. Part C nutrition services are designed to provide balanced and nutritious meals at home or in a congregate setting. 
The OAA identifies three purposes for the nutrition programs: to (1) reduce hunger and food insecurity; (2) promote socialization of older individuals; and (3) promote the health and well-being of older individuals by assisting such individuals in gaining access to nutrition and other disease prevention and health promotion services. Home-delivered meals, commonly referred to as “Meals on Wheels,” are typically provided to individuals who have health difficulties that limit their ability to obtain or prepare food. Congregate meals are served at a variety of sites, such as schools and adult day care centers, and serve older adults’ social interaction needs, in addition to nutrition. Part E funds the National Family Caregiver Support Program, which recognizes the extensive demands placed on family members and friends who provide primary care for spouses, parents, older relatives, and friends and provides assistance and support to such caregivers. Among other services, the program offers individual and group counseling, training for caregivers, and respite care. Although all adults age 60 and over and in need of assistance are eligible for services, OAA requires Title III programs to target or make it a priority to serve older adults with the greatest economic and social need. OAA defines such older adults as those who have an income at or below the poverty threshold, have physical and mental disabilities, or are culturally, socially, or geographically isolated, including isolation caused by language barriers, or racial or ethnic status. According to U.S. Census data, in 2008, approximately 5 million, or 10 percent of people age 60 and over had incomes below the poverty threshold ($10,326 for one adult, age 65 and over) and about 16.4 million, or 31 percent of older adults, had incomes below 185 percent of the poverty threshold ($19,103 for one adult, age 65 and over). Targeting older adults who are most in need can be approached in different ways. 
For example, a local agency may locate a congregate meal site in a low-income neighborhood or work collaboratively with other organizations that represent minority older adults to provide services. OAA gives state and local agencies flexibility in determining which populations to target and the methods used to do so. Agencies are required to describe these targeting efforts as part of their state planning requirements. 
Funding Title III Programs 
Congress provided approximately $1.4 billion in fiscal year 2010 for OAA Title III services. Funding for the program generally increased in small increments over the past 5 years, while the number of people age 60 and over increased from 48.9 million in 2004 to 55.4 million in 2009. AoA, within HHS, distributes this funding through grants to the state agencies. Through these grants, states receive a set amount of funding and are given the flexibility to design and operate OAA programs within federal guidelines. Grant amounts are generally based on funding formulas weighted to reflect a state’s age 60 and over population. For example, because of their respective numbers of older residents, Florida received about $89 million in Title III dollars in fiscal year 2010 compared to Montana, which received $6 million. A non-federal match of 15 percent is required for Part B and C programs. State agencies typically allocate these funds to local agencies that directly provide services, or local agencies contract with local service providers. In a few states, state agencies allocate funds directly to local providers or provide services themselves. (See fig. 1.) In our past work we noted that the national funding formula used to allocate funding to states does not include factors to target older adults in greatest need, such as the very old and low-income older adults, although states are required to consider such factors when developing the intrastate formulas they use to allocate funds among their local agencies. 
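The population-weighted allocation and match requirement described above can be sketched as a simple calculation. The sketch below is illustrative only: the function names and state figures are hypothetical, the flat proportional split and the computation of the match as a share of the federal grant are simplifying assumptions, and the actual statutory formula contains provisions not modeled here.

```python
# Illustrative sketch only; not the actual OAA allocation formula.

def allocate_grants(total_appropriation, pop_60_plus):
    """Divide a total appropriation among states in proportion to each
    state's population age 60 and over (a simplifying assumption)."""
    total_pop = sum(pop_60_plus.values())
    return {
        state: total_appropriation * pop / total_pop
        for state, pop in pop_60_plus.items()
    }

def required_match(federal_grant, rate=0.15):
    """Non-federal match for Part B and C programs, computed here simply
    as 15 percent of the federal grant (a simplifying assumption)."""
    return federal_grant * rate

# Hypothetical example: two states sharing a $100 million appropriation.
grants = allocate_grants(
    100_000_000, {"State A": 3_000_000, "State B": 1_000_000}
)
```

Under this sketch, a state with three times another's older population receives three times the grant, consistent with the Florida and Montana comparison above.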
The federal grant amounts are further divided into separate allocations for Title III Parts B, C, and E. In fiscal year 2010, the allocations by part were as follows: Part B support services such as home-based care and transportation programs were allocated a total of $366 million. Part C home-delivered meals programs were allocated $216 million and Part C congregate meals programs were allocated $438 million. Part E National Family Caregiver Support Program was allocated $153 million. The OAA provides states with some authority to transfer federal funding allocations among programs. A state may transfer up to 40 percent of allocated funds for the home-delivered meals programs to the congregate meals program, or vice versa, and the Assistant Secretary of Aging can grant a waiver for a state to transfer an additional 10 percent. In addition, a state may transfer up to 30 percent of allotted funds for Part B support services to the meal programs and vice versa, and the Assistant Secretary of Aging may grant a waiver of the 30 percent limit. Funds for Title III Part E caregiver services cannot be transferred. The Recovery Act appropriated an additional $65 million for congregate meals and $32 million for home-delivered meals under Title III. Those funds were available for obligation through September 30, 2010, and according to AoA, states had until December 30, 2010, to expend them. Unlike the annual appropriation, however, these funds could not be transferred among Title III services. In addition to these federal allocations, a significant amount of funding for Title III services comes from other federal sources, state budgets, private donors, and voluntary contributions from the clients themselves. According to AoA data, total expenditures for Title III programs from all sources totaled $3.6 billion in fiscal year 2008, including $973 million in expenditures paid for with OAA funds (see table 2). 
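The transfer limits described above reduce to a percentage cap on the source allocation. The following is a minimal sketch of those caps; the function names and dollar figures are hypothetical, and it models only the percentages stated above, not any other statutory conditions on transfers.

```python
# Illustrative sketch of the Title III transfer caps described above.
# Function names and dollar figures are hypothetical.

def max_meal_transfer(source_allocation, waiver=False):
    """Cap on transfers between the congregate and home-delivered meal
    programs: 40 percent of the source allocation, plus a further
    10 percent if the Assistant Secretary of Aging grants a waiver."""
    rate = 0.50 if waiver else 0.40
    return source_allocation * rate

def max_support_transfer(source_allocation):
    """Cap on transfers between Part B support services and the meal
    programs: 30 percent of the source allocation (the Assistant
    Secretary may waive this limit)."""
    return source_allocation * 0.30

# Hypothetical congregate meals allocation, in dollars:
congregate_allocation = 10_000_000
cap = max_meal_transfer(congregate_allocation)
cap_with_waiver = max_meal_transfer(congregate_allocation, waiver=True)
```

Note that Part E caregiver funds have no corresponding function here because, as stated above, they cannot be transferred at all.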
Other federal and state programs provide services similar to Title III, particularly for low-income older adults. Some of these programs are administered by the same state agencies as Title III programs, while in other states, the programs are administered by different state agencies. Some of these programs’ expenditures are substantially larger than those of Title III programs. The following are examples of other programs that provide food assistance, home-based care, and transportation services: Food Assistance: Older adults who meet certain income restrictions and other requirements are entitled to receive food assistance through the federally-funded Supplemental Nutrition Assistance Program (SNAP)—formerly the Food Stamp Program. SNAP is the nation’s largest food assistance program, providing benefits to 2.7 million people age 60 and over in fiscal year 2009. In addition, other food programs provide assistance to needy older adults. For example, approximately 950,000 low-income older adults received produce through the Seniors Farmers’ Market Nutrition Program. Home-Based Care: State Medicaid programs provide substantial funding for home-based care such as personal care and homemaker services to low-income older adults and others who need help with self-care due to disabilities or health conditions. These services are provided through Medicaid home- and community-based services waiver programs and other Medicaid benefits. According to a study by the Kaiser Commission on Medicaid and the Uninsured, Medicaid programs spent approximately $38.1 billion in 2006 on home and community-based services to older adults and other eligible individuals. Medicare also provides home-based services to some adults age 65 and over who are receiving Medicare skilled care services at home. Medicare expenditures on home health care in 2009 totaled $18.3 billion. 
Transportation Services: In our past work we found that 15 key federal programs, including the Title III program, focused on providing or improving transportation services to older adults. Medicaid, for example, reimburses qualified recipients for the transportation costs they incur to reach medical appointments. The Department of Transportation administers several programs to assist transit organizations in purchasing equipment and training staff to facilitate the use of their services by older adults and others with mobility impairments. In addition, United We Ride, a federal interagency initiative, works to increase access to transportation, reduce service duplication, and improve the efficiency of federal transportation services for older adults and other groups. Local agencies play a key role in helping older adults locate and enroll in these various programs and services. In fact, according to a study conducted by the National Association of Area Agencies on Aging and Scripps Gerontology Center of Miami University, over the past few years local agencies have increasingly served as a single point of entry for older adults, providing access to information on the array of home- and community-based services for which they may be eligible, regardless of which federal or state program funds the services. Figure 2 illustrates the various funding sources and programs that help older adults receive meals, home-based care, and transportation services. For states to be eligible for Title III grants, OAA requires state agencies to submit plans to the AoA for 2, 3, or 4 years. Among other types of information, the plans must evaluate older adults’ needs for home- and community-based services. In addition, OAA also requires that state agencies develop a standardized process to determine the extent to which public or private programs and resources (including volunteers and programs and services of voluntary organizations) are capable of meeting needs. 
Thus, the plans provide an opportunity to consolidate information about services available to older adults from a variety of sources. The meals services provided in 2008 did not serve most of the low-income older adults likely to need them. Through our analysis of information from the CPS, we found that approximately 9 percent of an estimated 17.6 million low-income older adults received meals services like those provided by Title III programs. However, many more older adults likely needed services, but did not receive them, as shown in table 3. For instance, an estimated 19 percent of low-income older adults were food insecure and about 90 percent of these individuals did not receive any meal services. Similarly, approximately 17 percent of those with low incomes had two or more types of difficulties with daily activities that could make it difficult to obtain or prepare food. An estimated 83 percent of those individuals with such difficulties did not receive meal services. (See table 3.) Along the same lines, agency officials we spoke with identified several reasons why an older adult may be likely to need meals services but not receive them. Specifically, officials from several state agencies stressed that need for home-delivered meals is greater than the level of services they are able to fund. Through our survey of local agencies, we found that an estimated 22 percent of agencies were generally or very unable to serve all clients who request home-delivered meals. Some state and local agencies we spoke with also noted that many older adults who would benefit from meals services do not know that they exist or that they are eligible to receive them and, therefore, do not contact the agencies to request them. We also found evidence suggesting that demand for home-delivered meals is often higher than for congregate meals. 
Officials from a few state and local agencies we spoke with acknowledged that some older adults do not find the format of congregate meal programs appealing due to factors such as the meals served or the time of day that they are provided. Therefore, older adults may not access the services, though their circumstances suggest that they may need them. An estimated 79 percent of local agencies that tracked requests had greater numbers of older adults request home-delivered meals than congregate meals in fiscal year 2009, according to our survey of local agencies. Also, the Congressional Research Service found that although congregate meal programs served more clients than home-delivered meal programs in fiscal year 2008, from 1990 to 2008, the number of home-delivered meals served grew by almost 44 percent, while the number of congregate meals served declined by 34 percent. While most older adults who had difficulties with daily activities such as walking or bathing received at least some help completing such tasks, many received limited help and some did not receive any help. Through our analysis of 2008 HRS data, we found that an estimated 29 percent of older adults from all income levels reported difficulties with one or more activities such as walking or bathing. As shown in table 4, many of these older adults either received no help, or received help with some, but not all, of their difficulties—either formally from sources such as Title III programs and Medicaid or informally through family members. For example, among older adults who reported three or more difficulties with ADLs such as bathing and walking, approximately 21 percent received help with all of the ADLs they identified, while 68 percent received help with some of them, and 11 percent did not receive any help. In an estimated 19 percent of the cases where these older adults received any help, at least some of that help came from professionals or organizations rather than family members. 
These older adults who had difficulties with multiple types of ADLs are generally considered to have more severe conditions than those who have difficulties with IADLs, such as shopping or housework. We found that greater percentages of older adults with multiple ADLs received help with some or all of their difficulties than those with IADLs, but not ADLs (see table 4). However, the available data did not allow us to assess whether the help an individual received for a particular difficulty was sufficient to meet his or her needs. Several agency officials and researchers we spoke with noted that even some of those receiving help with their difficulties likely need more frequent or more extensive help. Officials and researchers we spoke with identified a number of difficulties in meeting older adults’ needs for home-based care. Officials in most states we contacted noted that funding from Title III and other sources like Medicaid waiver programs is not enough to meet the need. Also, because different states structure their Medicaid programs differently and some also run separate state home-based care programs, the extent to which older adults who need services are receiving them likely varies from state to state. As shown in table 4, we found that most older adults receiving assistance with some or all of their difficulties received all of this help from informal sources, rather than from an organization or professional caregiver. While this can reduce public expenditures, researchers from one organization we spoke with expressed concern that providing extensive informal care may strain family members who act as caregivers. Some of the family members providing care may be receiving help through Title III caregiver programs such as respite care. In fiscal year 2008, Title III programs provided caregiver services to about 224,000 individuals, according to AoA data. 
However, officials from a few states told us that the likely need for such services was greater than available resources. Many older adults were likely to need transportation services like those provided by Title III programs due to circumstances such as being unable to drive or not having access to a vehicle. According to our analysis of 2008 HRS data, an estimated 21 percent of people 65 and older (about 8 million) were likely to need such services. Our analysis also found that some social and demographic characteristics were associated with an increased likelihood of needing such services. In particular, after controlling for other factors that may influence likely need for services, we found that people who were age 80 or older, female, or living below the poverty threshold were more likely to need services than people without these characteristics. We also found that the odds that someone with visual or mobility difficulties was likely to need services were about two times as high as the odds for someone without such difficulties. Additional factors also increased an individual’s likelihood of needing services as shown in appendix V. While there appears to be a significant need for transportation services, data were not available to estimate the extent to which older adults’ likely needs were met. Instead, available information provides only clues about the extent to which older adults in likely need may be receiving services. For example, AoA collects information about the number of people receiving assisted transportation services through their programs and the total number of rides provided. The agency also collects information about the number of rides provided by its general transportation services, but does not collect information on the number of older adults receiving those services. State and local agency officials provided anecdotal evidence suggesting the existence of substantial unmet need for transportation services. 
For example, officials in Tennessee said that some local agencies must limit their transportation to essential medical treatments like dialysis because they cannot afford to also transport older adults to activities that would improve their quality of life, such as trips to senior centers and congregate meal sites. Agency officials from several states noted that rural areas face particular challenges, due to the long distances between destinations and minimal public transit options. Through our survey of local agencies, we found that an estimated 62 percent reported transportation services as among the most requested support services. The survey also showed that an estimated 26 percent of agencies that provide transportation services were generally or very unable to meet all transportation requests. Our past work also found that older adults’ transportation needs that were less likely to be met included: (1) transportation to multiple destinations or for purposes that involve carrying packages, such as shopping; (2) life-enhancing trips, such as visits to spouses in nursing homes or cultural events; and (3) trips in non-urban areas. Most state and some local agencies utilize the flexibility provided by the OAA to transfer funds among Title III programs. According to AoA data, 45 state agencies transferred funds among congregate meal programs, home-delivered meal programs, and support services in fiscal year 2008, and, according to our survey results, an estimated 45 percent of local agencies did so in fiscal year 2009. Agencies most commonly transferred funds from congregate meals to home-delivered meals or support services. In fact, nationally, from fiscal year 2000 through fiscal year 2008, states collectively transferred an average of $67 million out of the congregate meal program each year (see fig. 3). In fiscal year 2008, states transferred nearly 20 percent of OAA funding out of congregate meals. 
As a result, support services and home-delivered meal programs experienced net increases in Title III funds of 11 percent and 20 percent, respectively. State and local officials told us they moved funds out of congregate meals because of a greater need for home-delivered meals and support services. According to AoA data, in fiscal year 2008, 34 states transferred funds from congregate meals to home-delivered meals and 32 states transferred funds from congregate meals to support services. Georgia state officials told us they transferred funds because there is a greater need for home-delivered meals, with a waitlist of about 12,000 people, compared to the congregate meal waitlist of about 400. Nevada state officials said transferring funds from congregate meals to support services is necessary because support services are underfunded for meeting needs in their state. Some state officials recommended consolidating funding for Title III Part C meal programs into a single stream. For example, Wisconsin state officials said maintaining separate funding for congregate and home-delivered meals creates a process in which the state has to deal with multiple rules to allocate funds to services that are most needed. Georgia state officials said the federal distribution of Title III C funds does not reflect local variation in needs and a less restrictive funding allocation would allow local officials to put funds where they are most needed. However, some state officials, from New Jersey and Oregon for example, did not see the need to change the current process of transferring Title III funds. According to AoA data, five states and the District of Columbia did not transfer any funds in fiscal year 2008 and only one state transferred the maximum allowable amount. In addition to OAA funding allocations, agencies provide Title III services using funds from other federal programs, state and local governments, private sources, and clients. 
Agencies told us that to meet client needs, they rely on other funding sources in addition to OAA funding. Our survey found that, on average, OAA funds made up an estimated 42 percent of local agencies’ Title III program budgets in fiscal year 2009. Some local agencies rely more heavily on OAA funds than others. OAA funds ranged from 6 to 100 percent of local agency budgets. State funds were the second largest source, contributing an average of 24 percent of program funds. While the funds contributed by local governments are a smaller part of program budgets on average, according to AoA officials, a role of local governments is to secure additional resources for Title III programs, such as volunteers or private grants. See figure 4 for the average proportion of Title III program funding provided by various sources. The OAA gives state and local agencies some flexibility to allocate program funds to services most needed and select which source of funds to use to provide services. This flexibility includes the ability to transfer funds, as well as the ability to decide which services to fund with Title III resources, based on local priorities and needs. According to AoA officials, the ability to decide which services to fund is most often the case with Title III Part B support services, such as personal care and transportation services, because Congress’ funding allocation is less restrictive than the allocations for other parts of Title III. As an example, AoA officials told us some states may choose to provide personal care services under their Medicaid program rather than with OAA Title III Part B funds, using the OAA funds for other services instead. Additionally, some state officials we spoke with told us OAA funds are used to fill gaps in state or Medicaid-funded home- and community-based services programs. In addition to receiving funds from governments and private sources, clients can also contribute to the cost of services. 
In fact, according to our survey, almost all local agencies permit voluntary contributions for Title III services. On average, voluntary contributions made up 4 percent of local agency budgets in fiscal year 2009; yet, some agencies told us that voluntary contributions are a significant portion of their meal program budget. For example, Wisconsin state officials estimated that voluntary contributions are between one-quarter and one-third of congregate meal funding. While the OAA allows for cost sharing for some OAA services wherein clients are asked to pay a portion of the cost of services based on their income, only 5 of the 14 states we spoke with permit cost sharing. States are required to have a formal cost-share plan before implementation, and the National Association of States United for Aging and Disabilities (NASUAD)—formerly known as the National Association of State Units on Aging—found in a 2009 survey that less than a quarter of states had such a plan, which suggests that cost sharing is not being widely used. NASUAD also found that cost sharing was most often used for respite care and homemaker services. Our survey found that about three-fourths of local agencies whose states permitted them to cost share did so. Even so, more local agencies would prefer to have the ability to cost share. In fact, an estimated 39 percent of local agencies in states that do not allow cost sharing said they would do so if given the opportunity. Since Title III services are open to all older adults, additional cost-sharing arrangements could generate income for programs by obtaining payments from those with higher incomes. AoA officials noted that if individuals with higher incomes see Title III programs as an attractive service option, they could pay market value for the services through cost-sharing arrangements, thereby subsidizing services to lower-income adults. 
State officials cited administrative burden as a reason they do not permit cost sharing or do not use it more extensively. For example, Illinois state officials told us they have not implemented cost sharing because of the number of services that are exempt and the likelihood that implementation costs will exceed the revenue collected. Although Nevada has a statewide cost-share policy, state officials told us few local agencies have elected to use it because many of the older adults served are low-income and the agency cannot condition receipt of services upon paying the cost-share amount. On the other hand, several states that have implemented cost sharing find it helpful. Although Georgia officials recognized cost sharing was complicated to implement, they cost share for all allowable OAA services and said it generates revenue and adds value to the services for clients. While cost sharing has the potential to generate additional funds for Title III services, agencies must weigh this potential against the OAA’s cost-share restrictions and administrative requirements. While agencies rely on multiple sources of funds to provide services, many agencies reported overall decreases in funds from fiscal year 2009 to fiscal year 2010. In fact, according to our survey, an estimated 47 percent of local agencies experienced reductions to their budgets in fiscal year 2010. These budget cuts ranged from 1 to 30 percent of local agency budgets, and the average budget cut was 8 percent, according to 29 local agencies that provided more detailed information. Approximately 68 percent of local agencies reported that state funds, the second largest source of funds for Title III programs, were cut in fiscal year 2010. This is consistent with research by NASUAD that found that most states reported state budget shortfalls in fiscal year 2010 and reduced budgets for aging services. 
While funding has recently decreased for many agencies, requests for services have increased since the beginning of the economic downturn. Since the downturn began in late 2007, based on our survey, an estimated 79 percent, 73 percent, and 67 percent of local agencies have received increased requests for home-delivered meals, support services, and caregiver services, respectively. A survey conducted by NASUAD in 2009 also found that requests for the types of services provided by the OAA had recently increased, particularly for home-delivered meals, transportation, and personal care. Local agencies responded to increased requests in various ways. For instance, some local agencies told us they created waitlists, secured additional funds, collaborated with other agencies, and utilized Recovery Act funds. Some local agencies reduced services as a result of funding cuts. According to our survey, in fiscal year 2010, as compared to fiscal year 2009, an estimated 20 percent of local agencies said they reduced support services. An estimated 18 percent said they reduced nutrition services, and 14 percent reduced caregiver services. For example, a local agency in California told us that it traditionally operated a state program that provided services similar to those of the OAA; however, the state-funded services ended on January 1, 2010, due to the complete elimination of state funding. Our survey also found that local agencies anticipated additional service reductions in fiscal year 2011. About 21 percent anticipated additional cuts to the meal programs, 16 percent anticipated cuts to support services, and 12 percent anticipated cuts to caregiver services. Some state and local agencies we visited also told us they adapted to limited funding by providing less service to all rather than full service to only some. For example, a state official in Illinois said some local areas resolved the funding shortfalls by reducing the number of hours they provide respite services for each caregiver. 
Alternatively, in response to these funding cuts, many local agencies said they took steps to reduce administrative and operations costs. In fiscal year 2010, an estimated 37 percent of local agencies cut capital expenses, 38 percent cut administrative expenses, and 45 percent cut operating expenses. Local agencies said they cut expenses in a variety of ways. For example, local agencies relocated to smaller office buildings with lower overhead costs, stretched meal service supplies, decreased travel expenses, and limited raises for employees. Additionally, an estimated 45 percent of local agencies did not fill vacant positions. In addition to administrative and operations cuts during fiscal year 2010, an estimated 27 percent of local agencies anticipated additional reductions in fiscal year 2011. Consistent with our survey data, agency officials told us about administrative and operations reductions. State officials in Wisconsin, for example, told us that, as a result of the state’s budget deficit, the agency was unable to fill vacant positions and cut planning, administration, and monitoring activities in order to avoid cutting services to older adults. Illinois state officials told us the last budget cycle included a 10 percent decrease in state funds for aging services, and there were layoffs, required furlough days, and vacant positions as a result. Many agencies used Recovery Act funds—which made up about 13 percent of total OAA funding for meals in fiscal year 2009—to temporarily fill budget gaps and expand existing nutrition programs. In addition, some agencies created new meal programs such as breakfast at congregate meal sites. However, many state and local agencies expressed concern about how to continue the same level of services after the Recovery Act funding ends. According to our survey, an estimated 79 percent of local agencies said sustaining services currently paid for with Recovery Act funds would be a moderate to extreme challenge. 
Of the 10 state agencies we spoke with in early fall 2010, 5 told us that they would have to cut back services, 2 told us that they had reserved funds from other sources to compensate for some of the lost Recovery Act funds, 2 had not decided how to make up for the lost funds, and 1 would maintain services. The OAA requires AoA to design and implement uniform data collection procedures that include procedures for states to assess the receipt, need, and unmet need for Title III services. Additionally, state agencies’ plans on aging must stipulate that states will use AoA’s uniform procedures to evaluate the need for services under Title III. Previous GAO work has found that using standardized definitions and measurement procedures helps state and federal agencies gather useful information to plan and evaluate programs. AoA issues standardized definitions and measurement procedures to state agencies for collecting information on the receipt of Title III services. For Title III services provided more than once and over a period of time—such as home-delivered meals and home-based care—state agencies must collect data on the number of older adults who receive services. State agencies also collect data on the demographic characteristics of recipients, such as their race, age, gender, and disabilities. AoA also requires state agencies to report the number of service units provided for services that clients receive more sporadically, such as general transportation. Because AoA issues standardized definitions and measurement procedures to state agencies, data on the receipt of services are relatively consistent within and across states. As a result, these data can be used to make comparisons of the type and quantity of Title III services delivered and to support AoA’s budget requests and performance evaluations. 
In contrast, AoA does not provide standardized definitions and procedures for states to use when measuring need or unmet need for services. Researchers have generally defined need for a particular service as having characteristics, health conditions, or circumstances that make individuals likely to need the service and defined unmet need as fitting the definition of need, but not receiving the service. However, the specifics of defining need and unmet need can be challenging and, without a standardized definition, can lead to variation. For example, one can define unmet need for a service as receiving no assistance at all, or as receiving an inadequate level of assistance in one or more service areas. Rather than requiring that states measure need in a standardized manner or requiring states to measure unmet need, AoA provides states with non-binding guidance on these issues. AoA, through a grant to NASUAD, provides state agencies with an assortment of tools and resources that they can use to evaluate need, along with limited information about measuring unmet need. Tools for measuring need include needs assessment surveys and links to Census information. This guidance is optional and does not identify specific measurement procedures that all state agencies should use or information they should collect. Without standardized definitions and measurement procedures, states use a variety of approaches to measure need and, to varying extents, unmet need. Some state agencies maintain and review waiting lists; host discussions with, and obtain data from, local service providers; and conduct surveys of current recipients, among other approaches. State agencies use the information they collect for a variety of planning purposes, such as identifying greatly needed services and focusing resources in these areas. For example, one state agency we spoke with found that transportation services were particularly needed. 
As a result, it directed local agencies to prioritize transportation programs. Nonetheless, these various approaches have a number of limitations and, as a result, none of the state agencies we asked fully estimates the number of older adults with need or unmet need. First, officials from some state agencies and AoA told us that waiting lists are not effective tools for fully estimating need and unmet need. For example, waiting lists provide only a lower-bound estimate of those who are likely to need services but do not receive them. A local agency official we spoke with in Illinois said that needs assessments and anecdotal information indicate a much greater need for services than requests to the agency indicate. Also, some of these approaches, such as surveys of current clients, only collect information on those who already receive services. None of these approaches collects or quantifies information on older adults who need services but do not request them. In addition to the above approaches, some state agencies we spoke with utilize other means to obtain information on the potential need and unmet need of older adults who do not currently request or receive services, although they still do not fully estimate need and unmet need for Title III services. For example, some state agencies use Census data to identify the number of older individuals with characteristics that indicate potential need for services, including those who do not currently receive services. Florida’s state agency uses the Elders Needs Index available through the NASUAD Web site to identify and direct funds to geographic areas with high concentrations of older adults who have demographic characteristics often associated with need for Title III services, such as age, race, or disability. However, this index does not show other factors indicating likely need. 
For example, it does not include information about whether an older adult in a particular area is food insecure or whether he or she received meal services from any source. Some state and local agencies also conduct surveys of older individuals, including those who do not currently receive Title III services. For instance, one state agency we spoke with described a survey conducted by university researchers as a part of the state’s planning process for Title III programs and other services for older adults. Among other components, the survey included information about older adults’ awareness of various services and whether they received services. State agency officials said that this survey could be used to generate an estimate of older adults with need and unmet need for services, although they do not currently generate such estimates. Overall, AoA and state agency officials noted that there are various challenges to fully estimating need and unmet need. For example, officials in one state told us that representative surveys of older adults are too costly, and officials in another state said that they lack capacity or expertise at the state level to conduct comprehensive evaluations of need and unmet need. In addition, comprehensive evaluations of unmet need would require states to account for whether older adults in need were receiving services from other sources such as Medicaid home-based care programs. This would require states to collaborate and partner with other state agencies to account for needs met by other programs. This could be difficult to do because states differ in how they choose to use and administer their Title III funds and other federal, local, and state funding sources to support older adults. Some state agencies that administer Title III programs would have limited information on older adults who receive services from other programs administered by other agencies. 
As a result of limited and inconsistent state knowledge about need and unmet need, AoA is unable to measure the extent of need and unmet need for the different home- and community-based services nationally or consistently across states—information that could help them to best allocate their limited resources. When asked to provide such information to Congress in 2008, AoA was unable to do so, but did suggest that it was possible to gather information on need from local agencies and partners in the aging network. While AoA officials told us they have the authority to require that state agencies collect more complete information on need and unmet need, they have not done so to date because they are unaware of a specific set of criteria to use that would address various data challenges. They also expressed concern about creating a reporting burden for states and about the utility of obtaining data on unmet need within the context of a formula-based program where set funding levels would not necessarily allow them to address all unmet needs. OAA Title III programs, in tandem with other government services such as Medicaid, are an invaluable support mechanism for many older adults, helping them stay in their homes and communities and maintain dignity and independence. The broad eligibility criteria for the program open services to any older adult who seeks them, and, although programs are expected to, and do, target certain groups, our estimates show that in 2008, many additional older adults who would have likely benefited from services like those provided by OAA Title III programs did not receive them. Although, as AoA officials acknowledged, the law requires AoA to design and implement uniform procedures for assessing need and unmet need, AoA has not required states to use them. 
As they currently operate, many programs have no way of knowing whether they are serving those who have the greatest need because they do not have information about those in need who do not receive or request services. As the number of older adults grows, demand for services will also grow. This, combined with resource constraints, prompts concerns about how the needs of this growing population can be met. As a result, states and local providers will likely face increasingly difficult decisions about how to serve older adults; yet, they will lack valuable information needed to help them identify those most in need. Although there are cost and methodological challenges to assessing need and unmet need, they are not insurmountable. Various approaches to estimating need and unmet need could be used, and the effort would not necessarily require detailed analysis of a nationally representative survey. Also, AoA could provide guidance and technical assistance to state agencies in order to meet reporting requirements for quantifying need and unmet need. In addition, AoA could partner with other programs providing similar and complementary services in order to consolidate knowledge on how to better serve the needs of the community and minimize additional data collection and reporting burdens. Partnering would also assist states in mapping out approaches that will help ensure that they are making the best use of their various funding sources during times of increasing demand. This information could help the home- and community-based services network make informed funding and programmatic decisions that optimize their resources and provide vital services to older adults in greatest need. 
To maximize program resources during a time of increasing demand and fiscal constraints, we recommend that the Secretary of Health and Human Services study the real and perceived burdens of implementing cost sharing for OAA services and identify ways to help interested agencies implement cost sharing, which could include recommending legislative changes to the restrictions in the OAA, if warranted. To help ensure that agencies have adequate and consistent information about older adults’ needs and the extent to which they are met, we recommend that the Secretary of Health and Human Services partner with other government agencies that provide services to older Americans and, as appropriate, convene a panel or work group of researchers, agency officials, and others to develop consistent definitions of need and unmet need and to propose interim and long-term uniform data collection procedures for obtaining information on older adults with unmet needs for services provided from sources like Title III. We provided HHS with the opportunity to comment on a draft of this report. The written comments appear in appendix VII. HHS indicated that it would review our recommendations and explore the options available to implement them. However, it raised several concerns in response to our recommendation that it partner with other government agencies to develop agreed-upon definitions and data collection procedures to assess need and unmet need. HHS noted that states and local agencies currently target services to those older adults and family caregivers in greatest social and economic need. The department described the existing guidance and technical assistance it provides states and local agencies to help them understand need and unmet need in their communities and target services. While these efforts may be useful to states, we believe that more can be done to provide the uniform definitions and data collection procedures required by the OAA. 
Further, HHS acknowledged that states are already making difficult choices about how to serve seniors in need because the demand for services exceeds supply. It is, in fact, for these reasons that we have recommended a more systematic approach to identifying need. Due to the projected increase in the older population, and in the face of current fiscal constraints, it is more important than ever to have good information about need and unmet need in order to adequately plan and direct resources to those in greatest need. HHS also commented on factors that complicate development of a standardized definition and methodology for measuring unmet need. These factors include differences among states in how the programs are administered and the multiple funding streams that are often used to provide services for older adults in need. Our recommendation recognizes this circumstance by calling for HHS to partner with other agencies that fund similar services to work together to agree on definitions and procedures. We believe that AoA, as the responsible federal entity for Title III-funded services, is well-positioned to lead this effort. In this era of scarce resources, and in those cases where multiple funding streams and programs are offering services to similar populations, it is vital to ensure that all funding sources are used to their best advantage and programs are not duplicating efforts. Finally, HHS expressed concern that such standardization could increase the reporting burden for states. It also commented that GAO was “…not able, using existing resources, to develop workable measures for determining the extent of unmet need….” However, it is important to note that it was not the purpose of this report to develop measures for states and local agencies to use, which AoA is required to do. 
Rather, our objective was to assess likely unmet need on a national scale using sophisticated analyses of national databases to shed light on whether further focus on unmet need was warranted. We continue to believe that convening a panel would allow stakeholders to explore options for collecting meaningful data on need and unmet need in a manner that would not require the extent of analyses we conducted or impose an onerous burden on state or local agencies. Such an effort developed in collaboration with other aging services programs could also facilitate information-sharing across programs. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Department of Health and Human Services, relevant congressional committees, and other interested parties. Copies will also be made available to others upon request. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. Key contributors to this report are listed in appendix VIII. Our objectives were to identify: (1) what is known about the need for home- and community-based services like those funded by the Older Americans Act (OAA) and the potential unmet need for these services; (2) how agencies have used their funds, including Recovery Act funds, to meet program objectives; and (3) how government and local agencies have measured need and unmet need. To identify what is known about the receipt and potential unmet need for home- and community-based services, we analyzed data from national surveys about older adults in likely need of meals services or home-based care and whether those in likely need received services. 
We also estimated the percentages of older adults likely to need transportation services, although data limitations did not allow us to estimate transportation services received. To identify how agencies have used funds, including Recovery Act funds, we conducted a Web-based national survey of a random sample of 125 local area agencies on aging (local agencies)—the frontline administrators of Title III services for older adults—and reviewed Administration on Aging (AoA) documentation about state expenditures. To identify how government and local agencies measure need and unmet need, we reviewed 51 state plans on aging, select needs assessments from states, and relevant laws. To address all three objectives, we conducted site visits to four states—Illinois, Massachusetts, Rhode Island, and Wisconsin—where we interviewed officials from state and local agencies, and we conducted telephone interviews with officials from an additional 10 states. Lastly, we interviewed national officials involved in Title III programs and reviewed relevant federal laws and regulations. These research methods are described in more detail below. We assessed the reliability of the data we used by reviewing pertinent system and process documentation, interviewing knowledgeable officials, and conducting electronic testing on data fields necessary for our analysis. We found the data we reviewed reliable for the purposes of our analysis. The OAA Title III meals programs are designed to aid older adults and certain individuals living with older adults by: (1) reducing hunger and food insecurity; (2) promoting socialization; and (3) promoting health and well-being. The home-delivered meals program in particular is also designed to assist individuals who have difficulty obtaining or preparing food due to difficulties with daily activities (i.e., with functional impairments). 
While the eligibility criteria for Title III programs are very broad, we focused our analysis on identifying eligible older adults who were particularly likely to need the services based on exhibiting (1) food insecurity; (2) difficulties with daily activities (i.e., with functional impairments); (3) limited social interaction; or a combination of these characteristics. Data limitations did not allow us to identify individuals likely to need and/or receive services based on the third identified purpose of promoting health and well-being. To conduct our analysis, we used nationally representative data from the 2008 Current Population Survey (CPS), including the Food Security Supplement and the Civic Engagement Supplement. As described below, the CPS includes various questions related to receipt of meals services like those provided by Title III and having characteristics that indicate likely need. Our analyses focused on people age 60 and over as well as spouses of older adults and individuals with disabilities living with older adults because they are also eligible for meals services. Our analysis was limited to older adults living in households with incomes below 185 percent of the poverty threshold and is not generalizable to older adults with higher incomes. As described below, our analysis included this income restriction because the questions related to participation in the two meals programs of interest were not asked of all respondents to the survey. The only group that was completely sampled and asked those questions was respondents in households with incomes below 185 percent of the poverty threshold. While the exclusion of individuals living in households with higher incomes from our study is unfortunate, the sample we are using does represent the large majority of people who were food insecure and likely to need one of the two meal programs based on one of the key purposes of OAA nutrition programs.
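The income restriction described above amounts to a simple filter on household income relative to the poverty threshold. A minimal sketch follows; the income and threshold figures are hypothetical illustrations, not actual CPS values:

```python
# Illustrative filter for the 185-percent-of-poverty income restriction.
# A record is retained only when household income falls below
# 1.85 times the applicable poverty threshold.

def below_185_percent(household_income, poverty_threshold):
    """True when income is under 185 percent of the poverty threshold."""
    return household_income < 1.85 * poverty_threshold

# Hypothetical examples (threshold of $12,000 implies a cutoff of $22,200):
print(below_185_percent(20000, 12000))  # 20000 < 22200 -> True
print(below_185_percent(25000, 12000))  # 25000 > 22200 -> False
```

In practice the applicable threshold varies by household size and composition, so a real implementation would look the threshold up per record rather than pass a single value.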
Other indicators of likely need such as difficulties with daily activities and limited social interaction were also more prevalent among the low-income population than among those with household incomes above 185 percent of the poverty threshold. To determine whether older adults were food insecure and whether or not they received home-delivered or congregate meals, we used the Food Security Supplement. The Food Security Supplement is sponsored by the United States Department of Agriculture (USDA), and USDA's Economic Research Service compiles the responses. The 2008 food security survey interviewed members of roughly 44,000 households that comprised a representative sample of the U.S. civilian population of 118 million households. The survey queried one adult respondent in each household about experiences and behaviors indicative of food insecurity (see table 5). If they were living in households below 185 percent of the poverty threshold, or if they had previously indicated some degree of food insecurity, survey respondents were also asked whether they, or anyone in their household, had received a home-delivered meal in the past 30 days, or whether they had received a meal in a congregate setting within the past 30 days. To determine whether older adults had limited social interaction, we used a series of questions from the CPS Civic Engagement Supplement from November 2008 that asked respondents whether they participated in various community groups (see table 6). Determining likely need for social interaction was particularly difficult. Lack of participation in community groups provides a partial indicator that an older adult may be likely to need meals programs for social reasons. However, such survey data do not capture more qualitative aspects of an individual older adult's likely need for social interaction, such as personality and individual preference.
The data also do not allow us to identify individuals who may interact socially outside of organized groups and activities. To determine whether older adults had functional impairments that may have made it difficult to obtain or prepare meals, we used three questions from the CPS designed to identify difficulties with instrumental activities of daily living (IADL) and activities of daily living (ADL), and one question used to identify cognitive impairments (see table 7). We included the question regarding cognitive impairments because older adults may have difficulties obtaining or preparing food due to cognitive or memory difficulties, which may not be captured through questions about IADLs and ADLs. We used the questions relevant to food insecurity, limited social interaction, and functional impairments to estimate older adults and other eligible individuals who were likely to need and/or receive meals services. First, we estimated the percentages of eligible individuals in low-income households who (1) were food insecure, (2) had one or more types of difficulties with daily activities, and/or (3) had limited social interaction. We then identified the number of individuals with or without one or more of these types of likely need who were and were not receiving home-delivered or congregate meals. Because the CPS questions asked whether older adults received meals services in general, rather than Title III meals programs in particular, our analysis is indicative of all congregate and home-delivered meals services, rather than just those provided by Title III meals programs. We also looked at how the likely need characteristics and the receipt of meals varied across demographic groups generally. We used individual weights to derive estimates of the numbers and percentages of individuals in the entire population of low-income older adult households of interest to us.
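The use of individual weights to derive a population percentage can be sketched as follows; the records and weights below are hypothetical illustrations, not actual CPS microdata:

```python
# Illustrative weighted-percentage calculation: each respondent's
# survey weight represents the number of people in the population
# that the respondent stands for, so a population percentage is the
# weighted count with the characteristic over the total weighted count.

def weighted_percentage(records, indicator):
    """Percent of the weighted population for which indicator(r) is True."""
    total = sum(r["weight"] for r in records)
    hits = sum(r["weight"] for r in records if indicator(r))
    return 100.0 * hits / total

# Hypothetical microdata: three respondents with differing weights.
sample = [
    {"weight": 1200.0, "food_insecure": True},
    {"weight": 800.0,  "food_insecure": False},
    {"weight": 2000.0, "food_insecure": False},
]

print(round(weighted_percentage(sample, lambda r: r["food_insecure"]), 1))  # 30.0
```

Note that an unweighted tally of the same three records would give 33.3 percent; the weights pull the estimate toward the people each respondent represents.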
Unless otherwise noted, our estimates based on the CPS data have a 95 percent margin of error of 4 percentage points or less of the estimate. Existing CPS data did not allow us to estimate the number of older adults likely to need and receive meals services at the state level. Specifically, producing reliable state-level estimates would require multiple years of survey data, and the key survey questions about older adults' difficulties with daily activities and their participation in meals programs were added to the survey too recently to allow multiyear analysis. In addition, we used logistic regression models to estimate the net effects of the likely need characteristics and demographic variables on the likelihood of receiving either type of meal services. Logistic regression analysis is a method to examine factors associated with a variable of interest, such as receipt of meal services, controlling for the potential effect of other factors on that variable, such as likely need or demographic characteristics. One of our primary reasons for using the multivariate models was to determine whether demographic differences in the likelihood of receiving meals were accounted for by differences in food insecurity, isolation, or difficulties with daily activities. The logistic regression models we used could not control for all variables potentially related to food insecurity and the likelihood of receiving the different types of meals. For example, we could not control for differences between states' funding and programmatic decisions for meal programs or older adults' preferences for receiving meals. To the extent omitted but relevant variables are correlated with those factors that were incorporated into our models, the estimates we present are subject to potential bias.
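For readers unfamiliar with logistic regression output, the adjusted odds ratio associated with a factor is the exponential of that factor's fitted coefficient. A minimal sketch follows; the coefficient value is an illustrative assumption, not an estimate from the study's models:

```python
import math

# In a logistic regression model, the fitted coefficient (beta) on a
# factor translates into an adjusted odds ratio as exp(beta), holding
# the other variables in the model constant.

def adjusted_odds_ratio(beta):
    """Adjusted odds ratio implied by a logistic regression coefficient."""
    return math.exp(beta)

# An illustrative coefficient of about 0.693 on an indicator variable
# would correspond to roughly doubled odds, controlling for the other
# factors in the model:
print(round(adjusted_odds_ratio(0.693), 2))  # 2.0
```

A coefficient of zero corresponds to an odds ratio of 1.0 (no association); negative coefficients correspond to odds ratios below 1.0.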
To examine factors associated with likely need for and receipt of home-based care services, and likely need for transportation services, we used data from the 2008 wave of the University of Michigan's Health and Retirement Study (HRS). The HRS is a nationally representative longitudinal survey of older adults sponsored by the National Institute on Aging and the Social Security Administration. The survey is administered in waves (generally every 2 years) and includes information on respondent demographics, health status, service receipt, and household characteristics, among other things. An additional HRS dataset, produced by the Rand Corporation, includes recoded variables and more detailed information on household finances. To generate a dataset for analysis, we combined data from the University of Michigan with Rand HRS files. As appropriate, we limited our analysis to those respondents age 60 or above (for home-based care services) or age 65 and above (for transportation). We weighted the data to obtain national-level estimates and used robust estimation to account for the impact of the complex survey design on variance estimates. Unless otherwise noted, percent estimates based on HRS data have a 95 percent margin of error of +/- 6 percentage points of the estimate. To identify older adults likely to need home-based care services, we used HRS questions about difficulties with IADLs and ADLs as listed in table 8. We decided to estimate likely need in terms of these types of difficulties, rather than the existence of particular medical conditions, because the services provided by Title III home-based services are designed to address such difficulties and the questions concerning IADLs and ADLs are designed to capture difficulties with particular actions, regardless of which particular health or memory conditions cause these difficulties.
We coded individuals who responded that, as a result of a health or memory problem, they had difficulty doing a given activity, or could not or did not do the activity, as having a likely need for services. For respondents who reported difficulty with one or more IADLs or ADLs, we examined whether they received help with each identified activity. To identify differences in the extent to which older adults received help from any source, including Title III programs, we calculated the difference between the number of IADL and ADL difficulties each respondent had and the number of identified difficulties for which they received assistance. However, the available data did not allow us to identify whether the assistance an individual received for each identified IADL or ADL was adequate to address their difficulties. HRS data did not allow us to make state-level estimates because the survey is not designed to be representative at the state level. To estimate the number of older adults likely to need transportation services like those provided by Title III programs, we examined HRS questions on driving and car access. We coded older adults who said they could not or did not drive, and individuals who said they could drive but lacked access to a car, as likely to need transportation services, unless such services were available through an individual's assisted living facility. The available data did not allow us to factor public transportation use or spouses' driving abilities into our estimate of likely need for transportation services. Our estimates related to transportation are restricted to individuals age 65 and above, because younger HRS respondents were not asked about their driving capabilities. To identify factors associated with likely need for home-based care services and likely need for transportation services, we used descriptive statistics and multiple logistic regression analyses.
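The two coding rules described above, counting a respondent's unassisted difficulties and flagging likely transportation need, can be sketched as follows; the function and field names are hypothetical, not actual HRS variable names:

```python
# Illustrative versions of the coding rules: (1) unassisted difficulties
# are the number of reported IADL/ADL difficulties minus the number for
# which any help was received; (2) likely transportation need is coded
# when a person cannot or does not drive, or drives but lacks car access,
# unless their assisted living facility provides transportation.

def unassisted_difficulties(difficulties, helped):
    """Count of reported difficulties for which no help was received."""
    return len(difficulties) - len(difficulties & helped)

def likely_needs_transportation(drives, has_car_access, facility_provides_transport):
    """True when the coding rule flags likely transportation need."""
    if facility_provides_transport:
        return False
    return (not drives) or (not has_car_access)

# Hypothetical respondent: three difficulties, help with one of them.
print(unassisted_difficulties({"bathing", "cooking", "shopping"}, {"cooking"}))  # 2
print(likely_needs_transportation(drives=False, has_car_access=True,
                                  facility_provides_transport=False))  # True
```

As noted above, this rule cannot reflect public transportation use or a spouse's driving, so it is an upper-bound style indicator of likely need.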
We estimated the prevalence of IADLs and ADLs, and the extent and nature of help received, across different demographic characteristics such as race, age, sex, education, and homeownership, and whether an individual received Medicaid. These cross-tabulations reveal differences in the proportion of individuals likely to need home-based services across demographic groups, but do not control for other factors that might also relate to likely need. Therefore, we next estimated logistic regression models to predict which factors were associated with having one or more reported IADLs or ADLs, controlling for other characteristics. We also estimated logistic regression models to examine, among those individuals with one or more IADLs or ADLs, what factors were associated with a failure to receive assistance for any one of those IADLs or ADLs, controlling for other factors. Similarly, for transportation services, we began by examining the relationship between being likely to need services and individual demographic factors. We also used logistic regression analysis to predict, controlling for other factors, which characteristics were associated with likely need for transportation services. Unlike our analysis related to meals services and home-based care, we were not able to estimate the number of older adults likely to need transportation services who were and were not receiving such services, because such data were not available. For each of our logistic regression models, we tested various model specifications to assess the model fit and stability of our estimates. Nevertheless, our logistic regression models could not control for all variables potentially related to each variable of interest, such as whether an individual had access to public transportation. To the extent omitted but relevant variables are correlated with those factors that were incorporated into our models, the estimates we present are subject to potential bias.
To determine agencies' use of federal funds, including American Recovery and Reinvestment Act (Recovery Act) funds, we conducted a Web-based national random sample survey of 125 local agencies. The survey included questions about: (1) utilization of OAA Title III services, (2) requests for OAA Title III services, (3) approaches to target resources to areas of greatest need, (4) use of OAA Title III funds, and (5) the economic climate and use of Recovery Act funds. We drew a simple random sample of 125 agencies from a population of 638 agencies. This included all 629 local agencies that operate in the 50 states and District of Columbia, as well as 9 state units on aging (state agencies) in states that do not have local agencies. We included these nine state agencies in our pool for sample selection because the state units on aging perform the function of local agencies in those states. We conducted four pretests to help ensure that survey questions were clear, terminology was used correctly, the information could be obtained, and the survey was unbiased. Agencies were selected for pretesting to ensure we had a group of agencies with varying operating structures, budget sizes, and geographic regions of the country. As a result of our pretests, we revised survey questions as appropriate. In June 2010, we notified the 125 local agencies that were selected to complete our survey and, beginning July 1, 2010, e-mailed these agencies a link to the Web survey. We sent e-mail reminders and conducted follow-up calls to increase the response rate. Ninety-nine local agencies responded to our survey, resulting in a response rate of 79 percent. Some individual questions have lower response rates. The survey percentages in this report are subject to margins of error of no more than plus or minus 12 percentage points at the 95 percent confidence level.
Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 12 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Instances where the margin of error falls outside of the overall rate are footnoted throughout the report. The practical difficulties of conducting any survey may introduce nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaire to minimize such nonsampling error. The results of this survey are generalizable to the 629 local agencies in the United States. In addition to our survey, to determine agencies' use of funds we analyzed AoA State Program Report data from fiscal years 2000 through 2008 available on the agency's Web site and provided by AoA officials. We assessed the reliability of these data by interviewing AoA officials, assessing officials' responses to a set of standard data reliability questions, and reviewing internal documents used to edit and check data submitted by states. We determined the data were sufficiently reliable for purposes of this review. To determine how agencies measure receipt of services, need, and unmet need, we also reviewed guidance distributed by AoA and the National Association of States United for Aging and Disabilities (NASUAD) on creating state aging plans and measuring receipt of services, need, and unmet need.
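As a rough illustration, a 95 percent margin of error for a survey percentage can be approximated under simple random sampling as 1.96 times the standard error of a proportion. This is a sketch under that assumption, not GAO's computation; the report's stated margins may also reflect design adjustments such as a finite population correction:

```python
import math

# Illustrative 95 percent margin of error for an estimated proportion p
# from n respondents, using the normal approximation for a simple
# random sample: 1.96 * sqrt(p * (1 - p) / n).

def margin_of_error_95(p, n):
    """p: estimated proportion (0-1); n: number of respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Worst-case proportion (p = 0.5) with the 99 survey respondents:
moe = margin_of_error_95(0.5, 99)
print(round(100 * moe, 1))  # margin in percentage points: 9.8
```

The margin is largest at p = 0.5 and shrinks for proportions nearer 0 or 1, which is why a single conservative figure (here, "no more than plus or minus 12 percentage points") can cover many questions, including those with lower item response rates.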
We then analyzed the most recently available state aging plan for the 50 states and the District of Columbia, as of spring 2010. Each state is required to submit a state aging plan to AoA for review and approval covering a 2-, 3-, or 4-year period. The aging plan should include state long-term care reform efforts with an emphasis on home- and community-based services, strategies the state employs to address the growing number of older adults, and priorities, innovations, and progress the state seeks to achieve in addressing the challenges posed by an aging society. We also reviewed selected states' needs assessments. To determine state and local agencies' use of funds and how agencies measure need and unmet need, we reviewed relevant statutory provisions and interviewed state, local, and AoA officials. In March 2010, we visited Illinois, Massachusetts, Rhode Island, and Wisconsin. These states were selected for their varying sizes of the population age 60 and over and varying Title III expenditures. Additionally, we considered geographic region, proximity to AoA regional support centers, and a desire to interview at least one state without local agencies (Rhode Island). Using the same selection criteria, we conducted semi-structured interviews with an additional 10 state agencies in late September and early October 2010: Arizona, California, Florida, Georgia, Indiana, Montana, Nevada, New Jersey, Oregon, and Tennessee. During these interviews, we discussed the types of information states collect on need, their ability to measure need and guidance used to do so, their ability to meet identified needs, the transfer of Title III funds, and use of Recovery Act funds, among other topics. Table 9 shows the percentages of low-income older adults with different characteristics who had received home-delivered meals, congregate meals, or either home-delivered or congregate meals in the 30 days prior to completing the survey. Additional information can be found in appendix III.
In our analyses of factors related to likely need and receipt of meals, we used data from the 2008 Current Population Survey (CPS) and focused on the population 60 and older (or, in about 9 percent of the cases, on their younger spouses or household members with disabilities) who were in households with incomes below 185 percent of the poverty threshold. Our results are not generalizable to older adults with higher incomes. The income restriction was necessary because the questions related to participation in the two meals programs of interest were not asked of all respondents in the CPS, and the only group that was completely sampled and asked those questions was respondents in households with incomes below 185 percent of the poverty threshold. While the exclusion of others with higher incomes from our study is unfortunate, the sample we are using does represent the large majority of people who were food insecure, and decreasing food insecurity is a key goal of both meal programs. While roughly 19 percent of the individuals in households with incomes below 185 percent of the poverty threshold were food insecure, using the U.S. Department of Agriculture's (USDA) measure of food insecurity, only 4 percent of the individuals in households with incomes above 185 percent of the poverty threshold were food insecure. Other indicators of likely need such as difficulties with daily activities and limited social interaction were also more prevalent among the low-income population than among those with household incomes above 185 percent of the poverty threshold. For additional information about our methodology, see appendix I. Table 10 shows the characteristics of the population represented by our sample. Just over 4 percent of the population had received a home-delivered meal in the past 30 days, 5.5 percent had received a congregate meal, and nearly 9 percent had received either one or the other.
These percentages are far lower than the percentage of individuals in the population who were in food insecure households, which comprised nearly 19 percent of the population. Table 10 also shows that roughly one-third of eligible low-income individuals had at least one type of functional impairment (i.e., difficulty with daily activities), and 17 percent had two or more types of impairments. When we measured social isolation rather crudely, by contrasting individuals for whom no group memberships were reported with individuals who belonged to at least one group, we found that more than half of this elderly subpopulation for whom isolation could be measured were somewhat isolated. We also found that 13 percent of the individuals in this group of low-income seniors had received food stamps in the past year. With respect to demographic characteristics, 91 percent of the population was over 60 (and 21 percent were over 80), 61 percent were female, 19 percent were non-white, and 13 percent were Hispanic. Slightly less than half were married, 29 percent were widowed, and 25 percent were in the “other” marital status category, which includes divorced individuals and individuals who were never married. More than half of this group had incomes below $20,000. More than one-third were living alone, and fewer than one in four were living in households with more than two persons. Nearly three-fourths of this largely elderly subpopulation had a high school education or less, and only 16 percent were still employed. Nearly three-fourths of the persons in low-income households were living in homes that were owned, and more than three-fourths were living in metropolitan areas. Roughly one in five were from the Northeastern United States, and similar percentages were living in Midwestern and Western states. The remaining two-fifths were from the South.
Table 11 shows how food insecurity varied across different subgroups in these older adult low-income households and how the percentages receiving home-delivered meals, congregate meals, or either home-delivered or congregate meals varied across subgroups. Clearly, food insecurity was a decidedly greater problem for some groups than others. The first two columns of numbers in table 11 show the percentages of individuals with various characteristics who were food insecure, and the margins of error associated with those percentages. They reveal that: Persons with impairments were more likely to be food insecure than persons without impairments; i.e., the percentage of food insecure individuals was nearly twice as high for those with multiple impairments (29 percent) as for those with none (15 percent). Food insecurity did not vary by level of social isolation. Individuals who had received food stamps over the past year were nearly 2.5 times more likely than individuals who had not received food stamps to be food insecure (43 percent vs. 15 percent). Older individuals were less likely to be food insecure than younger ones, though there was little difference in the food insecurity of men and women. Larger percentages of individuals from minority groups than white individuals were food insecure, and Hispanic individuals were more likely to be food insecure than non-Hispanics. Food insecurity was also more prevalent in larger households (with two or more persons) and among individuals who had less than a high school diploma, had disabilities related to work, or were in rented homes. Food insecurity was only slightly higher in metropolitan areas relative to non-metropolitan areas, and slightly higher in the South than in other regions of the country. The other columns of table 11 show the percentages receiving home-delivered meals, congregate meals, or either home-delivered or congregate meals in the last 30 days.
The percentage in each subgroup who received either type of meal is nearly always smaller than the sum of the percentages who received home-delivered meals and congregate meals, since some individuals had received both home-delivered and congregate meals. With respect to home-delivered meals, we found that: Food insecurity, having impairments, being more isolated, and receiving food stamps were all strongly and positively associated with whether individuals received home-delivered meals. Because of the pronounced effect of food insecurity on the receipt of home-delivered meals, the differences across demographic groups in the percentage of persons who received home-delivered meals track (or co-vary) in most cases with the percentages of the different demographic groups that are food insecure. The percentages receiving home-delivered meals were higher for widowed and other non-married individuals, individuals with household incomes less than $10,000, individuals with less than a high school education, and individuals who were retired or could not work due to disability. The major exception to this pattern involves age. While the younger categories of individuals in this group had higher percentages of food insecure individuals, smaller percentages of the individuals in the younger categories than in the older categories received home-delivered meals. With respect to congregate meals, we found that: Food insecurity, having impairments, being more isolated, and receiving food stamps all had little or no association with whether individuals received congregate meals in the last 30 days.
The demographic characteristics that appear to be most strongly related to whether people received congregate meals were age (people 70 and older were decidedly more likely to receive them than people under 70), marital status (non-married individuals were more likely than married individuals to receive them), and household size (people living alone were more likely than others to receive congregate meals). Also, people who were retired or had a disability that related to work were more likely to receive congregate meals than those who were employed. The first column of numbers in table 12 simply reproduces the percentages of individuals in each group who had received home-delivered meals, which were shown in table 11. Taking the first percentages as an example, these imply that 3.3 out of every 100 individuals in food secure households received a home-delivered meal, 7.4 out of every 100 individuals in food insecure households received a home-delivered meal, and so on. The odds in the next column of the table can be calculated from these percentages by taking, for example, the percentage of food secure individuals who received a home-delivered meal (3.3 percent) and dividing it by the implied percentage of food secure individuals who did not (100 – 3.3 = 96.7) to obtain 3.3/96.7 = 0.034. In this case, the odds imply that 0.034 food secure individuals received a home-delivered meal for every 1 that did not, or that 3.4 food secure individuals received a home-delivered meal for every 100 that did not. The odds of 0.080 (after rounding) for food insecure individuals imply that, among them, 0.08 received home-delivered meals for every 1 that did not, or that 8 food insecure individuals received a home-delivered meal for every 100 that did not. By taking ratios of the odds for different subgroups, or odds ratios, we can get a simple and straightforward estimate of the differences between groups in the odds of having received a home-delivered meal.
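The odds and odds-ratio arithmetic above can be reproduced directly from the table 12 percentages:

```python
# Reproducing the worked example from the text: 3.3 percent of food
# secure and 7.4 percent of food insecure individuals received a
# home-delivered meal. Odds = p / (100 - p); the odds ratio is the
# ratio of the two groups' odds.

def odds(pct):
    """Odds corresponding to a percentage: p / (100 - p)."""
    return pct / (100.0 - pct)

odds_secure = odds(3.3)      # 3.3 / 96.7, about 0.034
odds_insecure = odds(7.4)    # 7.4 / 92.6, about 0.080
odds_ratio = odds_insecure / odds_secure

print(round(odds_secure, 3), round(odds_insecure, 3), round(odds_ratio, 1))
```

Because percentages in this range are small, odds closely track the percentages themselves; the two measures diverge only as percentages grow large.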
In the third column we see, for example, that food insecure individuals had higher odds of receiving a home-delivered meal than individuals in food secure households, by a factor of 0.080/0.034 = 2.3. When multiple categories are to be compared, as in the case of individuals with none, one, or two or more types of impairments, we choose one category as the referent category and take ratios of the other categories relative to that one. In that case, we find that individuals with one type of impairment had higher odds of receiving home-delivered meals than individuals with none, by a factor of 1.6, and that individuals with two or more types of impairments had higher odds of receiving home-delivered meals than individuals with none, by a factor of 5.5. The asterisks beside the unadjusted odds ratios indicate which odds ratios, and the group differences they estimate, are statistically significant; that is, they reflect real differences rather than differences due to sampling fluctuations. These are of interest, and where significant, they reflect genuine differences between groups (e.g., people who are more socially isolated have odds of receiving home-delivered meals that are more than twice as high as those who are less socially isolated). However, they are somewhat limited in the sense that they are derived by considering each factor's association with receiving home-delivered meals one at a time, ignoring the fact that each of the factors may be related to other factors which, in turn, may be related to having received home-delivered meals. To derive “adjusted” odds ratios, we used multivariate models. Specifically, we used logistic regression models in this study, since the outcomes of interest (receiving or not receiving home-delivered meals in this table, and congregate meals in the next) are both dichotomous.
The odds ratios from these models, given in the final column of the table, estimate the group differences related to each factor in the likelihood of receiving home-delivered meals after we take account of the effects of the other factors, rather than before, or while ignoring them. What we find with respect to home-delivered meals when we consider the adjusted or net effect estimates of each factor (or the adjusted odds ratios in the table) is that: Food insecurity, having multiple types of impairments, and being socially isolated are significantly related to receiving home-delivered meals, while receiving food stamps is unrelated to whether individuals received home-delivered meals. The odds that food insecure individuals received home-delivered meals are nearly twice the odds for food secure individuals, and more isolated individuals have odds nearly twice as high as less isolated individuals of receiving home-delivered meals. Impairments have an even larger effect. People with multiple types of impairments are much more likely than those with none to receive home-delivered meals, with odds more than three times higher. The demographic variables that have significant effects are age, household size, employment status, and home ownership. Individuals over 80 are more likely than individuals under 60 to receive home-delivered meals, with odds higher by a factor of nearly two. By implication, they are also more likely to receive them than people 60-69, their odds being greater by a factor of 2.02/1.26 = 1.60, apart from rounding. Individuals in two-person households and in households with three or more persons were less likely to receive home-delivered meals than persons living alone, by a factor of roughly 0.6 in both cases.
Also, individuals who were unemployed because of disabilities had odds of receiving home-delivered meals nearly two times higher than employed individuals, and individuals who did not own their homes had odds of receiving home-delivered meals about 1.5 times higher than those who did. When we considered congregate meals, we found that: Food insecurity, number of impairments, social isolation, and the receipt of food stamps were all unrelated to having received congregate meals. A number of the demographic variables are, however, associated with whether individuals had received congregate meals. The odds of having done so were more than twice as high for individuals over 70 than for those under 60 (and by implication about 1.5 to 2 times as high for individuals over 70 as for those 60 to 69). The odds that African American older adults and other older adults from minority groups received congregate meals were about 1.5 times higher than for white older adults, and Hispanic older adults had similarly higher odds than non-Hispanic older adults of receiving congregate meals (i.e., 1.0/0.65 = 1.5, apart from rounding). People who were not living alone were less likely to have received congregate meals (the odds were smaller by a factor of 0.7 for those in two-person households and a factor of 0.4 for those in households with three or more persons). Persons who were not employed were more likely to have received congregate meals than persons who were employed. Finally, people in non-metropolitan regions were more likely to receive congregate meals than people in metropolitan regions (with odds higher by a factor of 1.6), and people in the Midwest and West were more likely than people in the Northeast (and, by implication, the South) to have received a congregate meal.
To examine factors associated with likely need for and receipt of home-based care, we used data from the 2008 HRS to identify older adults age 60 and above who reported that they had difficulty doing specific activities as a result of a health or memory problem. The specific activities included IADLs, for which Title III programs provide assistance through homemaker and chore care, as well as ADLs, for which Title III programs provide personal care services. We assumed that older adults with one or more IADL or ADL restrictions have a likely need for home-based care, and we examined the likelihood that an older adult with one or more IADL or ADL difficulties failed to receive any help with those restrictions. Our analysis did not consider the sufficiency of help received; that is, among those who received help for a given difficulty, whether they received sufficient help for that difficulty. Table 14 shows the estimated proportion of older adults within different demographic groups reporting one or more IADL or ADL difficulties, the odds that older adults with a specific characteristic report one or more difficulties (that is, the percent reporting one or more difficulties divided by the percent not reporting any difficulties), and the comparative odds for older adults with different demographic characteristics relative to a reference group. Table 14 illustrates notable demographic differences in the proportion of older adults reporting one or more IADL or ADL difficulties. The proportion of older adults with at least one IADL or ADL difficulty increased dramatically with age: while an estimated 22 percent of older adults age 60 through 69 reported one or more IADL or ADL restrictions, an estimated 29 percent of those ages 70 through 79, and an estimated 53 percent of those aged 80 and above, reported such difficulties.
We found modest differences among racial and ethnic groups in the proportion reporting one or more IADL or ADL difficulties, with fewer white older adults estimated to have difficulties (29 percent) compared to African American older adults (35 percent), and more Hispanic older adults than non-Hispanic older adults estimated to have difficulties (37 percent compared to 29 percent). The proportion of older adults reporting IADL or ADL difficulties also varied by income, with fewer individuals living in families above 185 percent of the poverty threshold reporting restrictions (26 percent) compared to an estimated 42 to 44 percent of those with lower incomes. The proportion of older adults estimated to have one or more IADL or ADL difficulties also varied by homeownership status, with an estimated 26 percent of homeowners and an estimated 45 percent of non-homeowners reporting one or more IADL or ADL difficulties. A substantially larger proportion of older adults with low levels of education reported IADL or ADL difficulties than those with higher levels of education: an estimated 46 percent of those with less than a high school education reported difficulties, compared to 29 percent of those with high school degrees or equivalents and 20 percent of those with a college degree or more. Medicaid recipients were also more likely to report difficulties, with 54 percent of recipients, compared to 27 percent of non-recipients, reporting IADL or ADL difficulties. There was little difference in the estimated proportion of older adults reporting IADL or ADL difficulties between men and women, between those living alone and those living with others, and between those who had children living within 10 miles and those who did not. We used logistic regression analysis to predict which factors were associated with reporting one or more IADL or ADL difficulties, after controlling for other factors.
These "adjusted odds," showing the comparative odds of having a difficulty among older adults with different characteristics, are given in the final column of table 14. Notably: After controlling for other factors, age appeared to have among the most pronounced effects on whether an older adult reported having one or more IADL or ADL difficulties: the odds that an adult age 80 or above reported one or more difficulties were approximately three times higher than those for an adult age 60 through 69. After controlling for other factors, race was not significantly related to the likelihood of reporting one or more difficulties, though Hispanic older adults had lower odds of having one or more IADL or ADL difficulties than non-Hispanic older adults. Income remained a significant predictor of the likelihood of reporting an IADL or ADL difficulty, with the odds of reporting a difficulty approximately 25 percent to 40 percent higher for those in families making less than 185 percent of the poverty threshold compared to those with higher incomes. Similarly, non-homeowners had higher odds of reporting one or more difficulties than homeowners, by a factor of 1.6. In contrast, older adults living alone had lower odds of reporting one or more difficulties; the odds were approximately 35 percent lower than for those living with others, after controlling for other factors. With respect to education, compared to older adults without a high school degree, older adults with higher levels of education had significantly lower odds of reporting one or more difficulties. In contrast, retired older adults and those otherwise not employed had notably higher odds of reporting one or more difficulties, after controlling for other factors. After controlling for other factors, Medicaid recipients were more likely than non-recipients to report one or more IADL or ADL difficulties.
However, there were not statistically significant differences across those with and without children living nearby in the odds of having one or more difficulties, after controlling for other factors. When we limited our analysis to older adults reporting one or more IADL or ADL difficulties, we also found demographic differences in the likelihood that older adults did or did not receive any assistance. Table 15 illustrates the risk that an older adult with one or more restrictions did not receive any help with their difficulties, and shows important demographic differences in the estimated proportion of older adults with difficulties that did not receive help. These older adults are potential candidates for home-based care assistance. Our analysis could not determine whether older adults that received some help with difficulties received sufficient assistance. The proportion of older adults that failed to receive any assistance with any reported difficulties declined with age. For example, an estimated 34 percent of older adults ages 80 and above did not receive any assistance with difficulties, compared to an estimated 55 percent of older adults ages 60 through 69. Women were less likely than men to report that they did not receive any assistance (39 percent compared to 57 percent). Compared to married individuals, widowed older adults were less likely to say that they received no assistance (an estimated 54 percent of married older adults, and 36 percent of widowed older adults, did not receive assistance). A greater proportion of white older adults was estimated not to receive assistance (49 percent) compared to African American older adults (35 percent). Older adults in families with higher incomes were more likely to fail to get any assistance than those living in families at or below the poverty threshold. 
A greater proportion of those in families with incomes exceeding 185 percent of the poverty threshold than of those with lower incomes did not get any assistance: an estimated 52 percent of those living in families with incomes exceeding 185 percent of the poverty threshold reported not receiving assistance, compared to 33 percent of those in families with incomes below the poverty threshold and 41 percent of those in families above the poverty threshold through 185 percent of the poverty threshold. Homeowners were more likely to report not receiving assistance than non-homeowners (53 percent compared to 33 percent). Education was inversely related to the receipt of assistance: among older adults with college degrees or higher, an estimated 61 percent went without any assistance, compared to an estimated 39 percent among those with less than a high school degree. Similarly, a much higher proportion of older adults currently employed reported not receiving any assistance (an estimated 80 percent) compared to retired or otherwise not employed older adults (46 percent and 34 percent, respectively). In addition to being more likely to report having one or more IADL or ADL difficulties, Medicaid recipients were more likely to receive at least some assistance: an estimated 27 percent of Medicaid recipients with difficulties, compared to an estimated 51 percent of non-recipients with difficulties, went without any assistance. Logistic regression analysis revealed that, after adjusting for other characteristics, several of the factors significantly associated with whether an older adult with difficulties received or did not receive assistance were similar to those associated with whether an older adult reported having one or more IADL or ADL difficulties. For example, after controlling for other factors, the odds that an older adult age 80 or above went without assistance were nearly half of those for an older adult age 60 through 69.
Compared to older adults who were active in the workforce, older adults who were not employed (either retired or otherwise not working) were far less likely to go without assistance, with odds approximately 70 to 75 percent lower than those for employed older adults. Women were substantially less likely than men to go without assistance (odds ratio of 0.62), and Medicaid recipients were half as likely as non-recipients to go without assistance (odds ratio of 0.50). The odds that African American older adults with difficulties went without assistance were lower than those for white older adults, by approximately 30 percent, whereas the odds that Hispanic older adults went without assistance were somewhat higher than those for non-Hispanic older adults, by approximately 35 percent. While older adults living alone had notably higher odds of going without assistance than those living with others (odds ratio of 1.8), there was not a statistically significant difference between those with children living nearby and those without. To assess the number of older adults likely to need transportation services, we used data from the 2008 HRS to identify older adults over age 65 who reported that they could not drive, could drive but lacked access to a car, or did not have access to transportation services through their living facility. By this definition, an estimated 21 percent of older adults age 65 and over were likely to need transportation services. This estimate does not account for the fact that some older adults likely to need services may obtain transportation from other sources, such as through a spouse, friends, or public transportation. When we considered the likely need for transportation services among respondents with different characteristics, we found that many demographic factors were associated with an increased likelihood of needing services.
Table 16 presents the estimated percentages of older adults within different demographic groups who were likely to need transportation services. For example: Age and sex were related to likely need for transportation services. An estimated 41 percent of those age 80 and above were likely to need transportation services, compared to just 12 percent of those ages 65 through 69. A much larger proportion of women than men were likely to need transportation services (an estimated 29 percent compared to 12 percent). Likely need for transportation services also varied by race and ethnicity. Prior to controlling for other factors, approximately twice as many African American older adults as white older adults had a likely need for transportation services, with an estimated 39 percent of African Americans likely to need transportation services, compared to 20 percent of white older adults. Among Hispanic older adults, an estimated 46 percent were likely to need transportation services, compared to 20 percent of non-Hispanic older adults. Likely need for transportation services was higher among those with lower incomes and lower net wealth as measured by homeownership. An estimated 53 percent of older adults living in families below the poverty threshold were likely to need transportation services, compared to an estimated 16 percent of those living in families with incomes exceeding 185 percent of the poverty threshold. Compared to non-homeowners, a much smaller proportion of homeowners were likely to need services: an estimated 15 percent of homeowners, compared to 45 percent of non-homeowners, were likely to need services. Older adults with higher levels of education were less likely to need transportation services than older adults with a high school degree or less.
An estimated 40 percent of those with less than a high school degree, and an estimated 20 percent of those with high school degrees or equivalents, were likely to need transportation services, compared to just 10 percent of those with college degrees or above. Prior to controlling for other factors, older adults who lived alone were slightly more likely to need transportation services than those who lived with other people (an estimated 25 percent compared to 20 percent). Additionally, an estimated 35 percent of widowed older adults and an estimated 22 percent of older adults in other marital status categories (never married, separated, divorced, or unknown) were likely to need transportation services, compared to an estimated 14 percent of married older adults. Likely need for transportation services also varied by health-related factors: a greater proportion of respondents with sight, health, depression, and mobility problems were likely to need services when compared to their counterparts without such problems. Additionally, an estimated 54 percent of Medicaid recipients were likely to need transportation services, compared to just 18 percent of older adults who did not receive Medicaid. The odds of being likely to need transportation services for each demographic category are defined as the proportion of the group in likely need divided by the proportion of the group not likely to need services. Odds ratios provide a comparative measure of how the likely need for transportation services varies across demographic variables. For example, among adults ages 65 through 69, an estimated 11.6 percent of older adults are likely to need transportation services, and an estimated 88.4 percent are not. The odds that an adult age 65 through 69 is likely to need services are thus 11.6 to 88.4, or 0.13.
In comparison, the odds that an older adult age 80 or above is likely to need transportation services are 41.0 to 59.0, or 0.69. The unadjusted odds ratio comparing the two groups (0.69 to 0.13, or 5.3) shows that, prior to controlling for other factors, older adults age 80 and above have more than five times the odds of being in likely need of transportation services compared to their counterparts ages 65 through 69. The penultimate column of table 16 shows unadjusted odds ratios for different groups of older adults compared to a reference group within each variable, prior to controlling for other factors. The final column of table 16 presents "adjusted" estimates of these comparative odds ratios. These adjusted estimates are derived from logistic regression analysis and show the comparative odds after controlling for other variables that also influence whether an older adult is likely to need transportation services. Asterisks indicate that the estimated odds ratios are significant at the 95 percent confidence level. Table 16 illustrates that, even after controlling for other factors, certain groups are significantly more likely than others to need transportation services. For example, age, sex, race, and ethnicity are all significantly related to the odds of having a likely need for transportation services. Older individuals, women, African American older adults, and Hispanic older adults had higher odds of likely need than younger older adults, men, white older adults, and non-Hispanic older adults, respectively. However, after controlling for other factors, the odds of being likely to need transportation services among the "other" race category were not significantly higher than those for white older adults.
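The odds computation just described is straightforward arithmetic; a brief sketch, using the percentages from the text (11.6 percent for ages 65 through 69, 41.0 percent for age 80 and above), is:

```python
def odds_from_percent(pct_in_need):
    """Odds of likely need: percent in need over percent not in need."""
    return pct_in_need / (100.0 - pct_in_need)

odds_65_to_69 = odds_from_percent(11.6)   # 11.6 / 88.4, about 0.13
odds_80_plus = odds_from_percent(41.0)    # 41.0 / 59.0, about 0.69

# Unadjusted odds ratio comparing the two age groups, about 5.3.
unadjusted_or = odds_80_plus / odds_65_to_69
```

The same division yields every unadjusted odds ratio in the penultimate column of table 16.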
Older adults with low incomes and low assets (as measured by non-homeownership) had significantly higher odds of being likely to need transportation services than older adults with higher incomes and homeowners, even after controlling for other factors. After controlling for other factors, several health-related factors, including poor sight, poor overall health, and limited mobility, were still significantly associated with differential likelihood of needing transportation services, though the magnitude of the differences in relative odds was reduced. Additionally, after controlling for other factors, there was no statistical difference in likely need for transportation services between those who were and were not depressed. Appendix VI: Responses to Questions from GAO Survey of Area Agencies on Aging (Local Agencies) We distributed a Web-based survey to a random national sample of 125 area agencies on aging (local agencies) to obtain officials' views on the use of Older Americans Act (OAA) Title III funds, among other topics. We received completed surveys from 99 of 125 local agencies, for a response rate of 79 percent. Figures 5 through 17 show responses to selected questions from the survey, which are generalizable to the 629 local agencies in the United States and were discussed in the body of the report. The percentages in this report are generally subject to margins of error of no more than plus or minus 12 percentage points at the 95 percent confidence level. Instances where the margin of error falls outside of this range are indicated. For more information about our methodology for designing and distributing the survey, see appendix I. In addition to the contact person named above, Kimberley M. Granger-Heath, Assistant Director; Ramona Burton, Analyst-in-Charge; Jameal Addison; James Bennett; David Chrisinger; Andrea Dawson; Nancy J.
Donovan; Gregory Dybalski; Justin Fisher; Gene Kuehneman; Luann Moy; Grant Mallie; Ruben Montes de Oca; Anna Maria Ortiz; Douglas Sloane; Barbara Steel-Lowney; Craig Winslow; and Amber Yancey-Carroll made key contributions to this report. Lise Levie, Ben Pfeiffer, Beverly Ross, Jeff Tessin and Monique Williams verified our findings.
The Older Americans Act (OAA) was enacted to help older adults remain in their homes and communities. In fiscal year 2008, about 5 percent of the nation's adults 60 and over received key aging services through Title III of the OAA, including meals and home-based care. In fiscal year 2010, states received $1.4 billion to fund Title III programs. Studies project large increases in the number of adults who will be eligible for services in the future, as well as likely government budget constraints. In advance of program reauthorization scheduled for 2011, GAO was asked to determine: (1) what is known about the need for home- and community-based services like those funded by OAA and the potential unmet need for these services; (2) how agencies have used their funds, including Recovery Act funds, to meet program objectives; and (3) how government and local agencies have measured need and unmet need. To do this, GAO analyzed national self-reported data; surveyed a random sample of 125 local agencies; reviewed agency documents; and spoke with officials from the Administration on Aging (AoA) and state and local agencies. National data show many older adults likely needed meals or home-based care in 2008, but they did not all receive assistance from Title III programs or other sources, like Medicaid. For instance, while about 9 percent of low-income older adults received meals services, many more were likely to need them due to financial or other difficulties obtaining food. Also, while most older adults who were likely to need home-based care because of difficulties with activities such as walking or bathing received at least some help completing such tasks, many received limited help and some did not receive any. Finally, an estimated 21 percent of people age 65 and older were likely to need transportation services due to their inability to drive or lack of access to a vehicle. Some aspects of need and receipt could not be captured with existing data.
For example, GAO could not identify whether the meals and home-based care older adults received were adequate, or estimate the number of individuals with transportation needs who did and did not receive such services. Many agencies use the flexibility afforded by the OAA to transfer funds among programs and use funds from multiple sources to provide services in their communities. State agencies annually transferred an average of $67 million from congregate meals to home-delivered meals and support services over the past 9 years. Agencies also use funds from other sources, such as Medicaid, state and local governments, and client contributions, to fund Title III services for clients. While client donations are common, formal arrangements with clients to pay a portion of the cost of services are limited. Such payments by individuals with higher incomes could help defray the costs of serving others as the demand for services increases in the future. The recent economic downturn affected agency resources and funding, with about 47 percent of local agencies reporting budget reductions in fiscal year 2010. To cope, many agencies cut administrative and operational costs, and some reduced services. The Recovery Act temporarily replaced some lost funding by providing $97 million for meals, but that funding ended in 2010. GAO spoke to 10 state agencies about how they will adjust to lost Recovery Act dollars and found that 5 plan to cut services, 2 reserved funds from other sources, 2 are not sure how they will adjust, and 1 will maintain services. The OAA requires AoA to design and implement uniform data collection procedures for states to assess the receipt of, need for, and unmet need for Title III services. While AoA provides uniform procedures for measuring receipt of services, it does not provide standardized definitions or measurement procedures for need and unmet need that all states are required to use.
Within this context, states use a variety of approaches to measure need, and they measure unmet need to varying extents. No agency that GAO spoke with fully estimates the number of older adults with need and unmet need. AoA and state agency officials noted that there are various challenges to collecting more information, such as cost and complexity. However, as a result of limited and inconsistent information, AoA is unable to assess the full extent of need and unmet need nationally and within each state. GAO recommends that the Department of Health and Human Services study the effectiveness of cost-sharing and of definitions and measurement procedures for need and unmet need. The agency said it would explore options for implementing the recommendations.
Ten states concentrated in the western, midwestern, and southeastern United States, all areas where the housing market had experienced strong growth in the prior decade, experienced 10 or more bank failures between 2008 and 2011 (see fig. 1). Together, failures in these 10 states accounted for 72 percent (298) of the 414 bank failures across all states during this time period. Within these 10 states, 86 percent (257) of the failed banks were small institutions with assets of less than $1 billion at the time of failure, and 52 percent (155) had assets of less than $250 million. Twelve percent (36) were medium-size banks with more than $1 billion but less than $10 billion in assets, and 2 percent (5) were large banks with assets of more than $10 billion at the time of failure. In the 10 states with 10 or more failures between 2008 and 2011, failures of small and medium-size banks were largely associated with high concentrations of commercial real estate (CRE) loans, in particular the subset of acquisition, development, and construction (ADC) loans, and with inadequate management of the risks associated with these high concentrations. Our analysis of call report data found that CRE (including ADC) lending increased significantly in the years prior to the housing market downturn at the 258 small banks that failed between 2008 and 2011. This rapid growth of failed banks' CRE portfolios resulted in concentrations (that is, the ratio of total CRE loans to total risk-based capital) that exceeded regulatory thresholds for heightened scrutiny established in 2006 and increased the banks' exposure to the sustained downturn that began in 2007. Specifically, we found that CRE concentrations grew from 333 percent in December 2001 to 535 percent in June 2008. At the same time, ADC concentrations grew from 104 percent to 259 percent. The trends for the 36 failed medium-size banks were similar over this time period.
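The concentration measure described above is a simple ratio of loans to capital. The sketch below uses hypothetical balance-sheet figures (the $50 million capital base and the loan totals are our illustrative assumptions, not data from the report) to show how the reported percentages are computed.

```python
def concentration_pct(loans, total_risk_based_capital):
    """Concentration: a loan category as a percentage of total risk-based capital."""
    return 100.0 * loans / total_risk_based_capital

# Hypothetical small bank with $50 million in total risk-based capital
# whose CRE book grows from $166.5 million to $267.5 million.
capital = 50.0  # $ millions (assumed)

cre_concentration_2001 = concentration_pct(166.5, capital)  # 333 percent
cre_concentration_2008 = concentration_pct(267.5, capital)  # 535 percent
```

A concentration above 100 percent simply means the loan category exceeds the bank's total risk-based capital, which is why multiples of several hundred percent signal heightened exposure.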
In contrast, small and medium-sized banks that did not fail exhibited substantially lower levels and markedly slower growth rates of CRE loans and as a result had significantly lower concentrations of them, reducing the banks’ exposure. With the onset of the financial crisis, the level of nonperforming loans began to rise, as did the level of subsequent net charge-offs, leading to a decline in net interest income and regulatory capital. The rising level of nonperforming loans, particularly ADC loans, appears to have been the key factor in the failures of small and medium-size banks in the 10 states between 2008 and 2011. For example, in December 2001, 2 percent of ADC loans at the small failed banks were classified as nonperforming. With the onset of the financial crisis, the level of nonperforming ADC loans increased quickly to 11 percent by June 2008 and 46 percent by June 2011. As banks began to designate nonperforming loans or portions of these loans as uncollectible, the level of net charge-offs also began to rise. In December 2001, net charge-offs of ADC loans at small failed banks were less than 1 percent. By June 2008, they had risen to 2 percent and by June 2011 to 12 percent. CRE and especially ADC concentrations in small and medium-size failed banks in the 10 states were often correlated with poor risk management and risky funding sources. Our analysis showed that small failed banks in the 10 states had often pursued aggressive growth strategies using nontraditional and riskier funding sources such as brokered deposits. The IG reviews noted that in the majority of failures, management exercised poor oversight of the risks associated with high CRE and ADC concentrations and engaged in weak underwriting and credit administration practices. Further, 28 percent (84) of the failed banks had been chartered for less than 10 years at the time of failure and appeared in many cases to have deviated from their approved business plans, according to FDIC. 
Large bank failures in the 10 states were associated with some of the same factors as small bank failures—high-risk growth strategies, weak underwriting and risk controls, and excessive concentrations that increased these banks’ exposure to the real estate market downturn. The primary difference was that the large banks’ strategies generally relied on risky nontraditional residential mortgage products as opposed to commercial real estate. To further investigate factors associated with bank failures across the United States, we analyzed data on FDIC-insured commercial banks and state-chartered savings banks from 2006 to 2011. Our econometric analysis suggests that across the country, riskier lending and funding sources were associated with an increased likelihood of bank failures. Specifically, we found that banks with high concentrations of ADC loans and an increased use of brokered deposits were more likely to fail from 2008 to 2011, while banks with better asset quality and greater capital adequacy were less likely to fail. An FDIC IG study issued in October 2012 found that some banks with high ADC concentrations were able to weather the recent financial crisis without experiencing a corresponding decline in their overall financial condition. Among other things, the IG found that these banks exhibited strong management, sound credit administration and underwriting practices, and adequate capital. We found that losses related to bank assets and liabilities that were subject to fair value accounting contributed little to bank failures overall, largely because most banks’ assets and liabilities were not recorded at fair value. Based on our analysis, fair value losses related to certain types of mortgage-related investment securities contributed to some bank failures. But in general fair value-related losses contributed little to the decline in net interest income and regulatory capital that failed banks experienced overall once the financial crisis began. 
We analyzed the assets and liabilities on the balance sheets of failed banks nationwide that were subject to fair value accounting between 2007 and 2011. We found that generally more than two-thirds of the assets of all failed commercial banks (small, medium-size, and large) were classified as held-for-investment (HFI) loans, which were not subject to fair value accounting. For example, small failed commercial banks held an average of 77 percent of their assets as HFI loans in 2008. At the same time, small commercial banks that remained open held an average of 69 percent in such loans. Failed and open small thrifts, as well as medium-size and large commercial banks, had similar percentages. Investment securities classified as available for sale (AFS) represented the second-largest percentage of assets for all failed and open banks over the 5-year period we reviewed. For example, in 2008, small failed commercial banks held an average of 10 percent of their assets as AFS securities, while small open banks averaged 16 percent. Generally, AFS securities are recorded at fair value, but the changes in fair value impact earnings or regulatory capital only under certain circumstances. While several other asset and liability categories are recorded at fair value and impact regulatory capital, together these categories did not account for a significant percentage of total assets at either failed or open commercial banks or thrifts. For example, in 2008, trading assets, nontrading assets such as nontrading derivative contracts, and trading liabilities at small failed banks ranged from 0.00 to 0.03 percent of total assets. As discussed earlier, declines in regulatory capital at failed banks were driven by rising levels of credit losses related to nonperforming loans and charge-offs of these loans. 
For failed commercial banks and thrifts of all sizes nationwide, credit losses, which resulted from nonperforming HFI loans, were the largest contributors to the institutions’ overall losses when compared to any other asset class. These losses had a greater negative impact on institutions’ earnings and regulatory capital levels than those recorded at fair value. During the course of our work, several state regulators and community banking association officials told us that at some small failed banks, declining collateral values of impaired collateral-dependent loans—particularly CRE and ADC loans in those areas where real estate asset prices declined severely—drove both credit losses and charge-offs and resulted in reductions to regulatory capital. A loan is considered “collateral dependent” when the repayment of the debt will be provided solely by the sale or operation of the underlying collateral, and there are no other available and reliable sources of repayment. Data are not publicly available to analyze the extent to which declines in the collateral values of impaired collateral-dependent CRE or ADC loans drove credit losses or charge-offs at the failed banks. However, state banking associations said that the magnitude of the losses was exacerbated by federal bank examiners’ classification of collateral-dependent loans and evaluations of the appraisals banks used to support the impairment analyses of these loans. Federal banking regulators noted that regulatory guidance in 2009 directed examiners not to require banks to write down loans to an amount less than the loan balance solely because the value of the underlying collateral had declined. The regulators added that examiners were generally not expected to challenge the appraisals obtained by banks unless they found that any underlying facts or assumptions about the appraisal were inappropriate or could support alternative assumptions.
The guidance also stated that in making decisions to write down loans, bank examiners were to first focus on the adequacy of cash flows to service the debt. If the sources of cash flows did not exist and the only likely repayment source was the sale of the collateral, then examiners were to direct the bank to write down the loan balances to the fair value of the collateral, less estimated costs to sell in certain circumstances. For example, one Federal Reserve official told us that some failed banks were extending ADC loans on an interest-only basis with no evidence that the borrower would be able to repay the principal and with underlying collateral whose value had declined by a very significant amount. In those cases, examiners questioned whether the banks would ever be repaid the principal owed. Under these circumstances, absent any evidence that the borrowers could pay through other means, the examiners would require a write-down. A loan loss provision is the money a bank sets aside to cover potential credit losses on loans. The Department of the Treasury (Treasury) and the Financial Stability Forum’s Working Group on Loss Provisioning (Working Group) have observed that the current accounting model for estimating credit losses is based on historical loss rates, which were low in the years before the financial crisis. Under GAAP, the accounting model for estimating credit losses is commonly referred to as an “incurred loss model” because the timing and measurement of losses are based on estimates of losses incurred as of the balance sheet date. In a 2009 speech, the Comptroller of the Currency, who was a co-chair of the Working Group, noted that in a long period of benign economic conditions, such as the years prior to the most recent downturn, historical loan loss rates would typically be low. As a result, justifying significant loan loss provisioning to increase the loan loss allowance can be difficult under the incurred loss model. 
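To illustrate why allowances built under an incurred loss model can be small after benign years, the sketch below contrasts a simplified incurred loss calculation with a simplified forward-looking lifetime (expected loss) calculation. All figures, loss rates, and both functions are hypothetical simplifications for illustration; they do not reproduce the actual GAAP model or any specific standard-setting proposal.

```python
# Hypothetical sketch: incurred-loss vs. lifetime expected-loss provisioning.
# All figures and rates are invented for illustration; neither function
# reproduces the actual GAAP incurred loss model or any specific proposal.

def incurred_loss_provision(loan_balance, incurred_loss_rate):
    """Allowance for losses estimated to have been incurred as of the
    balance sheet date (backward-looking, based on historical loss rates)."""
    return loan_balance * incurred_loss_rate

def lifetime_expected_loss_provision(loan_balance, annual_loss_rate, remaining_years):
    """Allowance for expected losses over the remaining life of the loans
    (forward-looking), ignoring discounting for simplicity."""
    return loan_balance * annual_loss_rate * remaining_years

portfolio = 100_000_000  # hypothetical $100 million ADC loan portfolio

# In benign years, historical loss rates are low, so the incurred model
# justifies only a small allowance ...
incurred = incurred_loss_provision(portfolio, 0.002)              # 0.2% rate
# ... while a lifetime view of the same loans recognizes more loss earlier.
expected = lifetime_expected_loss_provision(portfolio, 0.015, 3)  # 1.5%/yr, 3 yrs

print(f"Incurred-loss allowance:  ${incurred:,.0f}")
print(f"Expected-loss allowance:  ${expected:,.0f}")
```

Under the incurred approach the allowance stays near 0.2 percent of the portfolio until losses are actually triggered, which is why provisions had to spike once the crisis began; the forward-looking variant front-loads the recognition.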
Treasury and the Working Group noted that earlier recognition of loan losses could have reduced the need for banks to recognize increases in their incurred credit losses through a sudden series of loan loss provisions that reduced earnings and regulatory capital. Federal banking regulators have also noted that requiring management at the failed banks to recognize loan losses earlier could have helped stem losses. Specifically, such a requirement might have provided an incentive not to concentrate so heavily in the loans that later resulted in significant losses. To address this issue, the Financial Accounting Standards Board has issued a proposal for public comment for a loan loss provisioning model that is more forward-looking and focuses on expected losses. This proposal would allow banks to establish a means of recognizing potential losses earlier on the loans they underwrite and could incentivize prudent risk management practices. Moreover, the proposal is designed to help address the cycle of losses and failures that emerged in the recent crisis as banks were forced to increase loan loss allowances and raise capital when they were least able to do so (procyclicality). We plan to continue to monitor the progress of the ongoing activities of the standard setters to address concerns with the loan loss provisioning model. FDIC is required to resolve a bank failure in a manner that results in the least cost to the Deposit Insurance Fund (DIF). FDIC’s preferred resolution method is to sell the failed bank to another, healthier, bank. During the most recent financial crisis, FDIC facilitated these sales by including a loss share agreement, under which FDIC absorbed a portion of the loss on specified assets purchased by the acquiring bank. From January 2008 through December 31, 2011, FDIC was appointed as receiver for the 414 failed banks, with $662 billion in book value of failed bank assets. 
FDIC used purchase and assumption agreements (the direct sale of a failed bank to another, healthier bank) to resolve 394 failed institutions with approximately $652 billion in assets. As such, during the period 2008 through 2011, FDIC sold 98 percent of failed bank assets using purchase and assumption agreements. However, FDIC was only able to resolve so many of these banks with purchase and assumption agreements because it offered to share in the losses incurred by the acquiring institution. FDIC officials said that at the height of the financial crisis in 2008, FDIC sought bids for whole bank purchase and assumption agreements (in which the acquiring bank assumes essentially all of the failed bank’s assets and liabilities) with little success. Potential acquiring banks we interviewed told us that they did not have sufficient capital to take on the additional risks that the failed institutions’ assets represented. Acquiring bank officials that we spoke to said that they would not have purchased the failed banks without FDIC’s shared loss agreements because of uncertainties in the market and the value of the assets. Because shared loss agreements had worked well during the savings and loan crisis of the 1980s and early 1990s, FDIC decided to offer the option of having such agreements as part of the purchase and assumption of the failed bank. Shared loss agreements provide potential buyers with some protection on the purchase of failed bank assets, reduce immediate cash needs, keep assets in the private sector, and minimize disruptions to banking customers. Under the agreements, FDIC generally agrees to pay 80 percent of covered losses, and the acquiring bank covers the remaining 20 percent. From 2008 to the end of 2011, FDIC resolved 281 of the 414 failures (68 percent) by providing a shared loss agreement as part of the purchase and assumption. The need to offer shared loss agreements diminished as the market improved.
For example, in 2012 FDIC was able to resolve more than half of all failed institutions without having to offer to share in the losses. Specifically, between January and September 30, 2012, FDIC had agreed to share losses on 18 of 43 bank failures (42 percent). Additionally, some potential bidders were willing to accept shared loss agreements with lower than 80-percent coverage. As of December 31, 2011, DIF receiverships had made shared loss payments totaling $16.2 billion. In addition, future payments under DIF receiverships are estimated at an additional $26.6 billion over the duration of the shared loss agreements, resulting in total estimated lifetime losses of $42.8 billion (see fig. 2). By comparing the estimated cost of the shared loss agreements with the estimated cost of directly liquidating the failed banks’ assets, FDIC has estimated that using shared loss agreements has saved the DIF over $40 billion. However, while the total estimated lifetime losses of the shared loss agreements may not change, the timing of the losses may, and payments from shared loss agreements may increase as the terms of the agreements mature. FDIC officials stated that the acquiring banks were being monitored for compliance with the terms and conditions of the shared loss agreements. FDIC is in the process of issuing guidance to the acquiring banks reminding them of these terms to prevent increased shared loss payments as these agreements approach maturity. The acquisitions of failed banks by healthy banks appear to have mitigated the potentially negative effects of bank failures on communities, although the focus of local lending and philanthropy may have shifted. First, bank failures and failed bank acquisitions can have an impact on market concentration—an indicator of the extent to which banks in the market can exercise market power, by, for example, raising prices or reducing the availability of some products and services. 
However, we found that a limited number of metropolitan areas and rural counties were likely to have become significantly more concentrated. We analyzed the impact of bank failures and failed bank acquisitions on local credit markets using data for the period from June 2007 to June 2012. We calculated the Herfindahl-Hirschman Index (HHI), a key statistical measure used to assess market concentration and the potential for firms to exercise their ability to influence market prices. The HHI is measured on a scale of 0 to 10,000, with values over 1,500 considered indicative of concentration. Our results suggest that a small number of the markets affected by bank failures and failed bank acquisitions were likely to have become significantly more concentrated. For example, 8 of the 188 metropolitan areas affected by bank failures and failed bank acquisitions between June 30, 2009, and June 29, 2010, met the criteria that indicate significant competitive concerns. Similarly, 5 of the 68 rural counties affected by bank failures during the same time period met the criteria. The relatively limited number of areas where concentration increased was generally the result of acquisitions by institutions that were not already established in the locales that the failed banks served. However, the effects could be potentially significant for those limited areas that had been serviced by one bank or where only a few banks remain. Second, our econometric analysis of call report data from 2006 through 2011 found that failing small banks extended progressively less net credit as they approached failure, but that acquiring banks generally increased net credit after the acquisition, albeit more slowly. Officials from acquiring and peer banks we interviewed in Georgia, Michigan, and Nevada agreed. However, credit conditions were generally tighter in the period following the financial crisis.
For example, several bank officials noted that in the wake of the bank failures, underwriting standards had tightened, making it harder for some borrowers who might have been able to obtain loans prior to the bank failures to obtain them afterward. Several bank officials we interviewed also said that new lending for certain types of loans could be restricted in certain areas. For example, they noted that the CRE market, and in particular the ADC market, had contracted and that new lending in this area had declined significantly. Officials from regulators, banking associations, and banks we spoke with also said that involvement in local philanthropy had declined as small banks approached failure but generally increased after acquisition. State banking regulators and national and state community banking associations we interviewed told us that community banks tended to be highly involved in local philanthropic activities before the recession—for example, by designating portions of their earnings for community development or other charitable activities. However, these philanthropic activities decreased as the banks approached failure and struggled to conserve capital. Acquiring bank officials we interviewed told us that they had generally increased philanthropic activities compared with the levels maintained by the failed community banks during the economic downturn and in the months before failure. However, acquiring banks may or may not focus on the same philanthropic activities as the failed banks. For example, one large acquiring bank official told us that the acquiring bank made major charitable contributions to large national or statewide philanthropic organizations and causes and focused less on the local community charities to which the failed bank had contributed. Finally, we econometrically analyzed the relationships among bank failures, income, unemployment, and real estate prices for all states and the District of Columbia (states) for 1994 through 2011.
Our analysis showed that bank failures in a state were more likely to affect its real estate sector than its labor market or broader economy. In particular, this analysis did not suggest that bank failures in a state—as measured by failed banks’ share of deposits—were associated with a decline in personal income in that state. To the extent that there is a relationship between the unemployment rate and bank failures, the unemployment rate appears to have more bearing on failed banks’ share of deposits than vice versa. In contrast, our analysis found that failed banks’ share of deposits and the house price index in a state appear to be significantly related to each other. Altogether, these results suggest that the impact of bank failures on a state’s economy is most likely to appear in the real estate sector and less likely to appear in the overall labor market or in the broader economy. However, we note that these results could be different at the city or county level. Chairman Johnson, Ranking Member Crapo, and Members of the Committee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Lawrance Evans, Jr. at (202) 512-4802 or evansl@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made key contributions to this testimony include Karen Tremba, Assistant Director; William Cordrey, Assistant Director; Gary Chupka, Assistant Director; William Chatlos; Emily Chalmers; Robert Dacey; Rachel DeMarcus; M’Baye Diagne; Courtney LaFountain; Marc Molino; Patricia Moye; Lauren Nunnally; Angela Pun; Stefanie Jonkman; Akiko Ohnuma; Michael Osman; and Jay Thomas. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Between January 2008 and December 2011--a period of economic downturn in the United States--414 insured U.S. banks failed. Of these, 85 percent (353) were small institutions with less than $1 billion in assets. Small banks often specialize in small business lending and are associated with local community development and philanthropy. The failures of these banks have raised questions about contributing factors. Further, the failures have raised concerns about the accounting and regulatory requirements needed to maintain reserves large enough to absorb expected loan losses (loan loss allowances)--for example, when borrowers are unable to repay a loan (credit losses). This statement is based on findings from GAO's 2013 report on recent bank failures (GAO-13-71) required by Pub. L. No. 112-88. This testimony discusses (1) the factors that contributed to the bank failures in states with the most failed institutions between 2008 and 2011; (2) the use of shared loss agreements in resolving troubled banks; and (3) the effect of recent bank failures on local communities. To do this work, GAO relied on its issued report GAO-13-71 and updated data as appropriate. Ten states concentrated in the western, midwestern, and southeastern United States--areas where the housing market had experienced strong growth in the prior decade--each experienced 10 or more commercial bank or thrift (bank) failures between 2008 and 2011. The failures of small banks (those with less than $1 billion in assets) in these states were largely driven by credit losses on commercial real estate (CRE) loans, particularly loans secured by real estate to finance land development and construction. Many of the failed banks had pursued aggressive growth strategies using nontraditional, riskier funding sources and exhibited weak underwriting and credit administration practices.
The Department of the Treasury and the Financial Stability Forum's Working Group on Loss Provisioning observed that earlier recognition of credit losses could have potentially lessened the impact of the crisis. The accounting model used for estimating credit losses is based on historical loss rates, which were low in the pre-financial crisis years. In part due to these accounting rules, loan loss allowances were not adequate to absorb the wave of credit losses that occurred once the financial crisis began. Banks had to recognize these losses through a sudden series of increases (provisions) to the loan loss allowance that reduced earnings and regulatory capital. In December 2012, the Financial Accounting Standards Board issued a proposal for public comment for a loan loss provisioning model that is more forward looking and would incorporate a broader range of credit information. This would result in banks recognizing loan losses earlier on the loans they underwrite and could incentivize prudent risk management practices. It should also help address the cycle of losses and failures that emerged in the recent crisis as banks were forced to increase loan loss allowances and raise capital when they were least able to do so. The Federal Deposit Insurance Corporation (FDIC) used shared loss agreements to help resolve 281 of the 414 bank failures during the recent financial crisis to minimize the impact on the Deposit Insurance Fund (DIF). Under a shared loss agreement, FDIC absorbs a portion of the loss on specified assets of a failed bank that are purchased by an acquiring bank. FDIC officials, state bank regulators, community banking associations, and acquiring banks of failed institutions GAO interviewed said that shared loss agreements helped to attract potential bidders for failed banks during the financial crisis.
FDIC compared the estimated cost of the shared loss agreements to the estimated cost of directly liquidating the failed banks' assets and estimated that the use of shared loss agreements saved the DIF over $40 billion. GAO analysis of metropolitan and rural areas where bank failures occurred and econometric analysis of bank income and condition data suggested that the acquisitions of failed banks by healthy banks mitigated the potentially negative effects of failures on communities. However, the focus of local lending and philanthropy may have shifted. Also, bank officials whom GAO interviewed noted that in the wake of the bank failures, underwriting standards had tightened. As a result, credit was generally most available for small business owners with good credit histories and strong financials. Further, the effects of bank failures could potentially be significant for communities that had been serviced by only one bank or where only a few banks remain.
Port security overall has improved because of the development of organizations and programs such as AMSCs, Area Maritime Security Plans (AMSPs), maritime security exercises, and the International Port Security Program, but challenges to successful implementation of these efforts remain. Additionally, agencies may face challenges addressing the additional requirements directed by the SAFE Port Act, such as a provision that DHS establish interagency operational centers at all high-priority ports. AMSCs and the Coast Guard’s sector command centers have improved information sharing, but the types and ways information is shared vary. AMSPs, limited to security incidents, could benefit from unified planning to include an all-hazards approach. Maritime security exercises would benefit from timely and complete after-action reports, increased collaboration across federal agencies, and broader port-level coordination. The Coast Guard’s International Port Security Program is currently evaluating the antiterrorism measures maintained at foreign seaports. Two main types of forums have developed for agencies to coordinate and share information about port security: area committees and Coast Guard sector command centers. AMSCs serve as a forum for port stakeholders, facilitating the dissemination of information through regularly scheduled meetings, the issuance of electronic bulletins, and the sharing of key documents. MTSA provided the Coast Guard with the authority to create AMSCs—composed of federal, state, local, and industry members—that help to develop the AMSP for the port. As of August 2007, the Coast Guard had organized 46 AMSCs. Each has flexibility to assemble and operate in a way that reflects the needs of its port area, resulting in variations in the number of participants, the types of state and local organizations involved, and the way in which information is shared.
Some examples of information shared include assessments of vulnerabilities at specific port locations, information about potential threats or suspicious activities, and Coast Guard strategies intended for use in protecting key infrastructure. As part of an ongoing effort to improve its awareness of the maritime domain, the Coast Guard developed 35 sector command centers, four of which operate in partnership with the U.S. Navy. We have previously reported that both of these types of forums have helped foster cooperation and information sharing. We further reported that AMSCs provided a structure to improve the timeliness, completeness, and usefulness of information sharing between federal and nonfederal stakeholders. These committees improved upon previous information-sharing efforts because they established a formal structure and new procedures for sharing information. In contrast to AMSCs, the Coast Guard’s sector command centers can provide continuous information about maritime activities and involve various agencies directly in operational decisions using this information. We have reported that these centers have improved information sharing, and the types of information and the way information is shared vary at these centers depending on their purpose and mission, leadership and organization, membership, technology, and resources. The SAFE Port Act called for establishment of interagency operational centers, directing the Secretary of DHS to establish such centers at all high-priority ports no later than 3 years after the act’s enactment. The act required that the centers include a wide range of agencies and stakeholders and carry out specified maritime security functions.
In addition to authorizing the appropriation of funds and requiring DHS to provide Congress a proposed budget and cost-sharing analysis for establishing the centers, the act directed the new interagency operational centers to utilize the same compositional and operational characteristics of existing sector command centers. According to the Coast Guard, none of the 35 centers meets the requirements set forth in the SAFE Port Act. Nevertheless, the four centers the Coast Guard operates in partnership with the Navy are a significant step in meeting these requirements, according to a senior Coast Guard official. The Coast Guard is currently piloting various aspects of future interagency operational centers at existing centers and is also working with multiple interagency partners to further develop this project. DHS has submitted the required budget and cost-sharing analysis proposal, which outlines a 5-year plan for upgrading its centers into future interagency operations centers to continue to foster information sharing and coordination in the maritime domain. The Coast Guard estimates the total acquisition cost of upgrading 24 sectors that encompass the nation’s high-priority ports into interagency operations centers will be approximately $260 million, to include investments in information system, sensor network, and facilities upgrades and expansions. According to the Coast Guard, future interagency operations centers will allow the Coast Guard and its partners to use port surveillance with joined tactical and intelligence information, and share these data with port partners working side by side in expanded facilities. In our April 2007 testimony, we reported on various challenges the Coast Guard faces in its information-sharing efforts. These challenges include obtaining security clearances for port security stakeholders and creating effective working relationships with clearly defined roles and responsibilities. 
In our past work, we found that the lack of federal security clearances among area committee members had been routinely cited as a barrier to information sharing. In turn, this inability to share classified information may limit the ability to deter, prevent, and respond to a potential terrorist attack. The Coast Guard, having lead responsibility in coordinating maritime information, has made improvements to its program for granting clearances to area committee members and additional clearances have been granted to members with a need to know. In addition, the SAFE Port Act includes a specific provision requiring DHS to sponsor and expedite security clearances for participants in interagency operational centers. However, the extent to which these efforts will ultimately improve information sharing is not yet known. As the Coast Guard expands its relationships with multiple interagency partners, collaborating and sharing information effectively under new structures and procedures will be important. While some of the existing centers achieved results with existing interagency relationships, other high-priority ports might face challenges establishing new working relationships among port stakeholders and implementing their own interagency operational centers. Finally, addressing potential overlapping responsibilities—such as leadership roles for the Coast Guard and its interagency partners—will be important to ensure that actions across the various agencies are clear and coordinated. As part of its operations, the Coast Guard has also undertaken additional activities to provide overall port security. The Coast Guard’s operations order, Operation Neptune Shield, first released in 2003, specifies the level of security activities to be conducted. The order sets specific activities for each port. However, the amount of each activity is established based on the port’s specific security concerns.
Some examples of security activities include conducting waterborne security patrols, boarding high-interest vessels, escorting vessels into ports, and enforcing fixed security zones. When a port security level increases, the amount of activity the Coast Guard must conduct also increases. The Coast Guard uses monthly field unit reports to indicate how many of its security activities it is able to perform. Our review of these field unit reports indicates that many ports are having difficulty meeting their port security responsibilities, with resource constraints being a major factor. In an effort to meet more of its security requirements, the Coast Guard uses a strategy that includes partnering with other government agencies, adjusting its activity requirements, and acquiring resources. Despite these efforts, many ports are still having difficulty meeting their port security requirements. The Coast Guard is currently studying what resources are needed to meet certain aspects of its port security program, but to enhance the effectiveness of its port security operations, a more comprehensive study to determine all additional resources and changes to strategy to meet minimum security requirements may be needed. Implementing regulations for MTSA specified that AMSPs include, among other things, operational and physical security measures in place at the port under different security levels, details of the security incident command and response structure, procedures for responding to security threats including provisions for maintaining operations in the port, and procedures to facilitate the recovery of the marine transportation system after a security incident. A Coast Guard Navigation and Vessel Inspection Circular (NVIC) provided a common template for AMSPs and specified the responsibilities of port stakeholders under them. As of September 2007, 46 AMSPs are in place at ports around the country. 
The Coast Guard approved the plans by June 1, 2004, and MTSA requires that they be updated at least every 5 years. The SAFE Port Act added a requirement to AMSPs that specified that they include recovery issues by identifying salvage equipment able to restore operational trade capacity. This requirement was established to ensure that the waterways are cleared and the flow of commerce through United States ports is reestablished as efficiently and quickly as possible after a security incident. While the Coast Guard sets out the general priorities for recovery operations in its guidelines for the development of AMSPs, we have found that this guidance offers limited instruction and assistance for developing procedures to address recovery situations. The Maritime Infrastructure Recovery Plan (MIRP) recognizes the limited nature of the Coast Guard’s guidance and notes the need to further develop recovery aspects of the AMSPs. The MIRP provides specific recommendations for developing the recovery sections of the AMSPs. The AMSPs that we reviewed often lacked recovery specifics, and none had been updated to reflect the recommendations made in the MIRP. The Coast Guard is currently updating the guidance for the AMSPs and aims to complete the updates by the end of calendar year 2007 so that the guidance will be ready for the mandatory 5-year re-approval of the AMSPs in 2009. Coast Guard officials commented that any changes to the recovery section would need to be consistent with the national protocols developed for the SAFE Port Act. Additionally, related to recovery planning, the Coast Guard and CBP have developed specific interagency actions focused on response and recovery. This should provide the Coast Guard and CBP with immediate security options for the recovery of ports and commerce. Further, AMSPs generally do not address natural disasters (i.e., they do not have an all-hazards approach). 
In a March 2007 report examining how ports are dealing with planning for natural disasters such as hurricanes and earthquakes, we noted that AMSPs cover security issues but not other issues that could have a major impact on a port’s ability to support maritime commerce. As currently written, AMSPs are concerned with deterring and, to a lesser extent, responding to security incidents. We found, however, that unified consideration of all risks—natural and man-made—faced by a port may be beneficial. Because of the similarities between the consequences of terrorist attacks and natural or accidental disasters, much of the planning for protection, response, and recovery capabilities is similar across all emergency events. Combining terrorism and other threats can thus enhance the efficiency of port planning efforts. This approach also allows port stakeholders to estimate the relative value of different mitigation alternatives. The exclusion of certain risks from consideration, or the separate consideration of a particular type of risk, raises the possibility that risks will not be accurately assessed or compared, and that too many or too few resources will be allocated toward mitigation of a particular risk. As ports continue to revise and improve their planning efforts, available evidence indicates that by taking a systemwide approach and thinking strategically about using resources to mitigate and recover from all forms of disaster, ports will be able to achieve the most effective results. AMSPs provide a useful foundation for establishing an all-hazards approach. While the SAFE Port Act does not call for expanding AMSPs in this manner, it does contain a requirement that natural disasters and other emergencies be included in the scenarios to be tested in the Port Security Exercise Program. On the basis of our prior work, we found there are challenges in using AMSCs and AMSPs as the basis for broader all-hazards planning.
These challenges include determining the extent to which security plans can serve all-hazards purposes. We recommended that DHS encourage port stakeholders to use the AMSCs and MTSA-required AMSPs to discuss all-hazards planning. DHS concurred with this recommendation. The Coast Guard Captain of the Port and the AMSC are required by MTSA regulations to conduct or participate in exercises to test the effectiveness of AMSPs annually, with no more than 18 months between exercises. These exercises—which have been conducted for the past several years—are designed to continuously improve preparedness by validating information and procedures in the area plan, identifying weaknesses and strengths, and practicing command and control within an incident command/unified command framework. In August 2005, the Coast Guard and TSA initiated the Port Security Training Exercise Program (PortSTEP), an exercise program designed to involve the entire port community, including public governmental agencies and private industry, and intended to improve connectivity of various surface transportation modes and enhance AMSPs. Between August 2005 and October 2007, the Coast Guard expected to conduct PortSTEP exercises for 40 area committees and other port stakeholders. Additionally, the Coast Guard initiated its own Area Maritime Security Training and Exercise Program (AMStep) in October 2005. This program was also designed to involve the entire port community in the implementation of the AMSPs. Between the two programs, PortSTEP and AMStep, all AMSCs have received a port security exercise each year since inception. The SAFE Port Act included several new requirements related to security exercises, such as establishing a Port Security Exercise Program to test and evaluate the capabilities of governments and port stakeholders to prevent, prepare for, mitigate against, respond to, and recover from acts of terrorism, natural disasters, and other emergencies at facilities that MTSA regulates.
The act also required the establishment of a port security exercise improvement plan process that would identify, disseminate, and monitor the implementation of lessons learned and best practices from port security exercises. Though we have not specifically examined compliance with these new requirements, our work examining past exercises suggests that implementing a successful exercise program faces several challenges. These challenges include setting the scope of the program, that is, determining how the exercise requirements in the SAFE Port Act differ from the area committee exercises currently performed. This is especially true for incorporating recovery scenarios into exercises. In this past work, we also found that Coast Guard terrorism exercises frequently focused on prevention and awareness but often did not include recovery activities. According to the Coast Guard, with the recent emphasis on planning for recovery operations, it has held several exercises over the past year that have either included or consisted solely of recovery activities. It will be important that future exercises also focus on recovery operations so that public and private stakeholders can close gaps that might hinder commerce after a port incident. Other long-standing challenges include completing after-action reports in a timely and thorough manner and ensuring that all relevant agencies participate. According to the Coast Guard, as the primary sponsor of these programs, it faces a continuing challenge in securing comprehensive participation in these exercises. The security of domestic ports also depends upon security at the foreign ports where cargoes bound for the United States originate. To help secure the overseas supply chain, MTSA required the Coast Guard to develop a program to assess security measures in foreign ports and, among other things, recommend steps necessary to improve security measures in those ports.
The Coast Guard established this program, called the International Port Security Program, in April 2004. Under this program, the Coast Guard and host nations review the implementation of security measures in the host nations’ ports against established security standards, such as the International Maritime Organization’s International Ship and Port Facility Security (ISPS) Code. Coast Guard teams have been established to conduct country visits, discuss security measures implemented, and collect and share best practices to help ensure a comprehensive and consistent approach to maritime security in ports worldwide. The conditions of these visits, such as timing and locations, are negotiated between the Coast Guard and the host nation. Coast Guard officials also make annual visits to the countries to obtain additional observations on the implementation of security measures and ensure deficiencies found during the country visits are addressed. Both the SAFE Port Act and other congressional directions have called for the Coast Guard to increase the pace of its visits to foreign countries. Although MTSA did not set a time frame for completion of these visits, the Coast Guard initially set a goal to visit the approximately 140 countries that conduct maritime trade with the United States by December 2008. In September 2006, the conference report accompanying the fiscal year 2007 DHS Appropriations Act directed the Coast Guard to “double the amount” of its visits. Subsequently, in October 2006, the SAFE Port Act required the Coast Guard to reassess security measures at the foreign ports every 3 years. Coast Guard officials said they will comply with the more stringent requirements and will reassess countries on a 2-year cycle. With the expedited pace, the Coast Guard now expects to assess all countries by March 2008, after which reassessments will begin. 
We are currently conducting a review of the Coast Guard's International Port Security Program that evaluates the Coast Guard's implementation of international enforcement programs. The report, expected to be issued in early 2008, will cover issues such as the extent to which the program is using a risk-based approach in carrying out its work, the challenges the program faces as it moves forward, and the extent to which the observations collected during the country visits are used by other programs, such as the Coast Guard's port state control inspections and high-interest vessel boarding programs. As of September 2007, the Coast Guard reported that it had visited 109 countries under this program and plans to visit 29 more by March 2008. For the countries for which the Coast Guard has issued a final report, the Coast Guard reported that most had "substantially implemented the security code," while a few countries were found to have not yet implemented the ISPS Code and will be subject to a reassessment or other sanctions. The Coast Guard also found several facilities needing improvements in areas such as access controls, communication devices, fencing, and lighting. While our review is still preliminary, Coast Guard officials told us that to plan and prepare for the next cycle of reassessments, which are to begin next year, they are considering modifying their current visit methodology to incorporate a risk-based approach that would prioritize the order and intensity of the next round of country visits. To do this, they have consulted with a contractor to develop an updated country risk prioritization model. Under the previous model, the priority assigned to a country for a visit was weighted heavily toward the volume of U.S. trade with that country. The new model being considered would incorporate other factors, such as corruption and terrorist activity levels within the countries.
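To illustrate how a weighted country risk prioritization model of this kind might combine factors, the sketch below scores each country as a weighted sum of normalized risk factors. The factor names, weights, and values are hypothetical assumptions for illustration only, not the Coast Guard's actual model.

```python
# Hypothetical sketch of a weighted country risk prioritization model.
# Factor names, weights, and values are illustrative assumptions.

def priority_score(country, weights):
    """Combine normalized risk factors (0.0-1.0) into a single score."""
    return sum(weights[factor] * country[factor] for factor in weights)

# Previous model: weighted heavily toward the volume of U.S. trade.
old_weights = {"trade_volume": 1.0}

# Model under consideration: adds factors such as corruption and
# terrorist activity levels within a country.
new_weights = {"trade_volume": 0.5, "corruption": 0.25, "terrorism": 0.25}

countries = {
    "Country A": {"trade_volume": 0.9, "corruption": 0.2, "terrorism": 0.1},
    "Country B": {"trade_volume": 0.4, "corruption": 0.8, "terrorism": 0.7},
}

# Rank countries for visits under the new model.
ranked = sorted(countries,
                key=lambda name: priority_score(countries[name], new_weights),
                reverse=True)
print(ranked)
```

In this example, the country with the higher trade volume would be visited first under the old single-factor weighting, while the country with higher corruption and terrorist activity levels rises to the top once the additional factors are weighed in.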
Program officials told us that the details of this revised approach have yet to be finalized. Coast Guard officials told us that as they complete the first round of visits and move into the next phase of revisits, challenges still exist in implementing the program. One challenge identified was that the faster rate at which foreign ports will now be reassessed will require hiring and training new staff, a challenge the officials expect will be made more difficult because experienced personnel who have been with the program since its inception are being transferred to other positions as part of the Coast Guard's rotational policy and will need to be replaced with newly assigned personnel. Reluctance by some countries to allow the Coast Guard to visit their ports because of concerns over sovereignty was another challenge program officials cited in completing the first round of visits. According to these officials, before permitting Coast Guard officials to visit their ports, some countries insisted on visiting and assessing a sample of U.S. ports themselves. The Coast Guard was able to accommodate this request through the program's reciprocal visit feature, in which the Coast Guard hosts foreign delegations to visit U.S. ports and observe ISPS Code implementation in the United States. This subsequently helped gain the cooperation of those countries in hosting a Coast Guard visit to their own ports. However, as the Coast Guard begins to revisit countries in the program's next phase, program officials stated that sovereignty concerns may still be an issue. Some countries may be reluctant to host a comprehensive country visit on a recurring basis because they believe the frequency—once every 2 to 3 years—is too high. Sovereignty also affects the conditions of the visits, such as timing and locations, because such visits are negotiated between the Coast Guard and the host nation.
Thus the Coast Guard team making the visit could be precluded from seeing locations that are not in compliance. Another challenge program officials cite is their limited ability to help countries build on or enhance their capacity to implement ISPS Code requirements. For example, the SAFE Port Act required that GAO report on various aspects of port security in the Caribbean Basin. We earlier reported that although the Coast Guard found that most of the countries had substantially implemented the ISPS Code, some facilities needed to make improvements or take additional measures. In addition, our discussions with facility operators and government officials in the region indicated that assistance—such as additional training—would help enhance their port security. Program officials stated that while their visits provide opportunities to identify potential areas to improve or to help sustain the security measures put in place, the program does not currently have the resources to directly assist countries with more in-depth training or technical assistance beyond sharing best practices or providing presentations on security practices. To overcome this, program officials have worked with other agencies (e.g., the Departments of Defense and State) and international organizations (e.g., the Organization of American States) to secure funding for training and assistance to countries where port security conferences have been held (e.g., the Dominican Republic and the Bahamas). Program officials indicated that as part of reexamining the approach for the program's next phase, they will also consider ways to improve the program's ability to provide training and capacity building to countries when a need is identified. To improve security at individual port facilities, many long-standing programs are under way. However, new challenges to their successful implementation have emerged.
The Coast Guard is required to conduct assessments of security plans and facility compliance inspections, but it faces staffing and training challenges in meeting the SAFE Port Act's additional requirements, such as ensuring the sufficiency of trained personnel and guidance to conduct facility inspections. TSA's TWIC program has addressed some of its initial program challenges but will continue to face additional challenges as the program rollout continues. Many steps have been taken to ensure that transportation workers are properly screened, but redundancies in various background checks have decreased efficiency and highlighted the need for increased coordination. MTSA and its implementing regulations required owners and operators of certain maritime facilities (e.g., power stations, chemical manufacturing facilities, and refineries that are located on waterways and receive foreign vessels) to conduct assessments of their security vulnerabilities, develop security plans to mitigate these vulnerabilities, and implement measures called for in the security plans by July 1, 2004. Under the Coast Guard regulations, these plans are to include items such as measures for access control, responses to security threats, and drills and exercises to train staff and test the plan. The plans are "performance-based," meaning that the Coast Guard has specified the outcomes it is seeking to achieve and has given facilities responsibility for identifying and delivering the measures needed to achieve those outcomes. Under MTSA, Coast Guard guidance calls for the Coast Guard to conduct one on-site facility inspection annually to verify continued compliance with the plan. The SAFE Port Act, enacted in 2006, requires the Coast Guard to conduct at least two inspections of each facility annually, one of which is to be unannounced. We currently have ongoing work reviewing the Coast Guard's oversight strategy under MTSA and SAFE Port Act requirements.
The report, expected later this year, will cover, among other things, the extent to which the Coast Guard has met its inspection requirements and found facilities to be in compliance with their security plans, the sufficiency of trained inspectors and guidance to conduct facility inspections, and aspects of the Coast Guard's overall management of its MTSA facility oversight program, particularly its documentation of compliance activities. Our work is preliminary. However, according to our analysis of Coast Guard records and statements from officials, the Coast Guard appears to have conducted facility compliance exams annually at most—but not all—facilities. Redirection of staff to a higher-priority mission, such as Hurricane Katrina emergency operations, may account for some facilities not having received an annual exam. The Coast Guard also conducted a number of unannounced inspections—about 4,500 in 2006, concentrated in around 1,200 facilities—prior to the SAFE Port Act's passage. According to officials we spoke with, the Coast Guard selected facilities for unannounced inspection based on perceived risk and inspection convenience (e.g., if inspectors were already at the facility for another purpose). The Coast Guard has identified facility plan compliance deficiencies in about one-third of the facilities inspected each year, and the deficiencies identified are concentrated in a small number of categories (e.g., failure to follow the approved plan for ensuring facility access control, record keeping, or meeting facility security officer requirements). We are still in the process of reviewing the data the Coast Guard uses to document compliance activities and will have additional information in our forthcoming report.
Sectors we visited generally reported having adequate guidance and staff for conducting consistent compliance exams but, until recently, little guidance on conducting unannounced inspections, which are often incorporated into work performed during other mission tasks. In the absence of such guidance, the process for conducting unannounced inspections varied considerably across the sectors we visited. For example, inspectors in one sector found the use of a telescope effective in remotely observing facility control measures (such as security guard activities), though these inspectors primarily conduct unannounced inspections as part of vehicle patrols. Inspectors in another sector conduct unannounced inspections at night, going up to the security gate and querying personnel about their security knowledge (e.g., knowledge of high-security-level procedures). As we completed our fieldwork, the Coast Guard issued a Commandant message with guidance on conducting unannounced inspections. This message may provide more consistency, but how the guidance will be applied and its impact on resource needs remain uncertain. Coast Guard officials said they plan to revise their primary circular on facility oversight by February 2008. They are also planning to revise MTSA regulations to conform to SAFE Port Act requirements in 2009 (in time for the reapproval of facility security plans) but are behind schedule. We recommended in June 2004 that the Coast Guard evaluate its compliance inspection efforts taken during the initial 6-month period after July 1, 2004, and use the results to strengthen its long-term strategy for ensuring compliance. The Coast Guard agreed with this recommendation. Nevertheless, based on our ongoing work, it appears that the Coast Guard has not conducted a comprehensive evaluation of its oversight program to identify strengths or target areas for improvement after 3 years of program implementation.
Our prior work across a wide range of public and private sector organizations shows that high-performing organizations continuously assess their performance with information about results based on their activities. For decision makers to assess program strategies, guidance, and resources, they need accurate and complete data reflecting program activities. We are currently reviewing the accuracy and completeness of Coast Guard compliance data and will report on this issue later this year. To control access to secure areas of port facilities and vessels, the Secretary of DHS was required by MTSA to, among other things, issue a transportation worker identification card that uses biometrics, such as fingerprints. When MTSA was enacted, TSA had already initiated a program to create an identification credential that could be used by workers in all modes of transportation. This program, called the TWIC program, is designed to collect personal and biometric information to validate workers' identities; conduct background checks on transportation workers to ensure they do not pose a threat to security; issue tamper-resistant biometric credentials that cannot be counterfeited; verify these credentials using biometric access control systems before a worker is granted unescorted access to a secure area; and revoke credentials if disqualifying information is discovered or if a card is lost, damaged, or stolen. TSA, in partnership with the Coast Guard, is focusing initial implementation on maritime facilities. We have previously reported on the status of this program and the challenges that it faces. Most recently, we reported that TSA has made progress in implementing the TWIC program and addressing problems we previously identified regarding contract planning and oversight and coordination with stakeholders.
For example, TSA reported that it added staff with program and contract management expertise to help oversee the contract and developed plans for conducting public outreach and education efforts. The SAFE Port Act required TSA to implement TWIC at the 10 highest-risk ports by July 1, 2007; conduct a pilot program to test TWIC access control technologies in the maritime environment; issue regulations requiring TWIC card readers based on the findings of the pilot; and periodically report to Congress on the status of the program. According to TSA officials, the July 1 deadline was not met because the agency and the TWIC enrollment contractor needed to conduct additional tests of the software and equipment that will be used to enroll and issue cards to workers to ensure that they work effectively before implementation. TSA officials stated that such testing was needed to ensure that these systems will be able to handle the capacity of enrolling as many as 5,000 workers per day, conducting background checks on these workers in a timely manner, and efficiently producing a TWIC card for each worker. In October 2007, TSA announced that this testing was complete, and the agency reported that it began enrolling and issuing TWIC cards to workers at the Port of Wilmington, Delaware, on October 16, 2007, and that implementation would extend to 11 additional ports by November 2007. TSA has also begun planning a pilot to test TWIC access control technologies, such as biometric card readers, in the maritime environment, as required by the SAFE Port Act. According to TSA, the agency is partnering with the port authorities of Los Angeles, Long Beach, Brownsville, and New York and New Jersey, in addition to Watermark Cruises in Annapolis, Maryland, to test TWIC access control technologies in the maritime environment and is still seeking additional participants.
TSA's objective is to include pilot test participants that are representative of a variety of facilities and vessels in different geographic locations and environmental conditions. According to TSA, the results of the pilot program will help the agency issue future regulations that will require the installation of access control systems necessary to read the TWIC cards. We will also be testifying before the full Committee on Homeland Security on several key challenges that can affect the successful implementation of the TWIC program. Since the 9/11 attacks, the federal government has taken steps to ensure that transportation workers, many of whom transport hazardous materials or have access to secure areas in locations such as port facilities, are properly screened to ensure they do not pose a security risk. Concerns have been raised, however, that transportation workers may face a variety of background checks, each with different standards. In July 2004, the 9/11 Commission reported that having too many different biometric standards, travel facilitation systems, credentialing systems, and screening requirements hampers the development of information crucial for stopping terrorists from entering the country, is expensive, and is inefficient. The commission recommended that a coordinating body raise standards, facilitate information sharing, and survey systems for potential problems. In August 2004, Homeland Security Presidential Directive 11 announced a new U.S. policy to “implement a coordinated and comprehensive approach to terrorist-related screening—in immigration, law enforcement, intelligence, counterintelligence, and protection of the border, transportation systems, and critical infrastructure—that supports homeland security, at home and abroad.” DHS components have begun a number of their own background check initiatives.
For example, in January 2007, TSA determined that the background checks required for three other DHS programs satisfied the background check requirement for the TWIC program. That is, an applicant who has already undergone a background check in association with any of these three programs does not have to undergo an additional background check and pays a reduced fee to obtain a TWIC card. Similarly, the Coast Guard plans to consolidate four credentials and require that all pertinent information previously submitted by an applicant at a Coast Guard Regional Examination Center be forwarded by the center to TSA through the TWIC enrollment process. In April 2007, we completed a study of DHS background check programs, as required by the SAFE Port Act. We found that the six programs we reviewed were conducted independently of one another, collected similar information, and used similar background check processes. Further, each program operated separate enrollment facilities to collect background information and did not share it with the other programs. We also found that DHS did not track the number of workers who, needing multiple credentials, were subjected to multiple background check programs. Because DHS is responsible for a large number of background check programs, we recommended that DHS ensure that its coordination plan includes implementation steps, time frames, and budget requirements; discusses the potential costs and benefits of program standardization; and explores options for coordinating and aligning background checks within DHS and with other federal agencies. DHS concurred with our recommendations and continues to take steps—both at the department level and within its various agencies—to consolidate, coordinate, and harmonize such background check programs. At the department level, DHS created the Screening Coordination Office (SCO) in July 2006 to coordinate DHS background check programs. SCO is in the early stages of developing its plans for this coordination.
In December 2006, SCO issued a report identifying common problems, challenges, and needed improvements in the credentialing programs and processes across the department. The office awarded a contract in April 2007 that will provide the methodology and support for developing an implementation plan, including common design and comparability standards and related milestones, to coordinate DHS screening and credentialing programs. Under this contract, the contractor is to produce three deliverables to align DHS's screening and credentialing activities, set a method and time frame for applying a common set of design and comparability standards, and eliminate redundancy through harmonization. These three deliverables are as follows: Credentialing framework: A framework, completed in July 2007, that describes a credentialing life cycle of registration and enrollment, eligibility vetting and risk assessment, issuance, expiration and revocation, and redress. This framework was to incorporate risk-based levels or criteria and an assessment of the legal, privacy, policy, operational, and technical challenges. Technical review: An assessment, scheduled for completion in October 2007, to be conducted by the contractor in conjunction with the DHS Office of the Chief Information Officer. It is to include a review of the issues present in the current technical environment and the proposed future technical environment needed to address those issues, and to provide recommendations for targeted investment reuse and key target technologies. Transition plan: A plan, scheduled to be completed in November 2007, that is to outline the projects needed to implement the framework, including identification of major activities, milestones, and the associated timeline and costs. Stakeholders in this effort include multiple components of DHS and the Departments of State and Justice.
In addition, the DHS Office of the Chief Information Officer (CIO) and the director of SCO issued a memo in May 2007 to promote standardization across screening and credentialing programs. In this memo, DHS indicated that (1) programs requiring the collection and use of fingerprints to vet individuals will use the Automated Biometric Identification System (IDENT); (2) these programs are to reuse existing or currently planned and funded infrastructure for the intake of identity information to the greatest extent possible; (3) its CIO is to establish a procurement plan to ensure that the department can handle a large volume of automated vetting from programs currently in the planning phase; and (4) to support the sharing of databases and potential consolidation of duplicative applications, the Enterprise Data Management Office is developing an inventory of the biographic data assets that DHS maintains to support identity management and screening processes. While continuing to consolidate, coordinate, and harmonize background check programs, DHS will likely face additional challenges, such as ensuring that its plans are sufficiently complete without being overly restrictive and addressing the lack of information regarding the potential costs and benefits associated with the number of redundant background checks. SCO will be challenged to coordinate DHS's background check programs in such a way that any common set of standards developed to eliminate redundant checks meets the varied needs of all the programs without being so strict that it unduly limits the applicant pool or so intrusive that potential applicants are unwilling to take part. Without knowing the potential costs and benefits associated with the number of redundant background checks that harmonization would eliminate, DHS lacks the performance information that would allow its program managers to compare their program results with goals.
Thus, DHS cannot be certain where to target program resources to improve performance. As we recommended, DHS could benefit from a plan that includes, at a minimum, a discussion of the potential costs and benefits associated with the number of redundant background checks that would be eliminated through harmonization. Through the development of strategic plans, human capital strategies, and performance measures, several container security programs have been established and matured. However, these programs continue to face technical and management challenges in implementation. As part of its layered security strategy, CBP developed the Automated Targeting System (ATS) as a decision support tool to assess the risks posed by individual cargo containers. ATS is a complex mathematical model that uses weighted rules to assign a risk score to each arriving shipment based on shipping information (e.g., manifests, bills of lading, and entry data). Although the program has faced quality assurance challenges since its inception, CBP has made significant progress in addressing them. CBP's in-bond program does not collect detailed information at the U.S. port of arrival that could aid in identifying cargo posing a security risk and promote the effective use of inspection resources. In the past, CSI has lacked sufficient staff to meet program requirements. C-TPAT has faced challenges with validation quality and management in the past, in part due to its rapid growth. The Department of Energy's Megaports Initiative faces ongoing operational and technical challenges in the installation and maintenance of radiation detection equipment at ports. In addition, implementing the Secure Freight Initiative and the 9/11 Commission Act of 2007 presents additional challenges for the scanning of cargo containers bound for the United States. CBP is responsible for preventing terrorists and WMD from entering the United States.
As part of this responsibility, CBP addresses the potential threat posed by the movement of oceangoing cargo containers. To perform this mission, CBP officers at seaports use their own knowledge and CBP automated systems to help determine which containers entering the country will undergo inspections, and then perform the necessary level of inspection of each container based upon risk. To assist in determining which containers are to be subjected to inspection, CBP uses a layered security strategy that attempts to focus resources on potentially risky cargo shipped in containers while allowing other oceangoing containers to proceed without disrupting commerce. ATS is one key element of this strategy. CBP uses ATS as a decision support tool to review documentation, including electronic manifest information submitted by ocean carriers on all arriving shipments and entry data submitted by brokers, to develop risk scores that help identify containers for additional inspection. CBP requires carriers to submit manifest information 24 hours before a United States-bound sea container is loaded onto a vessel in a foreign port. CBP officers use these scores to help them make decisions on the extent of documentary review or additional inspection required. We have conducted several reviews of ATS and made recommendations for its improvement. Consistent with these recommendations, CBP has implemented a number of important internal controls for the administration and implementation of ATS. For example, CBP (1) has established performance metrics for ATS, (2) is manually comparing the results of randomly conducted inspections with the results of inspections resulting from ATS analysis of the shipment data, and (3) has developed and implemented a testing and simulation environment to conduct computer-generated tests of ATS. Since our last report on ATS, the SAFE Port Act required the CBP Commissioner to take additional actions to improve ATS.
These requirements included steps such as (1) having an independent panel review the effectiveness and capabilities of ATS; (2) considering future iterations of ATS that would incorporate smart features; (3) ensuring that ATS has the capability to electronically compare manifest and other available data to detect any significant anomalies and facilitate their resolution; (4) ensuring that ATS has the capability to electronically identify, compile, and compare select data elements following a maritime transportation security incident; and (5) developing a schedule to address recommendations made by GAO and the Inspectors General of the Department of the Treasury and DHS. CBP’s in-bond system—which allows goods to transit the United States without officially entering U.S. commerce—must balance the competing goals of providing port security, facilitating trade, and collecting trade revenues. However, we have previously reported that CBP’s management of the system has impeded efforts to manage security risks. Specifically, CBP does not collect detailed information on in-bond cargo at the U.S. port of arrival that could aid in identifying cargo posing a security risk and promote effective use of inspection resources. The in-bond system is designed to facilitate the flow of trade throughout the United States and is estimated to be widely used. The U.S. customs system allows cargo to move from the U.S. arrival port, without appraisal or payment of duties, to another U.S. port for official entry into U.S. commerce or for exportation. In-bond regulations currently allow bonded carriers 15 to 60 days, depending on the mode of shipment, to reach their final destination and allow them to change a shipment’s final destination without notifying CBP. The in-bond system allows the trade community to avoid congestion and delays at U.S. seaports whose infrastructure has not kept pace with the dramatic growth in trade volume. 
In-bond facilitates trade by allowing importers and shipping agents the flexibility to move cargo more efficiently. Based on the in-bond transactions reported by CBP for the 6-month period of October 2004 through March 2005, we found that over 6.5 million in-bond transactions were initiated nationwide. Some CBP port officials have estimated that in-bond shipments represent from 30 percent to 60 percent of goods received at their ports. As discussed earlier in this testimony, CBP uses manifest information it receives on all cargo arriving at U.S. ports (including in-bond cargo) as input for ATS scoring to aid in identifying security risks and setting inspection priorities. For regular cargo, the ATS score is updated with more detailed information as the cargo makes official entry at the arrival port. For in-bond cargo, the ATS scores generally are not updated until these goods move from the port of arrival to the destination port for official entry into United States commerce, or not updated at all for cargo that is intended to be exported. As a result, in-bond goods might transit the United States without having the most accurate ATS risk score. Entry information frequently changes the ATS score for in-bond goods. For example, CBP provided data for four major ports comparing the ATS score assigned to in-bond cargo at the port of arrival based on the manifest to the ATS score given after goods made official entry at the destination port. These data show that for the four ports, after being updated with entry information, the ATS score based on the manifest information stayed the same an average of 30 percent of the time, increased an average of 23 percent of the time, and decreased an average of 47 percent of the time. A higher ATS score can result in higher priority being given to cargo for inspection than otherwise would be given based solely on the manifest information. 
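The rescoring gap described above can be sketched as follows. The adjustments, field names, and numbers are hypothetical illustrations, not actual ATS logic.

```python
# Hypothetical sketch of how an in-bond shipment's manifest-based risk score
# can change once more detailed entry data becomes available. The adjustments
# and field names are illustrative only; they do not reflect actual ATS rules.

def rescore(manifest_score, entry_data):
    """Adjust a manifest-based score using later entry information."""
    score = manifest_score
    if entry_data.get("consignee_matches_manifest") is False:
        score += 20   # hypothetical penalty for a changed consignee
    if entry_data.get("commodity_verified"):
        score -= 10   # hypothetical credit for verified commodity data
    return max(score, 0)

# Score assigned at the port of arrival from manifest data alone.
arrival_score = 35
# For in-bond cargo, this update may happen only at the destination port,
# or never for cargo that is exported.
entry = {"consignee_matches_manifest": False, "commodity_verified": False}
final_score = rescore(arrival_score, entry)
```

The point of the sketch is that the score used to prioritize inspection at the arrival port (`arrival_score`) can differ materially from the score the same cargo would receive with entry data (`final_score`), which is the gap the testimony describes for in-bond goods.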
A lower ATS score can result in cargo being given a lower priority for inspection and potentially shift inspection resources to cargo deemed a higher security risk. Without having the most accurate ATS score, in-bond goods transiting the United States pose a potential security threat because higher-risk cargo may not be identified for inspection at the port of arrival. In addition, scarce inspection resources may be misdirected to in-bond goods that a security score based on better information might have shown did not warrant inspection. We previously recommended that the Commissioner of CBP take action in three areas to improve the management of the in-bond program, which included collecting and using improved information on in-bond shipments to update the ATS score for in-bond movements at the arrival port and enable better-informed decisions affecting security, trade, and revenue collection. DHS agreed with most of our recommendations. According to CBP, it is in the process of developing an in-bond weight set to be used to further identify cargo posing a security risk. The weight set is being developed based on expert knowledge, analysis of previous in-bond seizures, and creation of rules based on in-bond concepts. The SAFE Port Act of 2006 contains provisions related to securing the international cargo supply chain, including provisions related to the movement of in-bond cargo. Specifically, it requires that CBP submit a report to several congressional committees on the in-bond system that includes an assessment of whether ports of arrival should require additional information for in-bond cargo, a plan for tracking in-bond cargo in CBP’s Automated Commercial Environment information system, and an assessment of the personnel required to ensure reconciliation of in-bond cargo between arrival port and destination port. 
It also requires that the report include an assessment of the feasibility of reducing transit time while traveling in-bond, and an evaluation of the criteria for targeting and examining in-bond cargo. CBP submitted the report to the Congress on October 17, 2007. In the report, CBP states its intention to propose various changes addressing the areas of concern, but it does not propose time frames for its actions. In January 2002, CBP initiated its CSI program to detect and deter terrorists from smuggling WMD via cargo containers before they reach domestic seaports. The SAFE Port Act formalized the CSI program into law. Under CSI, foreign governments sign a bilateral agreement with CBP to allow teams of U.S. customs officials to be stationed at foreign seaports to identify cargo container shipments at risk of containing WMD. CBP personnel use automated risk assessment information and intelligence to identify shipments at risk of containing WMD. When a shipment is determined to be high risk, CBP officials refer it to host government officials who determine whether to examine the shipment before it leaves their seaport for the United States. In most cases, host government officials honor the U.S. request by examining the referred shipments with nonintrusive inspection equipment and, if they deem necessary, by opening the cargo containers to physically search the contents inside. CBP planned to have CSI operations at a total of 58 seaports by the end of fiscal year 2007. Our 2003 and 2005 reports on the CSI program found both successes and challenges faced by CBP in implementing the program. Since our last CSI report in 2005, CBP has addressed some of the challenges we identified and has taken steps to improve the CSI program. 
Specifically, CBP contributed to the Strategy to Enhance International Supply Chain Security that DHS issued in July 2007, which addressed a SAFE Port Act requirement and filled an important gap—between broad national strategies and program-specific strategies, such as for CSI—in the strategic framework for maritime security that has evolved since 9/11. In addition, in 2006 CBP issued a revised CSI strategic plan for 2006 to 2011, which added three critical elements that we had identified in our April 2005 report as missing from the plan’s previous iteration. In the revised plan, CBP described how performance goals and measures are related to CSI objectives, how CBP evaluates CSI program operations, and what external factors beyond CBP’s control could affect program operations and outcomes. Also, by expanding CSI operations to 58 seaports by the end of September 2007, CBP would have met its objective of expanding CSI locations and program activities. CBP projected that at the end of fiscal year 2007 between 85 and 87 percent of all U.S.-bound shipments in containers would pass through CSI ports where the risk level of the container cargo is assessed and the contents are examined as deemed necessary. Although CBP’s goal is to review information about all U.S.-bound containers at CSI seaports for high-risk contents before the containers depart for the United States, we reported in 2005 that the agency had not been able to place enough staff at some CSI ports to do so. Also, the SAFE Port Act required DHS to develop a human capital management plan to determine adequate staffing levels in U.S. and CSI ports. CBP has developed a human capital plan, increased the number of staff at CSI ports, and provided additional support to the deployed CSI staff by using staff in the United States to screen containers for various risk factors and potential inspection. 
With these additional resources, CBP reports that manifest data for all U.S.-bound container cargo are reviewed using ATS to determine whether the container is at high risk of containing WMD. However, the agency faces challenges in ensuring that optimal numbers of staff are assigned to CSI ports, in part because of its reliance on placing staff overseas at CSI ports without systematically determining which functions could be performed overseas and which could be performed domestically. Also, in 2006 CBP improved its methods for conducting on-site evaluations of CSI ports, in part by requiring CSI teams at the seaports to demonstrate their proficiency at conducting program activities and by employing electronic tools designed to assist in the efficient and systematic collection and analysis of data to help in evaluating the CSI team’s proficiency. In addition, CBP continued to refine the performance measures it uses to track the effectiveness of the CSI program by streamlining the number of measures it uses to six, modifying how one measure is calculated to address an issue we identified in our April 2005 report, and developing performance targets for the measures. We are continuing to review these assessment practices as part of our ongoing review of the CSI program, and expect to report on the results of this effort shortly. Similar to our recommendation in a previous CSI report, the SAFE Port Act called upon DHS to establish minimum technical criteria for the use of nonintrusive inspection equipment in conjunction with CSI. The act also directs DHS to require that seaports receiving CSI designation operate such equipment in accordance with these criteria and with standard operating procedures developed by DHS. CBP officials stated that their agency faces challenges in implementing this requirement due to sovereignty issues and the fact that the agency is not a standard-setting organization, either for equipment or for inspection processes and practices. 
However, CBP has developed minimum technical standards for equipment used at domestic ports, and the World Customs Organization (WCO) has described issues—not standards—to consider when procuring inspection equipment. Our work suggests that CBP may face continued challenges establishing equipment standards and monitoring host government operations, which we are also examining in our ongoing review of the CSI program. CBP initiated C-TPAT in November 2001 to complement other maritime security programs as part of the agency’s layered security strategy. In October 2006, the SAFE Port Act formalized C-TPAT into law. C-TPAT is a voluntary program that enables CBP officials to work in partnership with private companies to review the security of their international supply chains and improve the security of their shipments to the United States. In return for committing to improve the security of their shipments by joining the program, C-TPAT members receive benefits that result in the likelihood of reduced scrutiny of their shipments, such as a reduced number of inspections or shorter wait times for their shipments. CBP uses information about C-TPAT membership to adjust risk-based targeting of these members’ shipments in ATS. As of July 2007, CBP had certified more than 7,000 companies that import goods via cargo containers through U.S. seaports—which accounted for approximately 45 percent of all U.S. imports—and validated the security practices of 78 percent of these certified participants. We reported on the progress of the C-TPAT program in 2003 and 2005 and recommended that CBP develop a strategic plan and performance measures to track the program’s status in meeting its strategic goals. DHS concurred with these recommendations. The SAFE Port Act also mandated that CBP develop and implement a 5-year strategic plan with outcome-based goals and performance measures for C-TPAT. 
CBP officials stated that they are in the process of updating the C-TPAT strategic plan, which was issued in November 2004, to cover 2007 to 2012. This updated plan is being reviewed within CBP, but a time frame for issuing the plan has not been established. We recommended in our March 2005 report that CBP establish performance measures to track its progress in meeting the goals and objectives established as part of the strategic planning process. Although CBP has since put additional performance measures in place, CBP’s efforts have focused on measures regarding program participation and facilitating trade and travel. CBP has not yet developed performance measures for C-TPAT’s efforts aimed at ensuring improved supply chain security, which is the program’s purpose. In our previous work, we acknowledged that the C-TPAT program holds promise as part of a layered maritime security strategy. However, we also raised a number of concerns about the overall management of the program. Since our past reports, the C-TPAT program has continued to mature. The SAFE Port Act mandated that actions—similar to ones we had recommended in our March 2005 report—be taken to strengthen the management of the program. For example, the act included a new goal that CBP make a certification determination within 90 days of CBP’s receipt of a C-TPAT application, validate C-TPAT members’ security measures and supply chain security practices within 1 year of their certification, and revalidate those members no less than once in every 4 years. As we recommended in our March 2005 report, CBP has developed a human capital plan and implemented a records management system for documenting key program decisions. CBP has addressed C-TPAT staffing challenges by increasing the number of supply chain security specialists from 41 in 2005 to 156 in 2007. 
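The SAFE Port Act review time frames noted above (a certification determination within 90 days of application, validation within 1 year of certification, and revalidation at least once every 4 years) amount to simple date arithmetic, sketched below. The helper function is an illustration and not CBP's actual Portal logic; it also treats the act's goals as hard calendar deadlines for simplicity.

```python
# Sketch of the SAFE Port Act review time frames for C-TPAT members:
# certify within 90 days of application, validate within 1 year of
# certification, revalidate at least once every 4 years. Hypothetical
# helper, not CBP's Portal implementation.
from datetime import date, timedelta

def review_deadlines(application_date, certification_date):
    return {
        "certification_due": application_date + timedelta(days=90),
        "validation_due": certification_date + timedelta(days=365),
        "revalidation_due": certification_date + timedelta(days=4 * 365),
    }

deadlines = review_deadlines(date(2007, 1, 1), date(2007, 3, 1))
```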
In February 2007, CBP updated its resource needs to reflect SAFE Port Act requirements, including that certification, validation, and revalidation processes be conducted within specified time frames. CBP believes that C-TPAT’s current staff of 156 supply chain security specialists will allow it to meet the act’s initial validation and revalidation goals for 2007 and 2008. If an additional 50 specialists authorized by the act are made available by late 2008, CBP expects to be able to remain in compliance with the act’s time frame requirements through 2009. In addition, CBP developed and implemented a centralized electronic records management system to facilitate information storage and sharing and communication with C-TPAT partners. This system—known as the C-TPAT Portal—enables CBP to track and ascertain the status of C-TPAT applicants and partners to ensure that they are certified, validated, and revalidated within required time frames. As part of our ongoing work, we are reviewing the data captured in the Portal, including data needed by CBP management to assess the efficiency of C-TPAT operations and to determine compliance with its program requirements. These actions—dedicating resources to carry out certification and validation reviews and putting a system in place to track the timeliness of these reviews—should help CBP meet several of the mandates of the SAFE Port Act. We expect to issue a final report early next year. Our 2005 report raised concerns about CBP granting benefits prematurely—before CBP had validated company practices. Instead of granting new members full benefits without actual verification of their supply chain security, CBP implemented three tiers to grant companies graduated benefits based on CBP’s certification and validation of their security practices. Related to this, the SAFE Port Act codified CBP’s policy of granting graduated benefits to C-TPAT members. 
Tier 1 benefits—a limited reduction in the score assigned in ATS—are granted to companies upon certification that their written description of their security profile meets minimum security criteria. Companies whose security practices CBP validates in an on-site assessment receive Tier 2 benefits that may include reduced scores in ATS, reduced cargo examinations, and priority searches of cargo. If CBP’s validation shows sustained commitment by a company to security practices beyond what is expected, the company receives Tier 3 benefits. Tier 3 benefits may include expedited cargo release at U.S. ports at all threat levels, further reduction in cargo examinations, priority examinations, and participation in joint incident management exercises. Our 2005 report also raised concerns about whether the validation process was rigorous enough. Similarly, the SAFE Port Act mandates that the validation process be strengthened, including setting a 1-year time frame for completing validations. CBP initially set a goal of validating all companies within their first 3 years as C-TPAT members, but the program’s rapid growth in membership made the goal unachievable. CBP then moved to a risk-based approach to selecting members for validation, considering factors such as a company’s having foreign supply chain operations in a known terrorist area or involving multiple foreign suppliers. CBP further modified its approach to selecting companies for validation to achieve greater efficiency by conducting “blitz” operations to validate foreign elements of multiple members’ supply chains in a single trip. Blitz operations focus on factors such as C-TPAT members within a certain industry, supply chains within a certain geographic area, or foreign suppliers to multiple C-TPAT members. Risks remain a consideration, according to CBP, but the blitz strategy drives the decision of when a member company will be validated. 
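The graduated benefits described above can be summarized in a simple mapping. The benefit lists paraphrase the tiers in this testimony; the data structure itself is only an illustration, not an actual CBP system.

```python
# Illustrative mapping of C-TPAT's three-tier graduated benefits, as
# described in this testimony. The structure is a sketch, not CBP's.
TIER_BENEFITS = {
    1: [  # granted upon certification of the written security profile
        "limited reduction in ATS score",
    ],
    2: [  # granted after CBP validates practices in an on-site assessment
        "reduced ATS scores",
        "reduced cargo examinations",
        "priority searches of cargo",
    ],
    3: [  # granted for sustained commitment beyond expected practices
        "expedited cargo release at all threat levels",
        "further reduction in cargo examinations",
        "priority examinations",
        "participation in joint incident management exercises",
    ],
}

def benefits_for(tier):
    """Return the benefits a member at the given tier may receive."""
    return TIER_BENEFITS.get(tier, [])
```

The mapping makes the policy point concrete: benefits accrue only as CBP's verification of a member's practices deepens, rather than being granted in full at certification.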
In addition to taking these actions to efficiently conduct validations, CBP has periodically updated the minimum security requirements that companies must meet to be validated and is piloting the use of third-party contractors to conduct validation assessments. As part of our ongoing work, we are reviewing these actions, which are required as part of the SAFE Port Act, and other CBP efforts to enhance its C-TPAT validation process. The CSI and C-TPAT programs have provided a model for global customs security standards, but as other countries adopt the core principles of CSI and programs similar to C-TPAT, CBP may face new challenges. Foreign officials within the WCO and elsewhere have observed the CSI and C-TPAT programs as potential models for enhancing supply chain security. Also, CBP has taken a lead role in working with members of the domestic and international customs and trade community on approaches to standardizing supply chain security worldwide. As CBP has recognized, and we have previously reported, in security matters the United States is not self-contained, in either its problems or its solutions. The growing interdependence of nations requires policymakers to recognize the need to work in partnerships across international boundaries to achieve vital national goals. For this reason, CBP has committed through its strategic planning process to develop and promote an international framework of standards governing customs-to-customs relationships and customs-to-business relationships in a manner similar to CSI and C-TPAT, respectively. To achieve this, CBP has worked with foreign customs administrations through the WCO to establish a framework creating international standards that provide increased security of the global supply chain while facilitating international trade. 
The member countries of the WCO, including the United States, adopted such a framework, known as the WCO Framework of Standards to Secure and Facilitate Global Trade and commonly referred to as the SAFE Framework, in June 2005. The SAFE Framework internationalizes the core principles of CSI in creating global standards for customs security practices and promotes international customs-to-business partnership programs, such as C-TPAT. As of September 11, 2007, 148 WCO member countries had signed letters of intent to implement the SAFE Framework. CBP, along with the customs administrations of other countries and through the WCO, provides technical assistance and training to those countries that want to implement the SAFE Framework, but do not yet have the capacity to do so. The SAFE Framework enhances the CSI program by promoting the implementation of CSI-like customs security practices, including the use of electronic advance information requirements and risk-based targeting, in both CSI and non-CSI ports worldwide. The framework also lays the foundation for mutual recognition, an arrangement whereby one country can attain a certain level of assurance about the customs security standards and practices and business partnership programs of another country. In June 2007, CBP entered into the first mutual recognition arrangement of a business-to-customs partnership program with the New Zealand Customs Service. This arrangement stipulates that members of one country’s business-to-customs program be recognized and receive similar benefits from the customs service of the other country. CBP is pursuing similar arrangements with Jordan and Japan, and is conducting a pilot program with the European Commission to test approaches to achieving mutual recognition and address differences in their respective programs. 
However, the specific details of how the participating countries’ customs officials will implement the mutual recognition arrangement—such as what benefits, if any, should be allotted to members of other countries’ C-TPAT-like programs—have yet to be determined. As CBP goes forward, it may face challenges in defining the future of its CSI and C-TPAT programs and, more specifically, in managing the implementation of mutual recognition arrangements, including articulating and agreeing to the criteria for accepting another country’s program; the specific arrangements for implementation, including the sharing of information; and the actions for verification, enforcement, and, if necessary, termination of the arrangement. DHS also has container security programs to develop and test equipment to scan containers for radiation. Its Domestic Nuclear Detection Office (DNDO) was originally created in April 2005 by presidential directive, but the office was formally established in October 2006 by Section 501 of the SAFE Port Act. DNDO has lead responsibility for conducting the research, development, testing, and evaluation of radiation detection equipment that can be used to prevent nuclear or radiological materials from entering the United States. DNDO is charged with devising the layered system of radiation detection equipment and operating procedures—known as the “global architecture”—designed to prevent nuclear smuggling at foreign ports, the nation’s borders, and inside the United States. Much of DNDO’s work on radiation detection equipment to date has focused on the development and use of radiation detection portal monitors, which are larger-scale equipment that can screen vehicles, people, and cargo entering the United States. Current portal monitors detect the presence of radiation but cannot distinguish between benign, naturally occurring radiological materials, such as ceramic tile, and dangerous materials, such as highly enriched uranium. 
Since 2005, DNDO has been testing, developing, and planning to deploy the next generation of portal monitors, known as “Advanced Spectroscopic Portals” (ASP), which can not only detect but also identify radiological and nuclear materials within a shipping container. In July 2006, DNDO announced that it had awarded contracts to three vendors to develop and purchase $1.2 billion worth of ASPs over 5 years for deployment at U.S. points of entry. We have reported a number of times to Congress concerning DNDO’s execution of the ASP program. To ensure that DHS’s substantial investment in radiation detection technology yields the greatest possible level of detection capability at the lowest possible cost, in March 2006 we recommended that once the costs and capabilities of ASPs were well understood, and before any of the new equipment was purchased for deployment, the Secretary of DHS work with the Director of DNDO to analyze the costs and benefits of deploying ASPs. Further, we recommended that this analysis focus on determining whether any additional detection capability provided by the ASPs was worth the considerable additional costs. In response to our recommendation, DNDO issued its cost-benefit analysis in May 2006 and an updated, revised version in June 2006. According to senior agency officials, DNDO believes that the basic conclusions of its cost-benefit analysis showed that the new ASP monitors are a sound investment for the U.S. government. However, in October 2006, we concluded that DNDO’s cost-benefit analysis did not provide a sound basis for DNDO’s decision to purchase and deploy ASP technology because it relied on assumptions about the anticipated performance level of ASPs instead of actual test data and did not justify DHS’s planned $1.2 billion expenditure. We also reported that DNDO did not assess the likelihood that ASPs would either misidentify or fail to detect nuclear or radiological material. 
Rather, it focused its analysis on reducing the time necessary to screen traffic at border checkpoints and reducing the impact of any delays on commerce. We recommended that DNDO conduct further testing of ASPs and the currently deployed portal monitors before spending additional funds to purchase ASPs. DNDO conducted this testing of ASPs at the Nevada test site during February and March 2007. In September 2007, we testified on these tests, stating that, in our view, DNDO used biased test methods that enhanced the performance of the ASPs. In particular, DNDO conducted preliminary runs of almost all the materials and combinations of materials that it used in the formal tests and then allowed ASP contractors to collect test data and adjust their systems to identify these materials. In addition, DNDO did not attempt in its tests to identify the limitations of ASPs—a critical oversight in its test plan. Specifically, the materials that DNDO included in its test plan did not emit enough radiation to hide or mask the presence of nuclear materials located within a shipping container. Finally, in its tests of the existing radiation detection system, DNDO did not include a critical standard operating procedure that CBP officers use to improve the system’s effectiveness. It is important to note that, during the course of our work, CBP, DOE, and national laboratory officials we spoke to voiced concern about their lack of involvement in the planning and execution of the Nevada test site tests. For example, DOE officials told us that they informed DNDO in November 2006 of their concerns that the materials DNDO planned to use in its tests were too weak to effectively mask the presence of nuclear materials in a container. 
DNDO officials rejected DOE officials’ suggestion to use stronger materials in the tests because, according to DNDO, there would be insufficient time to obtain these materials and still obtain the DHS Secretary’s approval for full-scale production of ASPs by DNDO’s self-imposed deadline of June 26, 2007. Although DNDO has agreed to perform computer simulations to address this issue, the DNDO Director would not commit at the September testimony to delaying full-scale ASP production until all the test results were in. The Megaports Initiative, initiated by DOE’s National Nuclear Security Administration in 2003, represents another component in the efforts to prevent terrorists from smuggling WMD in cargo containers from overseas locations. The goal of this initiative is to enable foreign government personnel at key foreign seaports to use radiation detection equipment to screen shipping containers entering and leaving these ports, regardless of the containers’ destination, for nuclear and other radioactive material that could be used against the United States or its allies. DOE installs radiation detection equipment, such as radiation portal monitors and handheld radioactive isotope identification devices, at foreign seaports; the equipment is then operated by foreign government officials and port personnel working at these ports. Through August 2007, DOE had completed installation of radiation detection equipment at eight ports: Rotterdam, the Netherlands; Piraeus, Greece; Colombo, Sri Lanka; Algeciras, Spain; Singapore; Freeport, Bahamas; Manila, Philippines; and Antwerp, Belgium (Phase I). Operational testing is under way at four additional ports: Antwerp, Belgium (Phase II); Puerto Cortes, Honduras; Qasim, Pakistan; and Laem Chabang, Thailand. 
Additionally, DOE has signed agreements to begin work and is in various stages of implementation at ports in 12 other countries, including the United Kingdom, United Arab Emirates/Dubai, Oman, Israel, South Korea, China, Egypt, Jamaica, the Dominican Republic, Colombia, Panama, and Mexico, as well as Taiwan and Hong Kong. Several of these ports are also part of the Secure Freight Initiative, discussed in the next section. Further, in an effort to expand cooperation, DOE is engaged in negotiations with approximately 20 additional countries in Europe, Asia, the Middle East, and Latin America. DOE had made limited progress in gaining agreements to install radiation detection equipment at the highest priority seaports when we reported on this program in March 2005. At that time, the agency had completed work at only two ports and signed agreements to initiate work at five others. We also noted that DOE’s cost projections for the program were uncertain, in part because they were based on DOE’s $15 million estimate for the average cost per port. This per-port cost estimate may not be accurate because it was based primarily on DOE’s radiation detection assistance work at Russian land borders, airports, and seaports and did not account for the fact that the costs of installing equipment at individual ports vary and are influenced by factors such as a port’s size, physical layout, and existing infrastructure. Since our review, DOE has developed a strategic plan for the Megaports Initiative and revised its per-port estimates to reflect port size, with estimates ranging from $2.6 million to $30.4 million. As we previously reported, DOE faces several operational and technical challenges specific to installing and maintaining radiation detection equipment at foreign ports as the agency continues to implement its Megaports Initiative. 
These challenges include ensuring the ability to detect radioactive material, overcoming the physical layout of ports and cargo-stacking configurations, and sustaining equipment in port environments with high winds and sea spray. The SAFE Port Act required that a pilot program—known as the Secure Freight Initiative (SFI)—be conducted to determine the feasibility of 100 percent scanning of U.S.-bound containers. To fulfill this requirement, CBP and DOE jointly announced the formation of SFI in December 2006, as an effort to build upon existing port security measures by enhancing the U.S. government’s ability to scan containers for nuclear and radiological materials overseas and better assess the risk of inbound containers. In essence, SFI builds upon the CSI and Megaports programs. The SAFE Port Act specified that new integrated scanning systems that couple nonintrusive imaging equipment and radiation detection equipment must be pilot-tested. It also required that, once fully implemented, the pilot integrated scanning system scan 100 percent of containers destined for the United States that are loaded at pilot program ports. According to agency officials, the initial phase of the initiative will involve the deployment of a combination of existing container scanning technology—such as X-ray and gamma ray scanners used by host nations at CSI ports to locate high-density objects that could be used to shield nuclear materials inside containers—and radiation detection equipment. The ports chosen to receive this integrated technology are: Port Qasim in Pakistan, Puerto Cortes in Honduras, and Southampton in the United Kingdom. Four other ports located in Hong Kong, Singapore, South Korea, and Oman will receive more limited deployment of these technologies as part of the pilot program. According to CBP, containers from these ports will be scanned for radiation and other risk factors before they are allowed to depart for the United States. 
If the scanning systems indicate that there is a concern, both CSI personnel and host country officials will simultaneously receive an alert, and the specific container will be inspected before it continues to the United States. CBP officials, either on the scene locally or at CBP's National Targeting Center, will determine which containers are inspected. Per the SAFE Port Act, CBP is to report by April 2008 on, among other things, the lessons learned from the SFI pilot ports and the need for and the feasibility of expanding the system to other CSI ports. Every 6 months thereafter, CBP is to report on the status of full-scale deployment of the integrated scanning systems to scan all containers bound for the United States before their arrival. Recent legislative actions have updated U.S. maritime security requirements and may affect overall international maritime security strategy. In particular, the recently enacted Implementing Recommendations of the 9/11 Commission Act (9/11 Act) requires, by 2012, 100 percent scanning of U.S.-bound cargo containers using nonintrusive imaging equipment and radiation detection equipment at foreign seaports. The act also specifies conditions for potential extensions beyond 2012 if a seaport cannot meet that deadline. Additionally, it requires the Secretary of DHS to develop technological and operational standards for scanning systems used to conduct 100 percent scanning at foreign seaports. The Secretary also is required to ensure that actions taken under the act do not violate international trade obligations and are consistent with the WCO SAFE Framework. The 9/11 Act provision replaces the SAFE Port Act requirement for 100 percent scanning of cargo containers before their arrival in the United States, which called for implementation as soon as possible rather than specifying a deadline. 
While we have not yet reviewed the implementation of the 100 percent scanning requirement, we have a number of preliminary observations, based on field visits to foreign ports, regarding potential challenges CBP may face in implementing this requirement: CBP may face challenges balancing the new requirement with the current international risk management approach. CBP may have difficulty requiring 100 percent scanning while also maintaining a risk-based security approach that has been developed with many of its international partners. Currently, under the CSI program, CBP uses automated targeting tools to identify containers that pose a risk for terrorism for further inspection before they are placed on vessels bound for the United States. As we have previously reported, using risk management allows for the reduction of risk from a possible terrorist attack on the nation given the resources allocated and is an approach that has been accepted governmentwide. Furthermore, many U.S. and international customs officials we have spoken to, including officials from the World Customs Organization, have stated that the 100 percent scanning requirement is contrary to the SAFE Framework developed and implemented by the international customs community, including CBP. The SAFE Framework, based on CSI and C-TPAT, calls for a risk management approach, whereas the 9/11 Act calls for the scanning of all containers regardless of risk. The United States may not be able to reciprocate if other countries request it. The CSI program, whereby CBP officers are placed at foreign seaports to target cargo bound for the United States, is based on a series of bilateral, reciprocal agreements with foreign governments. These reciprocal agreements also allow foreign governments the opportunity to place customs officials at U.S. seaports and request inspection of cargo containers departing from the United States and bound for their home country. 
Currently, customs officials from certain countries are stationed at domestic seaports, and agency officials have told us that CBP has inspected 100 percent of containers that these officials have requested for inspection. According to CBP officials, the SFI pilot, as an extension of the CSI program, allows foreign officials to ask the United States to reciprocate and scan 100 percent of cargo containers bound for those countries. Although the act establishing the 100 percent scanning requirement does not mention reciprocity, CBP officials have told us that the agency does not have the capacity to reciprocate should it be requested to do so, as other government officials have indicated they might when this provision of the 9/11 Act is in place. Logistical feasibility is unknown and may vary by port. Many ports may lack the space necessary to install the additional equipment needed to comply with the requirement to scan 100 percent of U.S.-bound containers. Additionally, we observed that scanning equipment at some seaports is located several miles away from where cargo containers are stored, which may make it time consuming and costly to transport these containers for scanning. Similarly, some seaports are configured in such a way that there are no natural bottlenecks where equipment could be placed so that all outgoing containers are scanned, creating the potential for containers to slip by without being scanned. Transshipment cargo containers—containers moved from one vessel to another—are available for scanning only for a short period of time and may be difficult to access. Similarly, it may be difficult to scan cargo containers that remain on board a vessel as it passes through a foreign seaport. CBP officials told us that such containers that are designated as high risk at CSI ports are currently not scanned unless specific threat information is available regarding the cargo in that particular container. 
Technological maturity is unknown. Integrated scanning technologies to test the feasibility of scanning 100 percent of U.S.-bound cargo containers are not yet operational at all seaports participating in the pilot program, known as SFI. The SAFE Port Act requires CBP to produce a report regarding the program, which will include an evaluation of the effectiveness of scanning equipment at the SFI ports. However, this report will not be due until April 2008. Moreover, agency officials have stated that the amount of bandwidth necessary to transmit scanning equipment outputs to CBP officers for review exceeds what is currently feasible and that the electronic infrastructure necessary to transmit these outputs may be limited at some foreign seaports. Additionally, there are currently no international standards for the technical capabilities of inspection equipment. Agency officials have stated that CBP is not a standard-setting organization and has limited authority to implement standards for sovereign foreign governments. Resource responsibilities have not been determined. The 9/11 Act does not specify who would pay for additional scanning equipment, personnel, computer systems, or infrastructure necessary to establish 100 percent scanning of U.S.-bound cargo containers at foreign ports. According to the Congressional Budget Office (CBO) in its analysis of estimates for implementing this requirement, this provision would neither require nor prohibit the U.S. federal government from bearing the cost of conducting scans. For the purposes of its analysis, CBO assumed that the cost of acquiring, installing, and maintaining systems necessary to comply with the 100 percent scanning requirement would be borne by foreign ports to maintain trade with the United States. However, foreign government officials we have spoken to expressed concerns regarding the cost of equipment. 
They also stated that the process for procuring scanning equipment may take years and can be difficult when trying to comply with changing U.S. requirements. These officials also expressed concern regarding the cost of additional personnel necessary to (1) operate new scanning equipment, (2) view scanned images and transmit them to the United States, and (3) resolve false alarms. An official from one country with whom we met told us that while his country does not scan 100 percent of exports, modernizing its customs service to focus more on exports required a 50 percent increase in personnel, and other countries trying to implement the 100 percent scanning requirement would likely have to increase the size of their customs administrations by at least as much. Use and ownership of data have not been determined. The 9/11 Act does not specify who will be responsible for managing the data collected through 100 percent scanning of U.S.-bound containers at foreign seaports. However, the SAFE Port Act specifies that scanning equipment outputs from SFI will be available for review by U.S. government officials either at the foreign seaport or in the United States. It is not clear who would be responsible for collecting, maintaining, disseminating, viewing or analyzing scanning equipment outputs under the new requirement. Other questions to be resolved include ownership of data, how proprietary information would be treated, and how privacy concerns would be addressed. CBP officials have indicated they are aware that challenges exist. They also stated that the SFI will allow the agency to determine whether these challenges can be overcome. According to senior officials from CBP and international organizations we contacted, 100 percent scanning of containers may divert resources, causing containers that are truly high risk to not receive adequate scrutiny due to the sheer volume of scanning outputs that must be analyzed. 
These officials also expressed concerns that 100 percent scanning of U.S.-bound containers could hinder trade, leading to long lines and burdens on staff responsible for viewing images. However, given that the SFI pilot program has only recently begun, it is too soon to determine how the 100 percent scanning requirement will be implemented and its overall impact on security. We provided a draft of the information in this testimony to DHS. DHS provided technical comments, which we incorporated as appropriate. Madam Chairwoman and members of the subcommittee, this completes my prepared statement. I will be happy to respond to any questions that you or other members of the subcommittee have at this time. For information about this testimony, please contact Stephen L. Caldwell, Director, Homeland Security and Justice Issues, at (202) 512-9610, or caldwells@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Richard Ascarate, Jonathan Bachman, Jason Bair, Fredrick Berry, Christine Broderick, Stockton Butler, Steven Calvo, Frances Cook, Christopher Currie, Anthony DeFrank, Wayne Ekblad, Christine Fossett, Nkenge Gibson, Geoffrey Hamilton, Christopher Hatscher, Virginia Hughes, Valerie Kasindi, Monica Kelly, Ryan Lambert, Nicholas Larson, Daniel Klabunde, Matthew Lee, Gary Malavenda, Robert Rivas, Leslie Sarapu, James Shafer, Kate Siggerud, Daren Sweeney, and April Thompson. Maritime Security: One Year Later: A Progress Report on the SAFE Port Act. GAO-08-171T. Washington, D.C.: October 16, 2007. Maritime Security: The SAFE Port Act and Efforts to Secure Our Nation’s Seaports. GAO-08-86T. Washington, D.C.: October 4, 2007. Homeland Security: Preliminary Information on Federal Actions to Address Challenges Faced by State and Local Information Fusion Centers. GAO-07-1241T. Washington, D.C.: September 27, 2007. 
Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation of Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1081T. Washington, D.C.: September 6, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007. Department of Homeland Security: Science and Technology Directorate’s Expenditure Plan. GAO-07-868. Washington, D.C.: June 22, 2007. Homeland Security: Guidance from Operations Directorate Will Enhance Collaboration among Departmental Operations Centers. GAO-07-683T. Washington, D.C.: June 20, 2007. Department of Homeland Security: Progress and Challenges in Implementing the Department’s Acquisition Oversight Plan. GAO-07-900. Washington, D.C.: June 13, 2007. Department of Homeland Security: Ongoing Challenges in Creating an Effective Acquisition Organization. GAO-07-948T. Washington, D.C.: June 7, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-835T. Washington, D.C.: May 15, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-833T. Washington, D.C.: May 10, 2007. 
Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007. Transportation Security: DHS Efforts to Eliminate Redundant Background Check Investigations. GAO-07-756. Washington, D.C.: April 26, 2007. International Trade: Persistent Weaknesses in the In-Bond Cargo System Impede Customs and Border Protection’s Ability to Address Revenue, Trade, and Security Concerns. GAO-07-561. Washington, D.C.: April 17, 2007. Transportation Security: TSA Has Made Progress in Implementing the Transportation Worker Identification Credential Program, but Challenges Remain. GAO-07-681T. Washington, D.C.: April 12, 2007. Customs Revenue: Customs and Border Protection Needs to Improve Workforce Planning and Accountability. GAO-07-529. Washington, D.C.: April 12, 2007. Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007. Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories’ Test Results on Radiation Portal Monitors in Support of DNDO’s Testing and Development Programs. GAO-07-347R. Washington, D.C.: March 9, 2007. Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors’ Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006. Transportation Security: DHS Should Address Key Challenges before Implementing the Transportation Worker Identification Credential Program. GAO-06-982. Washington, D.C.: September 29, 2006. Maritime Security: Information-Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006. 
Combating Nuclear Smuggling: DHS Made Progress Deploying Radiation Detection Equipment at U.S. Ports of Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005. Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005. Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: April 26, 2005. Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005. Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: March 11, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 30, 2005. Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: March 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 2004. 
Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Because the safety and economic security of the United States depend in substantial part on the security of its 361 seaports, the United States has a vital national interest in maritime security. The Security and Accountability for Every Port Act (SAFE Port Act) modified existing legislation and created and codified new programs related to maritime security. The Department of Homeland Security (DHS) and its U.S. Coast Guard, Transportation Security Administration, and U.S. Customs and Border Protection have key maritime security responsibilities. This testimony synthesizes the results of GAO's completed work and preliminary observations from GAO's ongoing work related to the SAFE Port Act pertaining to (1) overall port security, (2) security at individual facilities, and (3) cargo container security. To perform this work, GAO visited domestic and overseas ports; reviewed agency program documents, port security plans, and post-exercise reports; and interviewed officials from the federal, state, local, private, and international sectors. Federal agencies have improved overall port security efforts by establishing committees to share information with local port stakeholders, taking steps to establish interagency operations centers to monitor port activities, conducting operations such as harbor patrols and vessel escorts, writing port-level plans to prevent and respond to terrorist attacks, testing such plans through exercises, and assessing the security at foreign ports. However, these agencies face resource constraints and other challenges trying to meet the SAFE Port Act's requirements to expand these activities. For example, the Coast Guard faces budget constraints in trying to expand its current command centers and include other agencies at the centers. 
Similarly, private facilities and federal agencies have taken action to improve security at about 3,000 individual facilities by writing facility-specific security plans, inspecting facilities to determine compliance with their plans, and developing special identification cards for workers to help prevent terrorists from getting access to secure areas. Federal agencies face challenges trying to meet the act's requirements to expand the scope or speed the implementation of such activities. For example, the Transportation Security Administration missed the act's deadline to implement the identification card program at 10 selected ports because of delays in testing equipment and procedures. Federal programs related to the security of cargo containers have also improved as agencies are enhancing systems to identify high-risk cargo, expanding partnerships with other countries to screen containers before they depart for the United States, and working with international organizations to develop a global framework for container security. Federal agencies face challenges implementing container security aspects of the SAFE Port Act and other legislation. For example, Customs and Border Protection must test and implement a new program to scan 100 percent of all incoming containers overseas—a departure from its existing risk-based programs.
Fueled by global markets, more open borders, and improvements in telecommunications, international crime has become a growing worldwide problem. In 1995, the President identified international crime as a threat to the national interest of the United States. Prior to and since then, the federal government has been engaged in a crosscutting effort to address various types of such crime, including money laundering, terrorism, and public corruption. Despite the multiagency nature of the federal response, no sustained executive-level coordination—for which NSC has the designated responsibility—has been apparent. Furthermore, in the past, the government has neither tracked nor prioritized the billions of dollars in spending on certain elements of the response, such as combating terrorism. In addition, because of the absence of governmentwide, outcome-oriented performance measures, the effectiveness and impact of the response are unclear. Our prior work on other national issues that involve crosscutting responses—ranging from employment training to counterterrorism—shows that, ultimately, achieving any meaningful results requires firm linkages of strategy, resources, and outcome-oriented performance measures. Otherwise, scarce resources are likely to be wasted, overall effectiveness will be limited or not known, and accountability will not be ensured. Accordingly, we are recommending that the Assistant to the President for National Security Affairs take appropriate action to ensure sustained executive-level coordination and assessment of the multiagency federal efforts in connection with international crime. Presented below is summary information about each of the topics that we studied. More detailed information about each topic is presented in appendixes II through VII, respectively: U.S. framework for addressing international crime. The U.S. government’s framework for addressing international crime was the result of several developments. 
For example, in October 1995, recognizing that international crime presented a direct and immediate threat to national security, Presidential Decision Directive 42 (PDD-42) directed the development of an effective U.S. response. As a key part of the response, in May 1998, the President announced the U.S. government’s International Crime Control Strategy, which was formulated with input from multiple law enforcement agencies and was intended to serve as a dynamic, evolving roadmap for a coordinated, long-term attack on international crime. The strategy consists of 8 overarching goals (e.g., “counter international financial crime”) and 30 implementing objectives (e.g., “seize the assets of international criminals through aggressive use of forfeiture law”) and was intended to complement and not supplant related strategies, such as the National Drug Control Strategy. The crime control strategy has not been updated since its inception to reflect changes in the threat from international crime. In April 2001, in response to our inquiry, NSC officials told us that the issue of international crime and the framework for the U.S. response were under review by the new administration. The NSC officials had no estimate of when the review would be completed; however, the officials said that PDD-42 and the International Crime Control Strategy were still considered to be in effect during the ongoing review process. (See app. II.) Extent of international crime. While there is general consensus among law enforcement officials, researchers, and others that international crime is growing, there is also agreement that measuring the true extent of such crime is difficult. Nevertheless, several efforts have attempted to gauge the extent of and the threat posed by international crime to the United States and other countries. For example, in 1999 and 2000, threat assessments were prepared to support the International Crime Control Strategy. 
While the 1999 threat assessment was classified, a published version of the 2000 assessment provided various indicators or measures of international crime within five broad categories—(1) terrorism and drug trafficking; (2) illegal immigration, trafficking of women and children, and environmental crimes; (3) illicit transfer or trafficking of products across international borders; (4) economic trade crimes; and (5) financial crimes. Furthermore, within each of the five broad categories, specific types of crimes were discussed. Regarding the financial crime category, for example, the assessment noted that worldwide money laundering could involve roughly $1 trillion per year, with $300 billion to $500 billion of that representing laundering related to drug trafficking. The assessment acknowledged, however, that there is little analytical work supporting most estimates of money laundering. According to NSC, whether the threat assessment would continue to be updated periodically is being considered as part of the new administration’s review of international crime and no decisions had been made in this regard. (See app. III.) Selected federal entities’ roles in responding to international crime and coordination of the response. In response to our inquiry, NSC identified 34 federal entities—including cabinet-level departments and their components, and independent agencies—that it considered as having significant roles in fighting international crime. The federal entities included those that are the focus of this report, namely the departments of Justice, Treasury, and State, and USAID. Within Justice, for example, relevant components include the Criminal Division, Federal Bureau of Investigation (FBI), Drug Enforcement Administration (DEA), Immigration and Naturalization Service, U.S. National Central Bureau of the International Criminal Police Organization (INTERPOL), U.S. Marshals Service, and U.S. Attorney Offices. 
Relevant Treasury components include the Bureau of Alcohol, Tobacco and Firearms (ATF); Customs Service; Internal Revenue Service-Criminal Investigation; Secret Service; Financial Crimes Enforcement Network (FinCEN); the Federal Law Enforcement Training Center; and the Office of Foreign Assets Control. Within State, the Bureau for International Narcotics and Law Enforcement Affairs has a significant role, which includes coordinating and funding U.S. training assistance provided to foreign law enforcement entities; also within State, the Bureau of Diplomatic Security and the Coordinator for Counterterrorism have roles in combating international crime. To illustrate the broad interagency nature of international crime control, in 1997 we identified 43 federal entities with terrorism-related programs and activities. Similarly, 41 federal entities have an interest or are involved in operations at U.S. seaports; 15 of these entities have some jurisdiction over criminal activities occurring at these seaports, according to an interagency commission report. Implementation of the International Crime Control Strategy inherently involves some jurisdictional overlaps, which necessitate coordination among agencies. To facilitate executive-level coordination of the strategy, PDD-42 established the Special Coordination Group on International Crime, composed of high-level officials from relevant agencies and chaired by a senior NSC official. The Special Coordination Group was to meet periodically to ensure an integrated focus on the federal response to international crime. According to State and NSC officials, however, while the Special Coordination Group met 14 times in 1998, it met infrequently thereafter. At one point the Special Coordination Group did not meet at all for about 9 months (between September 1999 and June 2000) because some of its members were involved in other activities, such as preparing for year-2000 computer compliance and because of staffing shortages. 
In this regard, two NSC staff were assigned to coordinate international crime matters. A Presidential directive issued in February 2001 (National Security Presidential Directive 1, or NSPD-1) reorganized NSC and abolished the existing structure of interagency groups, including the Special Coordination Group. The directive did not indicate at the time how the overall response to international crime would be coordinated under NSC's new structure. In April 2001, the Assistant to the President for National Security Affairs established a Policy Coordination Committee (PCC) for International Organized Crime. The PCC is to be composed of officials at the Assistant Secretary level from relevant federal entities and is to be chaired by the NSC Senior Director for Transnational Threats. The PCC is to coordinate policy formulation, program oversight, and new initiatives related to a number of international crime issues, including arms trafficking, trafficking in persons, and foreign official corruption. According to NSC, one of the PCC's priorities is to evaluate the 1998 International Crime Control Strategy. Various other departmental and agency-level coordination mechanisms—such as coordination centers, interagency coordinators, and working groups—have been established over the years to address specific types of international crimes. For example, Justice and State recently created a center for combating trafficking in persons and migrant smuggling. (See app. IV.) Efforts to combat public corruption internationally. The International Crime Control Strategy addresses corruption in two contexts. One context involves efforts to eliminate the use of bribes in transnational business activities, such as government contracting. In this context, an international anti-bribery agreement adopted by the Organization for Economic Cooperation and Development (OECD) represents an effort to eliminate bribery of foreign public officials in business transactions. 
This agreement—the OECD Convention on Combating Bribery of Foreign Public Officials in International Business Transactions—entered into force in 1999. The Convention generally requires signatory nations to criminalize bribes to foreign public officials made to obtain or retain business or other improper advantage in the conduct of international business. Essentially, the Convention, according to State, reflects the long-term U.S. interest in creating a level playing field among the world's major trading nations by internationalizing the anti-bribery principles of the Foreign Corrupt Practices Act (P.L. 95-213), which the United States enacted in 1977. The Departments of State and Commerce are required to provide the Congress with annual reports on the implementation of the OECD Convention. In its third annual report, issued in July 2001, Commerce noted that progress had been made on the first priority of ensuring that all signatories deposit an instrument of ratification with OECD. As of July 2001, 33 of the 34 signatories to the Convention had deposited instruments of ratification, and 30 had legislation in place to implement the Convention. The report pointed out that the United States continued to have concerns about the adequacy of countries' legislation to meet all commitments under the Convention. The strategy's other context on public corruption involves rule of law assistance, which focuses on U.S. support for legal, judicial, and law enforcement reform efforts undertaken by foreign governments. Generally, proponents view such assistance as being especially important in that widespread corruption among justice and security officials can potentially destabilize governments. In a 1999 report to congressional requesters, we noted that the United States provided at least $970 million in rule of law assistance to countries throughout the world from fiscal years 1993 through 1998. 
Four regions—Latin America and the Caribbean, Africa, Central Europe, and the New Independent States—received about 80 percent of the total. Our 1999 report also noted that at least 35 federal entities—consisting of 7 cabinet-level departments and 28 related agencies, bureaus, and offices—had a role in providing the assistance. Furthermore, the report recognized that, due to longstanding congressional concerns about ineffective coordination, in February 1999, State appointed a rule of law coordinator to work with all the relevant U.S. governmental entities. More recently, in April 2001, we reported that— after 10 years and almost $200 million in funding—rule of law assistance to 12 countries of the former Soviet Union had shown limited results. We recommended that program management be improved by implementing requirements for projects to include specific strategies for (1) achieving impact and sustainable results and (2) monitoring and evaluating outcomes. (See app. V.) U.S. programs for providing technical assistance. Much of the technical assistance that the United States provides to other nations for fighting international crime involves training, particularly training at law enforcement academies established abroad. For instance, State Department-funded academies have been established in Europe, Southeast Asia, and Southern Africa, and plans are underway to establish an academy to serve Central America. Also, the Department of Justice strives to strengthen justice systems abroad through training and assistance in developing criminal justice institutions provided through two programs— (1) the International Criminal Investigative Training Assistance Program and (2) Overseas Prosecutorial Development, Assistance and Training. 
In addition to training, federal agencies—particularly Justice and Treasury—help foreign nations combat international crime by providing technical assistance through specialized support services and systems, such as computerized databases and forensic laboratories. For example, the National Tracing Center—operated by ATF—traces firearms for foreign law enforcement agencies, as well as for federal and state agencies. (See app. VI.) Measures of the effectiveness of U.S. efforts. There are no standard measures of effectiveness to assess the federal government’s overall efforts to address international crime. As one of its objectives, the International Crime Control Strategy indicated that a governmentwide performance measurement system for international crime would be established—similar to the system for measuring the effectiveness of the nation’s drug control efforts implemented by the Office of National Drug Control Policy. However, according to NSC officials, no actions were ever taken to establish such a system. Rather, the task of developing performance measures was deferred to the individual federal entities with roles in combating international crime. Under the Government Performance and Results Act of 1993 (GPRA), federal agencies are to prepare strategic and performance plans, which describe their respective program activities and how effectiveness will be measured. Regarding international crime, Justice’s, Treasury’s, and State’s plans each have sections describing their efforts to combat specific types of crime, along with the performance measures to be tracked. In some cases, however, these measures do not adequately address effectiveness. For example, in June 2000, we reported our observations on key outcomes described in Justice’s performance report and plan. Among other things, we noted that Justice’s performance measures focused on outputs rather than outcomes and did not capture all aspects of performance.
Furthermore, in a broader context—despite the existence of GPRA-related reports and plans—there has been no effort to consolidate the various federal agencies’ results into an overall performance measurement system, as envisioned by the International Crime Control Strategy. Another performance measurement mechanism applicable to international crime involves focusing on selected types of crimes. That is, for a few types of international crimes, the government has developed separate strategies that include measures of results and effectiveness. The most notable such strategy is the National Drug Control Strategy, which identifies goals, objectives, and performance indicators to measure the effectiveness of the nation’s war on drugs. Similar national strategies have been developed for money laundering and counterterrorism. These national strategies—although focused on specific types of crimes—are nonetheless similar to the International Crime Control Strategy in that challenges are presented in developing goals, objectives, and indicators that adequately measure results and effectiveness. (See app. VII.) We believe it is appropriate that the new administration is currently reviewing the existing framework for addressing international crime and considering options for top-level coordination mechanisms. But, it is also important for systems to be in place to ensure that crosscutting goals are consistent, program efforts are mutually reinforcing—and, where appropriate, common or complementary performance measures are used as a basis for results-oriented management. In past reports, we have noted instances across a wide range of federal programs where a lack of executive-level coordination has led to inefficient and/or ineffective programs, including those to combat specific types of international crime such as terrorism. 
Generally, at the field or operational levels in relation to specific types or aspects of international crimes, a wide range of inter- and intra-agency coordination activities arguably are being carried out routinely. However, these activities cannot take the place of top-level leadership in setting and implementing an overall strategy to ensure that priorities are being established, federal goals and objectives are being met, and governmentwide performance is being measured. International crime is a complex and multifaceted issue of great national importance. Accordingly, the U.S. response to international crime involves a wide variety of federal entities spending a significant amount of time and money. We recognize that individual federal entities have developed strategies to address a variety of international crime issues. We also recognize that for some crimes, integrated mechanisms exist to coordinate efforts across agencies, and that, at the operational level, law enforcement and other personnel are working across agencies. However, we believe that without an up-to-date and integrated strategy and sustained top-level leadership to implement and monitor it, the risk is high that scarce resources will be wasted, overall effectiveness will be limited or not known, and accountability will not be ensured. Accordingly, we note that the establishment of the PCC for International Organized Crime in April 2001 is a step in the right direction and—on the basis of what is known about its role and priorities—appears to address some of the coordination and related issues discussed in this report, such as providing oversight of international crime issues. 
Recognizing the establishment of the PCC for International Organized Crime and its intended responsibilities and priorities, we recommend that the Assistant to the President for National Security Affairs take appropriate action to ensure that this PCC provides sustained and centralized oversight of the extensive and crosscutting federal effort to combat international crime. Consistent with the coordination and related issues we have discussed in this report, we recommend that as the responsibilities of the PCC are defined, they include systematically updating the existing governmentwide international crime threat assessment to maintain a thorough understanding of credible existing and emerging threats; updating the International Crime Control Strategy, or developing a successor—to include prioritized goals and implementing objectives—as appropriate to reflect changes in the threat; designating responsibility for executing the strategy and resolving any jurisdictional issues; identifying and aligning the necessary resources with the strategy’s execution; developing outcome-oriented performance measures linked to the strategy’s goals and objectives to track and assess progress, identify emerging challenges, and establish overall accountability; and periodically reporting the strategy’s results to the President and the Congress. We requested comments on a draft of this report from the Assistant to the President for National Security Affairs, the Attorney General, the Secretaries of State and the Treasury, and the Administrator of USAID. In response, we received comments from NSC’s National Coordinator for Security, Infrastructure Protection and Counter-Terrorism; Justice’s Acting Assistant Attorney General for Administration; State’s Acting Chief Financial Officer; and USAID’s Acting Assistant Administrator, Bureau of Management. The comments are reprinted in appendixes VIII through XI and discussed briefly in the next sections. 
In addition to their comments, NSC, Justice, State, and USAID provided technical comments that are incorporated in this report where appropriate. Treasury did not submit written comments but provided technical comments. NSC generally concurred with the thrust of the report’s recommendation, indicating that the coordination of the federal government’s efforts to combat international crime should be improved further; that NSC is the logical choice to provide enhanced coordination and policy direction at the most senior levels of government; and that comprehensive measures should be developed to assess the effectiveness of international crime control programs and form an iterative cycle of regular threat, strategy, and program reviews. NSC also indicated that the PCC for International Organized Crime would consider our recommendation as it reviews the International Crime Control Strategy and works to enhance the government’s approach to fighting international crime. At the same time, NSC expressed concern that the report did not adequately reflect a number of initiatives it led—including the establishment of the Special Coordination Group and the development of the International Crime Control Strategy—that were aimed at a more integrated U.S. government approach to fighting international crime. Furthermore, NSC indicated that the report overstated the Council’s proper role in international crime control efforts. In this regard, NSC said that senior-level interagency coordination by NSC and its formal committee structure is only part of the picture and that the overwhelming majority of coordination at that level— as well as at the operational level—occurs without any involvement by the Council. 
Regarding NSC’s comment about the report not adequately reflecting the initiatives undertaken to integrate the government’s response to international crime, we believe that the report—in keeping with its intent to provide overview-level information on the subject—adequately identifies and describes, in a framework context, key components of the response. These include PDD-42, the International Crime Control Strategy, the International Crime Threat Assessment, and the now-defunct Special Coordination Group. Regarding NSC’s comment about its role in international crime control efforts, the report recognizes that extensive day-to-day coordination does occur at the operational and executive levels. The report’s discussion of NSC’s role in coordinating efforts to combat international crime centers on the delineation of that role in, among other documents, NSPD-1 and the memorandum establishing the PCC for International Organized Crime. For example, NSPD-1 states that the various PCCs shall be the main day-to-day mechanisms for senior interagency coordination of national security policy issues, of which international crime control is one. Justice agreed with the report’s concluding observations regarding, among other things, the executive branch’s need to prioritize its response to the increasing threat from international crime. However, Justice expressed what it characterized as “serious reservations” about the report’s discussion and recommendation concerning interagency coordination of the federal response to international crime. Specifically, Justice believed that the report understated the extent of interagency coordination that has occurred in the past, especially at the lower levels where, according to Justice, law enforcement coordination has often led to successful international criminal investigations and prosecutions. 
Justice also said that the report's recommendation for high-level coordination overestimated the importance of the Special Coordination Group and its sub-groups. Justice indicated that while high-level interagency coordination may be useful for general policy matters, such coordination is generally not appropriate for particular criminal investigations. Finally, Justice said that the report did not give proper recognition to what it characterized as the Attorney General’s “central role” in addressing international crime, especially in determining whether, and under what circumstances, to prosecute international criminal conduct. Consequently, according to Justice, the report’s recommendation appears to be an intrusion into the “traditional law enforcement responsibilities of the Attorney General.” Regarding Justice’s statement that the extent of interagency coordination is understated, the report is not intended to be an exhaustive representation of the federal response to international crime and the coordination of this response. Rather, the report describes the various means through which coordination occurs—especially at the operational level—and presents illustrative examples, provided by a variety of federal law enforcement and other agencies, without reaching any conclusions about the effectiveness of coordination at this level. Regarding Justice’s statement that the importance of the Special Coordination Group and its sub-groups is overestimated, the report discusses the roles and responsibilities of the Group as envisioned by and delineated in PDD-42 and the International Crime Control Strategy. According to these documents—which form the framework of the federal response to international crime—the Group was intended as the high-level mechanism to ensure an integrated and sustained focus on the federal response to international crime. 
Regarding Justice’s statement that the report does not recognize the Attorney General’s central role in combating international crime and that its recommendation appears to intrude on his law enforcement responsibilities, we offer two points in response. First, the report, reflecting a consensus view, describes Justice’s role in combating international crime as “significant” and accordingly provides a detailed description of the relevant responsibilities and programs of its various components. Second, building upon a mechanism already put in place by NSC, the recommendation seeks to enhance executive-level coordination and oversight of the large-scale federal effort to combat international crime. The recommendation’s specific components—which focus on strategic-level matters—are not intended to delve into operational-level matters, such as decisions to prosecute specific instances of international criminal conduct. State indicated that it agreed with the basic premise and recommendation of the report. It further indicated that centrally led coordination—focusing on general policy rather than particular criminal matters and issues—can be useful in sorting out and better delineating the many overlapping responsibilities of federal law enforcement agencies and avoiding duplications and gaps in anticrime programs that can waste limited resources and reduce program effectiveness. State did note that since some activities discussed in the report, such as nonproliferation and counterterrorism, involve broader political and national security issues that extend beyond international crime, they should remain under the jurisdiction of the appropriate PCC, such as the one for Nonproliferation, Counter-proliferation, and Homeland Defense. 
In this regard, we acknowledge this distinction and, to the extent that such activities also continue to be considered part of the broader context of international crime, defer to the Assistant to the President for National Security Affairs to determine the appropriate PCC jurisdiction for activities such as nonproliferation and counterterrorism. USAID submitted a letter with technical clarifications, which we included in the report where appropriate. As indicated earlier, the Department of the Treasury had no written comments on a draft of this report. However, Treasury entities provided technical comments, which we incorporated in this report where appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committees on Appropriations, Armed Services, Finance, Foreign Relations, Governmental Affairs, and the Judiciary; and to the Chairmen and Ranking Minority Members of the House Committees on Appropriations, Armed Services, Government Reform, International Relations, the Judiciary, and Ways and Means. In addition, we will send copies to the Assistant to the President for National Security Affairs, the Attorney General, the Secretary of the Treasury, the Secretary of State, and the Administrator of USAID. We will also make copies available to others on request. If you have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or Danny R. Burton at (214) 777-5600. Other key contributors are acknowledged in appendix XII. Our objectives in this review were to develop overview information on the following topics: The U.S. framework for addressing international crime. The extent of international crime.
Selected federal entities’ roles in responding to international crime and issues related to the coordination of the response. U.S. efforts to combat public corruption internationally. U.S. programs for providing technical assistance to other nations to combat international crime. Issues related to measures of the effectiveness of U.S. efforts to combat international crime. As agreed with the requester’s office, given the number and potential breadth of the topics—and the time frames for conducting our review—we focused on developing overview information rather than analyzing each topic in depth. Also, while we tried to identify and contact as many relevant federal agencies as possible, most of our interactions were with officials in the National Security Council (NSC); the Departments of Justice, Treasury, and State; the U.S. Agency for International Development (USAID); and their relevant components. As such, the information contained in this report does not represent the full extent of the federal government’s response to international crime. Nor does the information represent the full response to international crime by NSC, Justice, Treasury, State, and USAID—in that some of the information is based on examples, rather than an exhaustive listing of all relevant activities or programs. To obtain background information and other contextual perspectives, we relied to a considerable extent on publicly available information—such as published reports or studies—and we also used the Internet to access information on the Web sites of various federal and other relevant entities. To obtain additional information about federal entities’ roles and responsibilities for international crime, we also submitted a data collection instrument to Justice, Treasury, State, and USAID, and we submitted written questions to NSC. As agreed with the requester’s office, our work did not include reviewing any classified documents. 
In addition, we did not independently verify or evaluate the information we obtained, including strategies, threat assessments, international crime control initiatives, and assistance program descriptions. The following sections present more information about our scope and methodology for each of the six topics noted earlier. Regarding the U.S. framework for addressing international crime, we focused on key documents, such as Presidential Decision Directive 42 (PDD-42)—issued in October 1995 to authorize development of an effective U.S. response to international crime—and the International Crime Control Strategy (May 1998), an integral part of the response that was formulated with input from multiple law enforcement agencies. For this report, we have defined “international crime” consistent with the International Crime Control Strategy, the “roadmap” document for federal law enforcement efforts. The strategy uses the term “international crime” to describe criminal conduct that transcends national borders and threatens U.S. interests in three broad, interrelated categories: threats to Americans and their communities, threats to American businesses and financial institutions, and threats to global security and stability. Using this characterization, the strategy (and the subsequent International Crime Threat Assessment prepared pursuant to the strategy) designates the following as the major international crimes from the U.S.
perspective: corruption; terrorism; drug trafficking; illegal immigration and alien smuggling; trafficking in women and children; environmental crimes (including flora and fauna trafficking); sanctions violations; illicit technology transfers and smuggling of materials for weapons of mass destruction; arms trafficking; trafficking in precious gems; piracy; non-drug contraband smuggling; intellectual property rights violations; foreign economic espionage; foreign corrupt business practices; counterfeiting; financial fraud (including advance fee scams and credit card fraud); high-tech crime; and money laundering. We discussed the International Crime Control Strategy’s development—and its continuing significance and use—with officials from various federal agencies, including NSC; the Departments of Justice, Treasury, and State; USAID; and their components. We also obtained and reviewed other key documents that address aspects of the federal government’s response to international crime, including the proposed “International Crime Control Act of 1998” (S. 2303), the International Crime Threat Assessment, and the United Nations Convention Against Transnational Organized Crime (and supplementary protocols). To obtain information concerning the extent of international crime, we conducted a literature search and interviewed officials of various federal law enforcement agencies, including the U.S. National Central Bureau of the International Criminal Police Organization (INTERPOL). We also summarized data from the International Crime Threat Assessment (Dec. 2000), which was prepared by a U.S. government interagency working group with membership from various federal law enforcement agencies, as well as the Central Intelligence Agency and NSC. Furthermore, we reviewed relevant documents from other sources—including the National Intelligence Council, the United States Commission on National Security/21st Century, and the United Nations.
Regarding selected federal entities’ roles and coordination, we focused on identifying and contacting the federal entities responsible for implementing the basic “framework” document mentioned previously—that is, the International Crime Control Strategy (May 1998). However, as also mentioned previously, most of our interactions were with officials in Justice, Treasury, State, and USAID—and their relevant components. Also, we contacted NSC to discuss the role of the Special Coordination Group on International Crime—a group whose members included high-level officials from, among others, Justice, Treasury, and State. To obtain additional information about federal agencies’ roles and responsibilities for international crime, we submitted a data collection instrument to Justice, Treasury, State, and USAID. Generally, we designed the instrument to request information about threat assessments, budgets and staffing, areas of responsibility and authority, interagency and intergovernmental coordination, performance measures, and foreign technical assistance. We also met with cognizant officials at these entities to discuss these issues. Furthermore, we submitted questions to NSC concerning the agency’s roles and responsibilities. However, because the issue of international crime and the framework for the U.S. response were still under review by the new administration, NSC officials declined to respond to our questions. In reference to combating corruption, the International Crime Control Strategy presents two related objectives: Establish international standards, goals, and objectives to combat international crime, including corruption and bribery. Strengthen the rule of law as the foundation for democratic and free markets in order to reduce societies’ vulnerability to criminal exploitation.
We contacted officials at Justice, Treasury, State, and USAID to identify and discuss (1) the major obstacles or challenges in implementing these objectives and (2) what actions were being taken or planned to address these obstacles or challenges. Furthermore, we reviewed testimony presented at a July 1999 hearing before the Commission on Security and Cooperation in Europe and reviewed relevant information from Transparency International, a leading nongovernmental organization that addresses corruption issues. We analyzed national and international documents on corruption, bribery, and the rule of law—including the Foreign Corrupt Practices Act and documents associated with (1) the Organization for Economic Cooperation and Development Convention on Combating Bribery of Foreign Public Officials in International Business Transactions, (2) the First Global Forum on Fighting Corruption, and (3) the Council of Europe’s Criminal Law Convention Against Corruption and the attendant Group of States Against Corruption. Also, from several of our recent reports, we summarized data about U.S. rule of law worldwide funding and federal entities involved in rule of law assistance programs. Regarding U.S. technical assistance to other nations to combat international crime, we contacted officials at Justice, Treasury, State, and USAID to identify and discuss relevant training programs and other forms of assistance, such as access to (1) automated criminal history records, (2) other computerized information systems, or (3) forensic or other laboratories. However, we did not visit any field or operational sites to observe training or other assistance programs. Nor did we contact any recipient nations to obtain the views of foreign government officials or law enforcement officers. The International Crime Control Strategy (May 1998) called for the establishment of a performance measurement system for monitoring progress in meeting the strategy’s goals and objectives. 
To determine the extent to which such a system had been established and was being used— and, if applicable, to identify and discuss other relevant performance measures—we contacted federal officials at NSC, Justice, Treasury, and State. Also, to identify alternative approaches used for measuring the results of international crime control efforts, we reviewed department and agency strategic and performance plans that were prepared pursuant to the Government Performance and Results Act. That is, we reviewed these plans to determine whether and to what extent they contained performance measures for monitoring international crime control efforts. Furthermore, we reviewed national strategies and related documents for three specific international crimes to determine whether and to what extent they contained performance measures. These crime-specific strategies were the National Drug Control Strategy, the National Money Laundering Strategy, and the Five-Year Interagency Counter-Terrorism and Technology Crime Plan. The U.S. government’s framework for addressing international crime is based on various initiatives involving President-directed federal law enforcement interagency actions, a proposal for additional statutory authority, and efforts to increase international cooperation: In October 1995, recognizing that international crime presented a direct and immediate threat to national security, Presidential Decision Directive 42 (PDD-42) was issued to authorize the development of an effective U.S. response. Also in October 1995, in a speech at the United Nations (UN), the President called for increased international cooperation to fight various aspects of international crime. In May 1998, the President announced the U.S. government’s International Crime Control Strategy, which was formulated with input from multiple law enforcement agencies and was intended to serve as a roadmap for a coordinated, long-term attack on international crime. 
Also in May 1998, the White House announced proposed legislation that was intended to help implement the strategy. In July 1998, Senator Patrick J. Leahy introduced the proposed legislation—the International Crime Control Act of 1998 (S. 2303)—in the 105th Congress. In 1999 and in 2000, as part of the International Crime Control Strategy, a U.S. government interagency working group prepared and issued assessments of the threat posed by international crime. In December 2000, the United States and many other countries signed the United Nations Convention on Transnational Organized Crime, along with supplementary protocols on migrant smuggling and trafficking in persons. On October 21, 1995, President Clinton issued PDD-42 to initiate certain federal efforts to counter international crime. The general purpose of PDD-42—as stated in the Foreign Narcotics Kingpin Designation Act, P.L. 106-120, title VIII, section 802—was to order executive branch agencies to take the following actions: Increase the priority and resources devoted to addressing the threat that international crime presents to national security. Work more closely with other governments to develop a global response to the threat of international crime. Use aggressively and creatively all legal means available to combat international crime. Specifically, PDD-42 required various agencies, including Justice, Treasury, and State, to integrate their efforts against international crime syndicates and money laundering. PDD-42 also established interagency working groups to address aspects of international crime control—such as efforts to reduce money laundering by strengthening international cooperation with critical nations. Subsequently, according to a State Department official, to help implement PDD-42, the National Security Council (NSC) asked the Departments of Justice, Treasury, and State to take the lead in developing a comprehensive national strategy to attack international crime.
According to a senior Justice official we interviewed during our review, the President’s 1995 UN speech—which was delivered the day after PDD-42 was issued—can also be considered reflective of the U.S. framework for addressing international crime. Specifically, on October 22, 1995, in a speech before the UN General Assembly to mark the organization’s 50th anniversary, the President called for cooperation in “fighting the increasingly interconnected groups that traffic in terror, organized crime, drug smuggling and the spread of weapons of mass destruction.” The President indicated, for example, that nations needed to work together to negotiate and endorse a “no sanctuary pledge” to ensure that organized criminals, terrorists, and drug traffickers and smugglers have nowhere to run or hide. Also, in his UN speech, the President enumerated several steps that the United States was taking to address international crime. For instance, the President noted that he had directed applicable U.S. government agencies to identify and work (using sanctions, if appropriate) with those nations that needed to bring their banks and financial systems into conformity with international anti-money-laundering standards and identify the front companies (and freeze their assets) of the Cali Cartel, the largest drug ring in the world. Also, the President said that he had instructed Justice to “prepare legislation to provide our other agencies with the tools they need to respond to organized criminal activity.” The resulting proposed legislation—the International Crime Control Act of 1998 (S. 2303)—is discussed below. Developed with input from multiple federal law enforcement agencies, the U.S. government’s International Crime Control Strategy was released in May 1998. As summarized in table 1, the strategy consisted of 8 overarching goals and 30 implementing objectives.
It should be noted that according to the federal officials we interviewed, the strategy—and its goals and objectives—is intended to supplement and not supplant related strategies, such as the National Drug Control Strategy. The International Crime Control Strategy stated that its goals and objectives were dynamic and would evolve over time as conditions changed, new crime trends emerged, and improved anticrime techniques were developed. However, the strategy has not been updated since its inception in 1998, even though threat assessments (discussed below) were conducted in 1999 and 2000. The International Crime Control Strategy was intended to build on and complement existing national security and crime control strategies, such as the National Security Strategy and the National Drug Control Strategy. These strategies are required to be updated periodically to reflect changes in the threat posed to the national security and other interests of the United States (see P.L. 105-277, Title VII, section 706(b); and P.L. 99-433, section 603). Our previous work has shown that the development of a national strategy to address a specific threat, such as terrorism, first requires a thorough understanding of the threat. This understanding can be obtained, in turn, by conducting threat and risk assessments. In May 1998, concurrent with the release of the International Crime Control Strategy, the White House announced a legislative proposal to help implement objectives in the strategy. In July 1998, Senator Patrick J. Leahy introduced the proposed legislation—the “International Crime Control Act of 1998” (S. 2303) in the 105th Congress. According to the White House, S. 
2303 contained statutory provisions intended to “close gaps in current federal law, criminalize additional types of harmful activities, and promote a strengthening of both domestic and foreign criminal justice systems to respond to the new challenges posed by crime that crosses international boundaries.” Although not enacted by the 105th Congress, the proposed legislation contained provisions to establish jurisdiction in the United States over violent acts committed abroad against state and local officials while engaged in official federal business; authorize U.S. Customs Service officers to search international, outbound sealed mail if there is reasonable cause to suspect that the mail contains monetary instruments, drugs, weapons of mass destruction, or merchandise mailed in violation of enumerated U.S. statutes, including obscenity and export control laws; strengthen immigration laws to exclude international criminals from the United States; expand the list of money laundering predicate crimes to include certain serious foreign crimes, such as violent crimes and bribery of public officials; address the problem of alien smuggling by authorizing the forfeiture of its proceeds; provide extraterritorial jurisdiction for fraud involving access devices such as credit cards; expand the authority of the Treasury and Justice departments to transfer the forfeited assets of international criminals to eligible foreign countries that participated in the seizure or forfeiture of the assets; provide new authority, in cases where there is no applicable mutual legal assistance treaty provision, to transfer a person in U.S. government custody to a requesting country temporarily for purposes of testifying in a criminal proceeding, if both the foreign country and the witness consent; and establish a hearsay exception to admit certain foreign government records into evidence in U.S. civil proceedings. In 1999 and in 2000, as part of the International Crime Control Strategy, a U.S. 
government interagency working group prepared assessments of the threat posed by international crime. According to NSC and State officials, the first assessment—prepared in 1999—was a classified document and was not available to the public. An unclassified version of the second assessment was publicly released—International Crime Threat Assessment, December 2000. This document consists of the following five chapters: Chapter I addresses the global context of international crime, identifying the factors that have contributed to the growing problem of international crime. Chapter II gives an overview of specific international crimes affecting U.S. interests. Chapter III addresses worldwide areas of international criminal activity, especially as source areas for specific crimes and bases of operations for international criminal organizations. Chapter IV addresses the consequences of international crime for U.S. strategic interests, including the ability to work cooperatively with foreign governments and the problem of criminal safehavens, failed states, and kleptocracies. Chapter V gives a perspective on future developments anticipated in international crime. In December 2000, the United States and over 120 other countries signed the UN Convention Against Transnational Organized Crime (including two supplementary protocols). Before it comes into force, however, the Convention must be ratified by at least 40 countries. The main purpose of the Convention and its protocols is to enable the international community to better combat organized crime by harmonizing nations’ criminal laws and promoting increased cooperation. For example, nations that sign and ratify the Convention would be required to establish in their domestic laws four criminal offenses—participation in an organized criminal group, money laundering, corruption, and obstruction of justice. 
“While globalization has brought progress and expanded economic opportunities to the world, an unfortunate consequence of globalization is transnational crime. … We must match the increasingly sophisticated means that organized criminal groups have found to exploit globalization if we are to win this battle. In particular it takes international agreements that are global to fight crime that is global. “The Transnational Organized Crime Convention and its supplementary protocols include several common themes that characterize successful global agreements. Perhaps most important, they establish global standards that all countries must meet, and then provide for flexibility in the manner in which they meet them. For example, the Convention and Protocols define—for the first time in binding international agreements—organized crime, migrant smuggling and trafficking in persons; and they require all parties to criminalize this defined conduct under their domestic law. But they permit individual countries to tailor the manner in which they implement their obligations to the particular needs of their system. For example, the Convention recognizes that different countries have different approaches to the crime that we in the United States label as conspiracy. “The international norms established by this Convention and its protocols lead to another common theme of successful global treaties—namely, they facilitate increased cooperation among governments, in this case law enforcement officials. Having accepted definitions of organized crime, migrant smuggling, and trafficking in persons makes international collaboration on these subjects easier. The Convention and Protocols build on these definitions, by including numerous mechanisms for cooperation. 
For example, rather than going through the time-consuming and expensive process of negotiating bilateral agreements, countries will be able to rely on these treaties for extradition and mutual legal assistance. “We have taken the first steps together, and now we must bring these instruments to life as meaningful tools in our fight against transnational organized crime.” According to an NSC official, the issue of international crime and the framework of the U.S. response are currently under review by the Bush administration. The official could not estimate when the review would be completed. In the meantime, according to this official, the framework for the response—established primarily by PDD-42 and the crime control strategy—is still in effect, pending the outcome of the review. In this regard, it should be noted that, in February 2001, National Security Presidential Directive 1 (NSPD-1) was issued to reorganize the structure of NSC. NSPD-1 abolished the then-existing system of interagency groups but did not indicate which one, if any, of the 17 newly established policy coordination committees would coordinate the issue of international crime and the U.S. response. In April 2001, the Assistant to the President for National Security Affairs established a multiagency Policy Coordination Committee on International Organized Crime (PCC) to be chaired by NSC to provide oversight of the federal response to international crime. According to an NSC official, one of the PCC’s first priorities—as part of the administration’s ongoing review—is to evaluate the International Crime Control Strategy to reflect any changes in the threat from international crime as described in the December 2000 threat assessment. The official did not provide a time frame for completion of the evaluation. 
According to law enforcement and intelligence officials, researchers, and others, the extent of international crime has been growing since the early 1990s —a growth fueled by a number of factors, including the end of the Cold War and increased globalization of commerce and trade and financial and communications technology. Criminal organizations have been able to exploit these developments to their advantage to further illicit activities and execute financial transactions related to these activities. While there is general consensus that international crime is growing, there is also agreement that measuring the true extent of such crime is difficult. This is mainly because of the clandestine nature of criminal activity and the fact that criminals are not likely to self-report their activity. Nevertheless, a number of efforts have attempted to gauge and describe the threat posed by international crime to the United States and other countries. These efforts rely primarily on estimates of international crime activities as developed and reported by, among others, law enforcement entities, business groups, and researchers. In December 2000, as called for by the International Crime Control Strategy, the U.S. government released an International Crime Threat Assessment. The assessment was developed by an interagency working group and provided various indicators or measures of international crime within five broad categories. While the assessment did not address the crimes in any priority order to indicate severity of the threat to U.S. interests, the categories were (1) terrorism and drug trafficking; (2) illegal immigration, trafficking of women and children, and environmental crimes; (3) illicit transfer or trafficking of products across international borders; (4) economic trade crimes; and (5) financial crimes. Furthermore, within each of the five broad categories, the threats posed by specific types of crimes were discussed. 
For example, within the financial crimes category—as shown in table 2, which summarizes the threat assessment— worldwide money laundering was estimated to be as much as $1 trillion per year, with $300 billion to $500 billion of that representing laundering related to drug trafficking. The assessment acknowledged, however, that there is little analytical work supporting most estimates of money laundering. Based on our interviews with NSC and State officials, it is not clear whether the threat assessment will continue to be periodically updated— as part of an iterative process—and used to systematically measure trends and identify new threats posed by various types of international crime. According to an NSC official, the matter of updating the threat assessment is being considered as part of the Bush administration’s ongoing review of the federal response to international crime. Our prior work shows that because threats to national security are dynamic and countermeasures may become outdated, it is generally sound practice to periodically reassess such threats. Our work has also pointed out that national-level threat assessments—and accompanying risk assessments that attempt to determine the likelihood of a threat occurring—are decision-making support tools that are used to establish requirements, develop strategies, and prioritize program investments to help focus national efforts on achieving results. As indicated earlier, the December 2000 threat assessment did not prioritize the types of international crimes it discussed in terms of the severity of threat they posed to U.S. interests. In responding to our survey, a number of federal law enforcement officials indicated that their agencies do not use the December 2000 threat assessment. The agencies have, instead, developed their own threat assessments based on information obtained through their own intelligence. 
Examples of agency assessments include the following: Annual assessments developed by the Immigration and Naturalization Service’s (INS) District Offices that focus on activities such as alien smuggling. Country-specific corruption assessments prepared for the U.S. Agency for International Development (USAID) by a private firm. For instance, a March/April 2000 assessment on Nigeria concluded that corruption was pervasive in the private and public sectors and had become woven into the fabric of that country’s society. A forthcoming operational assessment represents a joint effort—among the Bureau of Alcohol, Tobacco and Firearms (ATF), the U.S. Customs Service, and the Canadian government—to determine the nature, size, and scope of the legal and illegal tobacco trade and the involvement of organized crime in this trade. According to ATF, this initiative is intended to identify emerging trends, threats to the legal tobacco trade and government revenues, and obstacles to effective enforcement. In responding to our inquiries, several federal law enforcement and other officials identified a number of challenges in accurately and reliably determining the extent and impact of international crime. These challenges included (1) the reluctance among agencies to share information; (2) insufficient human resources deployed in foreign countries to gather information; (3) the accuracy of information supplied by some countries; (4) the clandestine and consensual nature of criminal activity (e.g., public corruption); (5) the use of sophisticated technology by criminals to avoid detection; and (6) the absence of a single designated entity to act as the lead or coordination authority on information/intelligence matters. A number of other sources have attempted to assess and quantify the threat posed by international crime. 
For example, the President’s December 1999 National Security Strategy For a New Century identified international crime—such as terrorism and drug trafficking—as a threat to U.S. interests. The strategy outlined a number of actions, including the deployment of interagency teams to respond to terrorist incidents, designed to counter such crime. Also, a December 2000 report by the National Intelligence Council (NIC)—titled Global Trends 2015: A Dialogue About the Future With Nongovernment Experts—concluded that between now and 2015, one of the three main challenges facing countries would be to combat criminal networks and their growing reach. The report noted that criminal organizations would become increasingly adept at exploiting the global diffusion of information, as well as financial and transportation networks. As an example of criminal activity, the report estimated that corruption costs about $500 billion annually—the equivalent of about 1 percent of global gross national product—in slower growth, reduced foreign investment, and lower profits. The April 2000 Phase II Report on a U.S. National Security Strategy for the 21st Century, issued by the United States Commission on National Security/21st Century, noted that international criminality—such as terrorism and drug trafficking—affected the global environment in which the United States acted. The report concluded that it was in the significant interest of the United States that international criminality be minimized. A 1999 UN report—Global Report on Crime and Justice—estimated the extent of a variety of international crimes, such as the theft of art and antiquities ($4.5 billion to $6 billion annually) and theft of intellectual property, such as software ($7.5 billion annually). In a related matter, the UN has initiated a 5-year project (Sept. 1999 to Aug. 2004) to assess the activities of organized crime groups worldwide and the level of danger that these groups pose to society. 
Congressional testimony by various intelligence and law enforcement officials has also highlighted the threat posed by international crime. For example, in a February 2001 statement on the worldwide threat before the Senate Select Committee on Intelligence, the Director of Central Intelligence stressed that terrorism and drug trafficking, among other things, posed a real, immediate, and evolving threat to the United States. The Director also added that these two threats were intertwined since, in some instances, profits from drug trafficking funded terrorist operations. Testimony in April 1998 by the Director of the Federal Bureau of Investigation before the Senate Appropriations Subcommittee on Foreign Operations indicated that international crime posed an immediate and increasing concern for the United States and the worldwide law enforcement community. Furthermore, at a March 2000 hearing before the Commission on Security and Cooperation in Europe (CSCE) on the impact of organized crime and corruption on democratic and economic reform, several witnesses commented, among other things, that organized crime and corruption were significant threats to the political, economic, and social stability of countries in Southeast Europe and Central Asia. In response to our inquiry, as shown in table 3, the National Security Council (NSC) identified 34 federal entities—including cabinet-level departments and their components, and independent agencies—that it considered as having significant roles in combating international crime. NSC cautioned that its compilation of federal entities was not intended to be exhaustive. Given the large number of federal entities with a role in international crime (detailed in table 3), as agreed with the requester, this appendix presents an overview of the role of selected federal entities in responding to international crime and the coordination of the response. 
The specific federal entities are the Departments of Justice, Treasury, and State; USAID; and their respective components. NSC—as directed by Presidential Decision Directive 42 (PDD-42), discussed in appendix II—is to serve as the overall coordinator of the federal response to international crime. Because the focus of our work was limited to these particular entities, the information in this appendix does not reflect the full extent of the federal response. However, this appendix presents a number of examples to illustrate the federal response to specific types of international criminal activity (such as terrorism) and at particular physical locations (such as ports of entry). Department of Justice components that have roles in addressing international crime include the Criminal Division, FBI, DEA, INS, the U.S. National Central Bureau of the International Criminal Police Organization (USNCB/INTERPOL), and the U.S. Marshals Service. Justice’s Criminal Division is responsible for developing, enforcing, and supervising the application of all federal criminal laws except for those specifically assigned to other divisions. Fourteen offices or sections within the Criminal Division have responsibilities for international crime or other related activities, as table 4 indicates. All sensitive federal international criminal matters are coordinated through the Criminal Division. According to a Criminal Division Deputy Assistant Attorney General, responding to international crime is an increasingly critical responsibility for the Criminal Division. In this regard, in a 1999 speech, the then Assistant Attorney General stated that well over half his time was devoted to issues and cases that have foreign policy and national security implications. Furthermore, according to the Deputy Assistant Attorney General, while precise estimates are difficult, over the past few years, about 40 to 50 percent of the Division’s workload has been associated with international crime matters. 
Examples of the Criminal Division’s workload related to international crime include prosecuting cases involving international crime—such as organized crime, drug trafficking, money laundering, and international terrorism—often in cooperation with U.S. Attorneys’ Offices; negotiating—in cooperation with State and other departments—and implementing bilateral and multilateral treaties with other countries, such as agreements for mutual legal assistance and maritime boarding agreements, and the recent United Nations Convention against Transnational Organized Crime; and providing training and other technical assistance to the law enforcement and justice sectors of foreign countries. The FBI is Justice’s principal investigative arm and is charged with investigating all violations of federal law, except for those assigned by statute to another agency. According to the FBI Director, the Bureau’s response to international crime consists of three key elements— maintaining an active overseas presence, training foreign law enforcement officers, and facilitating institution building. Within this context, the FBI identified five of its components as having roles in responding to international crime. These components are (1) Criminal Investigative Division, (2) International Training Assistance Units, (3) National Infrastructure Protection Center (NIPC), (4) International Operations Section, and (5) the International Terrorism Operations Section. Examples of the international crime initiatives undertaken by FBI components include the following: Project “Millennium.” The FBI and law enforcement agencies from 23 other countries have provided INTERPOL with the names and profiles of thousands of subjects involved in Eurasian organized crime in order to establish a worldwide database. The database is intended to allow participating countries to cross-reference and coordinate leads involving Russian and Eastern European organized crime members. U.S.-Mexico Fugitive Initiative. 
This initiative—involving the FBI, Justice, and the government of Mexico—is designed to improve procedures for obtaining provisional arrest warrants for fugitives who have fled to the United States from Mexico. Plan Colombia. Under the umbrella of this broad-ranging initiative, the FBI and Justice are assisting Colombia in developing a program to investigate kidnappings. The program includes establishing a Colombian law enforcement task force consisting of specially trained investigators. The task force is intended to work with the FBI when appropriate, such as when cases involve U.S. nationals. Middle Eastern Law Enforcement Training Center. The Center is a joint law enforcement training initiative between the FBI and the Dubai, United Arab Emirates, police department. The Center—funded entirely by the Emirate’s government—is being established to address transnational/cross-border crimes within the Middle East region; according to the FBI, these crimes have an impact on the United States. Working with police officials in the region, the FBI identified a number of crime issues to be addressed by the Center’s training, including corruption, counterterrorism, organized crime, money laundering, drugs, cybercrime, and illegal immigration. DEA is responsible for enforcing the federal drug control laws and is the single point of contact for coordinating international drug investigations for the United States in foreign countries. DEA’s primary responsibilities include investigating major drug traffickers operating at interstate and international levels and working on drug law enforcement programs with its counterparts in foreign countries. According to DEA, targeting international drug trafficking organizations and their direct affiliates is one of its highest priorities. 
In July 1999, we reported on the major enforcement strategies, programs, initiatives, and approaches that DEA implemented in the 1990s to carry out its mission, including efforts to target and investigate national and international drug traffickers. According to DEA, four of its components have roles in responding to international crime: (1) the Office of International Operations; (2) the Office of Domestic Operations; (3) the Financial Operations Section, which deals with money laundering; and (4) the Office of Training, which trains drug enforcement officials in other countries. These components are involved in implementing provisions of the National Drug Control Strategy. These provisions entail, among other things, the implementation of interdiction and international programs. For example, DEA participates in the Southwest Border Initiative—a cooperative law enforcement effort—to combat Mexico-based drug trafficking along the U.S.-Mexico border. Internationally, DEA is involved in counternarcotics efforts with the governments of Bolivia, Colombia, Peru, and Thailand, among others. INS is charged with the administration and enforcement of U.S. immigration laws, including facilitating entry of those legally admissible into the United States and deterring the entry of those seeking to enter illegally. According to INS, four components within its Office of Field Operations have roles in responding to international crime. These components are (1) the Office of International Affairs; (2) the Office of Intelligence; (3) the Investigations Division, including the Smuggling and Criminal Organizations Branch, the INS component of the Organized Crime Drug Enforcement Task Force, the National Security Unit, and the Fraud Section; and (4) the Border Patrol. Examples of the international crime initiatives undertaken by these components include the following: National Border Patrol Strategic Plan. 
In effect since 1994, this is the Border Patrol’s attempt to deter illegal entries into the United States between ports of entry. Southeast European Cooperative Initiative. This is an interagency initiative to assist Southeastern European countries with, among other things, combating cross-border crime as it relates to alien smuggling. Nigerian Crime Initiative. This interagency initiative is intended to ensure the sharing of intelligence, provide training on Nigerian criminal enterprises, and remove Nigerian criminal aliens from the United States. Operation “Crossroads.” This is an interdiction operation being conducted along the Southwest Border in the Arizona Corridor (the area between Phoenix and Tucson, Arizona). The operation has branches stretching into Mexico and Central America. The U.S. National Central Bureau (USNCB) of the International Criminal Police Organization—as the U.S. component of the broader INTERPOL network—is intended as a point of contact for American and foreign police seeking assistance in criminal investigations that extend beyond their national boundaries. USNCB’s staff is composed of representatives from various federal law enforcement entities, including Customs, ATF, Marshals Service, and DEA. In addition to providing operational coordination and training at the international, federal, and state level, examples of the services USNCB provides and the projects it is involved in include the following: International Notice Program. USNCB disseminates subject lookouts and advisories through the circulation of INTERPOL notices. The color-coded notices communicate various kinds of criminal information. For example, the Red Notice (International Wanted Notice) informs member countries that a warrant has been issued for a person whose arrest is requested with a view to subsequent extradition. Project “Rockers.” This 28-country INTERPOL project is targeting outlaw motorcycle organizations involved in criminal activities. 
The project’s main objective is to identify the organizations and their membership and to collect information on their criminality for analysis and dissemination to affected countries. The Marshals Service is responsible for, among other things, apprehending federal fugitives and maintaining custody of and transporting federal prisoners. The International Investigations Unit, within the Investigative Services Division, has responsibility over international crime matters. Specifically, according to a Marshals Service official, this unit is responsible for (1) apprehending fugitives (foreign and international), (2) escorting extradited international fugitives back to the United States, and (3) training foreign police officers. For example, according to this official, the Marshals Service trains foreign police officers in investigating and apprehending fugitives. The training is held in the United States and is funded by the State Department. Treasury components that have roles in addressing international crime include the Office of Enforcement; the U.S. Customs Service; ATF; the U.S. Secret Service; IRS Criminal Investigation (IRS-CI); the Federal Law Enforcement Training Center (FLETC); FinCEN; and the Office of Foreign Assets Control (OFAC). Treasury’s Office of Enforcement has responsibility for several functions that relate to international crime control. These functions include coordinating all Treasury law enforcement matters, including formulation of law enforcement policies; providing oversight, monitoring, and/or guidance to Treasury enforcement bureaus—Customs, ATF, Secret Service, IRS-CI, and FLETC—and FinCEN; ensuring cooperation between Treasury law enforcement and other federal departments and agencies; and negotiating international agreements to engage in joint law enforcement operations and exchange financial information and records. 
Within its dual missions of enforcing laws and regulating commercial activities, Customs has significant responsibilities for ensuring that goods and persons enter and exit the United States legally. Within these missions, Customs’ strategic plan identifies specific goals and objectives—such as disrupting the illegal flow of drugs and money—that are linked to international crime. Three of Customs’ principal components have a role in responding to international crime. These components are (1) the Office of Investigations, (2) the Office of International Affairs, and (3) the Office of Field Operations. Within these offices, a number of divisions and other units have roles in responding to international crime. According to Customs, within the Office of Investigations, 10 divisions are actively involved in responding to various types of international crime. The divisions are (1) Investigative Services, (2) Covert Operations, (3) Special Operations, (4) Financial Investigations, (5) Fraud Investigations, (6) Strategic Investigations, (7) CyberSmuggling, (8) Smuggling Investigations, (9) Intelligence, and (10) Air and Marine. Within the Office of International Affairs, three units have a role in responding to international crime. The units are (1) the Operations Division, (2) the Training and Assistance Division, and (3) the Policy and Programs Division. Within the Office of Field Operations, three units have roles in responding to international crime—(1) Outbound Enforcement Team, (2) Anti-Smuggling Division, and (3) Trade Programs. Examples of international crime initiatives undertaken by various Customs components include the following: Industry Partnership Programs. These programs—the Carrier Initiative Program, the Business Anti-Smuggling Coalition, and the Americas Counter Smuggling Initiative (training program)—are designed to deter and prevent narcotics from being smuggled into the United States via commercial cargo and conveyances. 
These programs are also designed to enlist industry support in activities related to narcotics interdiction. Border Coordination Initiative. This initiative is a border management strategy involving Customs, INS, the U.S. Coast Guard, and the U.S. Department of Agriculture. The initiative is intended to increase cooperation among federal entities along the Southwest border of the United States to more efficiently interdict illegal aliens, drugs, and other contraband. The initiative has six core parts, including developing joint port management and community partnership plans. ATF enforces federal laws and regulations relating to firearms, explosives, arson, alcohol, and tobacco. ATF units that have international crime responsibilities are (1) the International Programs Branch, (2) the Alcohol and Tobacco Diversion Branch, and (3) the International Training Branch. ATF has a number of international crime-related responsibilities and initiatives. For example, ATF’s Traffic in Arms Program is an enforcement effort to combat the illegal movement of U.S.-source firearms, explosives, and ammunition in international traffic. Also, ATF traces U.S. alcohol and tobacco products recovered in foreign countries to identify individuals and/or organized crime groups involved in the purchase and smuggling of these items. In this regard, ATF assists foreign countries by assessing their tax systems as they relate to alcohol and tobacco products and educating foreign officials in how such products are regulated in the United States. ATF’s National Tracing Center helps foreign law enforcement trace U.S.-sourced crime firearms. According to ATF, this trace information enables it to identify and target subjects responsible for illegally trafficking firearms in the United States. Furthermore, ATF’s International Response Team is the result of an agreement with the State Department’s Diplomatic Security Service. 
The agreement originally provided for ATF investigative assistance at fire and post-blast scenes on U.S. property abroad, where the Diplomatic Security Service has investigative responsibility. The agreement has since been expanded to include responses in which ATF would provide technical/forensic assistance and oversight in arson and explosives investigations to foreign governments on their territory. Such requests for assistance are to be relayed to ATF through the Department of State, after receiving authorization from the U.S. ambassador of the affected country. The Secret Service carries out two distinct missions: protection and criminal investigations. The investigative mission expanded from enforcement of U.S. counterfeiting statutes to include other financial crimes, such as financial institution fraud, computer fraud, financial identity theft, access device fraud, and computer-based attacks against the nation’s financial, banking, and telecommunications infrastructure. According to the Secret Service, these types of crimes have become increasingly international in nature, given the seamless interaction among monetary and economic systems around the world. Within the Secret Service’s Office of Investigations, the following branches and divisions have roles in combating international crime: (1) International Programs Branch, (2) Financial Crimes Division, (3) Counterfeit Division, (4) Forensic Services Division, and (5) Investigative Support Division. In its strategic plan for fiscal years 2000-2005, the Secret Service established an investigative strategic goal of reducing crimes against the nation’s currency and financial system. The goal comprises four strategic objectives, all of which have a link to international crime: (1) reduce losses from financial crime, (2) reduce transnational financial crime, (3) enhance foreign and domestic partnerships, and (4) support the protective mission. 
To meet these objectives, the Secret Service is engaged in a number of activities. Examples of these activities include implementing the International Currency Audit Plan. Under this plan, the Secret Service—along with representatives from the Federal Reserve Board, the Bureau of Engraving and Printing, and the Federal Reserve Bank of New York—is to study the use of foreign currency abroad and develop estimates of counterfeiting levels outside the United States. Also, through the use of specialized task forces—such as the West African Task Force and the Asian Organized Crime Task Force—the Secret Service is targeting international organized crime groups and the proceeds of their criminal enterprises. IRS-CI’s mission is to investigate violations of the Internal Revenue Code and related financial crimes, such as money laundering, in order to enhance deterrence and compliance with tax laws. According to IRS-CI, tax evasion and money laundering are closely related and can involve similar activities. Money laundering can usually be considered tax evasion in progress because illicit funds are rarely reported on subsequent tax returns. With the globalization of the world economy and financial systems, many of the complex evasion and money laundering schemes are employing international components, such as offshore banks, trusts, and corporations in “tax haven” countries. Although IRS-CI does not have specific jurisdiction over international crimes, the complex evasion and money laundering schemes require it to document evidence of the international movement of funds. According to IRS-CI, its International Strategy complements the overall U.S. strategy to combat the growing trend of international financial crimes. In this regard, IRS-CI participates in the Financial Action Task Force for Money Laundering (FATF). 
IRS-CI assists FATF in the development and implementation of strategies and laws that are intended to deter international financial crimes and enhance compliance with U.S. tax laws. As part of its international strategy, IRS-CI assigns special agents (attaches) in foreign posts that it considers “strategic,” such as Canada, China, Colombia, Germany, and Mexico. The attaches are responsible for, among other things, assisting IRS-CI special agents in gathering and developing foreign evidence related to investigations under IRS-CI’s jurisdiction and training host government personnel on financial investigative techniques. In this regard, as part of Plan Colombia (discussed earlier), IRS-CI is providing financial investigation training to Colombian law enforcement officials and prosecutors. FLETC serves as an interagency law enforcement training organization for more than 70 federal agencies. FLETC also provides services to state, local, and international law enforcement agencies. In its strategic plan, FLETC noted that training must be closely linked to changing law enforcement challenges, issues, and needs. For one area of change—the nature of crime itself—FLETC identified three types of international-related crime that law enforcement training must address: terrorism (both foreign and domestic groups), internet-related crime (including money laundering), and organized crime (including foreign organizations). In an effort to help combat international-related crime, FLETC offers a range of training programs to foreign law enforcement agencies. Most of these programs are offered at FLETC’s training campuses. Some are exportable to user locations or are available at respective International Law Enforcement Academies (ILEA). Under agreement with the Department of State and administered by FLETC’s International Programs Division, this training focuses on the following three areas: Law and democracy. 
Current initiatives under the United States Law and Democracy Program provide technical assistance and training to law enforcement personnel in Russia, Ukraine, and other Eastern European and Central Asian countries. The program funds training to combat white- collar crime, financial and computer crimes, and illegal narcotics trafficking. The program also supports human rights, free market economies, and the building of democratic systems and institutions. Antiterrorism assistance. The antiterrorism training programs conducted by FLETC and funded by the Department of State’s Office of Antiterrorism Assistance provide technical assistance and training to foreign law enforcement in an effort to combat world terrorism. International academies. The ILEAs in Hungary, Thailand, and Botswana offer opportunities for foreign prosecutors, police, and criminal investigators to interact with their U.S. counterparts. U.S. trainers share operational methods, investigative techniques, criminal trends, and current law enforcement issues with foreign law enforcement personnel. While FLETC provides support for the efforts of all of the ILEAs, it has lead responsibility for the Botswana academy and will also be responsible for a fourth academy planned for Central America. FinCEN’s mission is to (1) support law enforcement investigative efforts and foster interagency and global cooperation against domestic and international financial crimes; and (2) provide U.S. policymakers with strategic analyses of domestic and worldwide money-laundering developments, trends, and patterns. Within its overall mission, FinCEN’s strategic plan identifies a number of strategic objectives, including preventing, detecting, and prosecuting money laundering and other financial crimes; and establishing and strengthening mechanisms for the global exchange of information to combat money laundering and other financial crimes. 
Regarding international cooperation, FinCEN is to work closely with other components of the U.S. government and its global partners to counter the threat of transnational crime to financial institutions and governments. FinCEN activities include, for example, the following: Developing Financial Intelligence Units. FinCEN supports the development of Financial Intelligence Units in other nations to help facilitate the exchange of information in support of anti-money laundering investigations. These units—of which FinCEN is one model—have been established in various countries around the world to protect the banking community, detect criminal abuse of the financial system, and ensure adherence to laws against financial crime. Implementing the National Money Laundering Strategy. FinCEN supports Treasury’s initiatives highlighted in the 2000 National Money Laundering Strategy. Among other things, these initiatives include providing training and assistance to nations implementing counter-money laundering measures. FinCEN also plans to expand support of Treasury initiatives concerning (1) efforts to identify those international jurisdictions that pose a money laundering threat to the United States and (2) expertise and analysis related to correspondent banking and offshore financial services. Participating in the Financial Action Task Force. FinCEN supports Treasury’s efforts to promote the adoption of international anti-money laundering standards, such as those of the FATF. Formed by the G-7 Economic Summit of 1989, the FATF is dedicated to promoting the development of effective anti-money laundering controls and enhanced cooperation in counter-money laundering efforts among its membership around the world. Created in 1950, OFAC administers and enforces economic and trade sanctions against targeted foreign countries, terrorism sponsoring organizations, and international narcotics traffickers in accordance with U.S. foreign policy and national security goals. 
In its role, OFAC acts under Presidential wartime and national emergency powers to impose controls on transactions and freeze foreign assets under U.S. jurisdiction. Such sanctions are designed to immobilize assets and deny the targeted country, groups, or individuals access to the U.S. financial system and the benefits of trade and transactions involving U.S. businesses and individuals. Examples of OFAC’s activities include administering prohibitions contained in congressionally mandated programs involving terrorism and narcotics—these include those required by the Anti-Terrorism and Effective Death Penalty Act of 1996, P.L. 104-132 and the Foreign Narcotics Kingpin Designation Act, P.L. 106-120, Title VIII. The State Department’s role in addressing international crime is both diplomatic and programmatic. In carrying out this role, the State Department’s primary focal point for all international narcotics and international criminal matters is the Assistant Secretary for the Bureau for International Narcotics and Law Enforcement Affairs (INL). For drug control and anticrime issues, the Department’s Bureau of International Organization Affairs works with INL in coordinating interactions with agencies of the United Nations system. Furthermore, State’s geographic bureaus—such as the Bureau of European Affairs, and the Bureau of South Asian Affairs—have responsibilities in guiding U.S. diplomatic operations in their respective areas. The Office of the Coordinator for Counterterrorism—within the Office of the Secretary of State—is responsible for the overall supervision of international counterterrorism activities. The Bureau of Diplomatic Security manages multiple anticrime efforts and, according to State, is the primary point of contact for host nations’ law enforcement entities in their efforts to work collaboratively with the United States in combating international crime. 
The Office of the Legal Adviser, in coordination with Justice’s Office of International Affairs, is responsible for negotiating and bringing into force bilateral and multilateral agreements that provide for the extradition of fugitives and for assistance and cooperation by law enforcement authorities in criminal cases in U.S. or foreign courts. INL has broad responsibility for federal law enforcement policy and program coordination in the international area. INL funds various bilateral and multilateral international drug and crime control programs to accomplish its goals and objectives. In this regard, INL administers an annual budget of over $200 million in assistance—appropriated under annual Foreign Operations bills—to foreign countries. INL played a central role in developing the 1998 International Crime Control Strategy. In 1999, INL organized and coordinated the Vice President’s Global Forum on Fighting Corruption. This effort included participants from 90 nations and various multilateral and nongovernmental organizations. Since that time, INL has continued to coordinate a number of international anticorruption initiatives and activities. According to State, INL’s most important initiative in terms of funding is counternarcotics assistance in support of Plan Colombia, a combination of interdiction, eradication, and alternative development as well as rule of law and development assistance. State has sent to the Congress a proposal entitled the Regional Andean Initiative, which expands key parts of Plan Colombia, primarily the rule of law and economic development portions, to Bolivia, Brazil, Ecuador, Panama, Peru, and Venezuela. Regarding future initiatives, INL plans to pursue efforts to establish an ILEA for Central/South America, in addition to those already established in Budapest, Bangkok, and Gaborone (Botswana). 
According to INL, due to endemic widespread poverty, weak police and judicial infrastructure, and governmental corruption, Africa is a fertile ground for a growing international crime threat. A new graduate-level facility is set to open in Roswell, New Mexico, in September 2001. A second initiative is to establish a reserve of up to 2,000 civilian police officers, similar in concept to the National Guard. According to INL, the United States currently contributes over 700 civilian police officers worldwide to international law enforcement operations. INL is to respond to requests by directing a contractor to recruit, select, and train U.S. law enforcement personnel for missions. In this endeavor, police officers are to volunteer but remain in their regular jobs until called for active duty. A third initiative involves the creation of an interagency Migrant Smuggling and Trafficking in Persons Coordination Center designed to develop strategies and coordinate intelligence and other information. The Bureau of International Organization Affairs is charged with developing and implementing the policies of the U.S. government with respect to the United Nations and its affiliated agencies, as well as within certain other international organizations. The Bureau is to engage in what is known as multilateral diplomacy to promote and defend the various overlapping interests of the American people. More specifically, with respect to international crime-related issues, the Bureau is to support efforts in the areas of nonproliferation, nuclear safeguard, arms control, and efforts to combat terrorism, organized crime, and narcotics trafficking; democratic principles and the rule of law in government and politics; and human rights, including the advancement of women’s rights. 
On a less global scale, State’s geographically defined bureaus—for Africa, East Asia and the Pacific, Europe, the Near East, South Asia, the Western Hemisphere, and the New Independent States—are to guide the operation of the U.S. diplomatic missions within their regional jurisdiction. These bureaus are to work closely with U.S. embassies and consulates overseas and with foreign embassies in Washington, D.C. Unlike the Bureau of International Organization Affairs—which engages in multilateral diplomacy—the geographic bureaus are to coordinate the conduct of bilateral foreign relations. For example: Europe. The Bureau of European Affairs is responsible for developing, coordinating, and implementing U.S. foreign policy on a variety of issues dealing with national security, economic prosperity, democracy, human rights, protection of the environment, halting the proliferation of weapons of mass destruction, and combating terrorism and international crime. A key policy goal is the establishment of an integrated system to enhance regional stability and security, involving the North Atlantic Treaty Organization, cooperation with Russia, the Organization for Security and Cooperation in Europe, the European Union, and the treaty on Conventional Armed Forces in Europe. Western Hemisphere. The Bureau of Western Hemisphere Affairs is responsible for managing and promoting U.S. interests in the region by supporting democracy, trade, and sustainable economic development, and fostering cooperation on issues such as drug trafficking and crime, poverty reduction, and environmental protection. A key initiative supported by the Bureau is “Plan Colombia”—an integrated strategy for promoting the peace process, combating the narcotics industry, reviving the Colombian economy, and strengthening Colombia’s democratic society. The Office of the Coordinator for Counterterrorism has the primary responsibility for developing, coordinating, and implementing U.S. 
international counterterrorism policy. The office chairs the Interagency Working Group for Counterterrorism—to develop and coordinate policy—and State’s own task force on counterterrorism to coordinate the response to international terrorist incidents that are in progress. According to State, in order to ensure better interagency coordination, officers from the FBI and the Central Intelligence Agency are detailed to the office. In addition, the office coordinates U.S. government efforts to improve counterterrorism cooperation with foreign governments, including the policy and planning of State’s Antiterrorism Assistance Program. This program is intended to provide assistance, including training and equipment, to foreign countries to enhance the ability of their law enforcement personnel to deter terrorism and terrorist groups from engaging in international terrorist acts. In addition to its security and protection roles both domestically and abroad, the Bureau of Diplomatic Security is responsible for the investigation of passport and visa fraud, which are often linked to the movement of international criminals. The Bureau also coordinates State’s anti-terrorism and anticrime “Rewards” efforts; coordinates investigative leads overseas for State and other U.S. federal, state, and local law enforcement agencies; and provides anti-terrorism training to both U.S. and foreign government law enforcement agencies. USAID has a twofold purpose of furthering U.S. foreign policy interests in expanding democracy and free markets, while improving the lives of the citizens of the developing world. In doing this, USAID is the principal U.S. agency to provide assistance to countries recovering from disaster, trying to escape poverty, and engaging in democratic reforms. Although USAID is an independent federal government agency, it receives overall foreign policy guidance from the Secretary of State. 
With respect to narcotics and crime control, USAID is responsible for designing and implementing development assistance programs—for example, assistance to drug-producing countries to diversify their economies away from dependency on illegal drugs and towards open market economies. In the short term, USAID is responsible for alleviating the economic and social dislocation resulting from successful drug control programs. In the longer run, USAID’s mandate includes strengthening democratic institutions and the respect for human rights. USAID sponsors anti-drug education programs designed to build institutions overseas to address the growing problem of drug abuse. USAID also funds justice programs to strengthen host nation capability to prosecute criminal cases in court and to develop and implement laws to deter criminal elements. USAID identified four of its bureaus and five of their components—offices or centers—and its Office of the Inspector General (OIG) as having roles in combating various types of international crime. The four bureaus (and their relevant offices or centers) are the (1) Bureau for Policy and Program Coordination (which includes the Office of Program Coordination and the Office of Policy Development and Coordination); (2) Bureau for Global Programs (which includes the Center for Democracy and Governance and the Center for Economic Growth); (3) Bureau for Europe and Eurasia; and (4) Bureau for Humanitarian Response (which includes the Office of Transitional Initiatives). The Center for Democracy and Governance is representative of USAID’s efforts against international crime. To further support and advance USAID’s democracy and governance program, the Center for Democracy and Governance was founded in May 1994. The Center is to help USAID field missions design and implement democracy strategies, provide technical and intellectual leadership in the field of democracy development, and manage some USAID programs directly. 
The Center is organized along the lines of USAID’s strategic framework for democracy and governance. The framework has four objectives: (1) rule of law (strengthening legal systems); (2) elections and political processes (conducting elections and developing political parties and educating voters); (3) civil society (promoting a politically active civil society); and (4) governance (promoting accountable and transparent government institutions). Under the rule of law objective, the Center’s efforts to strengthen legal systems—in conjunction with the activities of USAID missions—fall under three interconnected priority areas, each of which is to integrate human rights concerns: supporting legal reform, improving the administration of justice, and increasing citizens’ access to justice. For example, the Center is represented on an advisory committee, which was established to enhance interagency communication and coordination in the areas of police and prosecutor training and development. With respect to the governance objective, the Center is to concentrate on the following five areas: legislative strengthening, decentralization and democratic local governance, anticorruption, civil-military relations, and improving policy implementation. For example, the Center has provided financial support to Transparency International, a nongovernmental organization dedicated to generating public support and action for anticorruption programs and enhancing transparency and accountability in governments worldwide. Overall, USAID has anticorruption activities in 54 countries, and the Center manages country-specific anticorruption programs valued at $19 million. In addition to its efforts against corruption, USAID has activities that are designed to address other types of international crime and support the 1998 International Crime Control Strategy. 
For example, to counter narcotics, USAID has implemented “alternative development” programs in several coca-producing countries, such as Peru and Bolivia. Such programs are intended to strengthen the coca-producing areas’ licit economies and improve their social and economic infrastructure. According to USAID, since 1995, the areas used in Peru for coca cultivation have declined by 70 percent. USAID also supports efforts against trafficking in precious gems, violations of intellectual property rights, environmental crimes, trafficking in women and children, and financial fraud. In addition, the USAID OIG’s Investigations and Audit Divisions have investigated incidents of financial fraud related to the agency’s developmental, humanitarian, and reconstructive aid programs around the world. According to USAID, recent successful OIG investigations of entities involved in financial fraud related to USAID programs in the United States and overseas have resulted in the recovery of more than $100 million in fines. Discussed below are examples that illustrate the federal response to specific types of international criminal activity—such as corruption and terrorism—and at particular physical locations—such as ports of entry. The federal response in these areas includes numerous entities within Justice, Treasury, and State, as well as various other federal departments and agencies. Regarding specific types of international crime, in an earlier report, we identified at least 35 federal entities—consisting of 7 cabinet-level departments and 28 related agencies, bureaus, and offices—that had a role in providing rule-of-law assistance to fight corruption during fiscal years 1993 to 1998. Appendix V provides a complete listing of the 35 federal entities. In terms of the response to terrorism, we previously identified 43 federal agencies, bureaus, and offices that have terrorism-related programs or activities. 
These entities included the departments of Justice, Treasury, and State and their components (as discussed in this appendix), as well as other federal entities such as NSC and the Central Intelligence Agency. In August 2000, responding to an April 1999 Executive Memorandum from the President, the Interagency Commission on Crime and Security in U.S. Seaports issued a report detailing, among other things, the missions and authorities of federal entities handling crime at seaports. Many of the crimes cited by the Commission—such as terrorism and alien smuggling—fit our definition of “international crime.” On the basis of its review of 12 of the 361 U.S. seaports, the Commission identified 10 federal departments, 25 of their components, and 6 other federal entities that are involved or interested in seaport operations. Fifteen of these—including the Departments of Treasury and State, the FBI and Customs, and EPA—were also identified as having jurisdiction over and a role in combating criminal activity at the seaports reviewed. Table 5 presents information about the types of criminal activity encountered at seaports and the relevant federal (and state and local) entities with jurisdiction over these activities. We identified a group responsible for the executive-level coordination of international crime. We also identified a number of coordination mechanisms at the operational level focusing on specific types or aspects of international crime, as well as particular geographic areas. Various officials we contacted identified challenges involved in coordinating the response to international crime. Our prior work has stressed the need for sustained executive-level coordination of crosscutting efforts that address national issues. Regarding coordination of the overall federal response to international crime, PDD-42 established the Special Coordination Group on International Crime (SCG) to ensure sustained and focused attention on international crime fighting. 
The SCG was composed of high-level officials from relevant federal entities, including Justice, Treasury, and State and was chaired by a senior NSC official. A number of subgroups—one for each of the types of international crime enumerated in the crime control strategy—were also formed. Because the SCG’s and its subgroups’ proceedings—and any results and products—are classified, they are not discussed in this report. Separately within NSC, the National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism—as Special Assistant to the President—is intended to be responsible for interagency coordination on issues related to international organized crime. In addition, the Office of Transnational Threats is intended to be the NSC point of contact on international narcotics issues. According to an NSC official, two NSC staff were assigned full-time to international crime coordination matters. In response to our review, a number of officials identified some challenges faced by the SCG in implementing its role. For example, according to State and NSC officials, the SCG was to meet periodically to discuss matters related to the response to international crime. After meeting 14 times from about mid-1998 through mid-1999, the SCG did not subsequently meet very frequently. Specifically, according to an NSC official, the SCG did not meet at all for almost 9 months—from about September 1999 to June 2000—in part because some of its members were involved in other activities, such as working on year 2000 computer compliance matters; and in part, because of staffing shortages. According to this official, the SCG met four times each in 1999 and 2000. This official also noted that the coordination of the federal response to international crime—given its scope and number of participants—could be further improved. 
In this regard, he stated that the SCG had been a step in the right direction toward improving coordination and had worked reasonably effectively in certain instances. Separately, USAID officials pointed out that while the SCG was an effective way to share information among agencies, it lacked the authority to broker differences between agencies or between headquarters and field units. The SCG was abolished by National Security Presidential Directive 1 (NSPD-1)—which was issued in February 2001 by the new administration—and the directive did not designate a specific successor at that time. In addition, absent naming a successor for the SCG, the directive did not identify which of the 17 geographic and functional Policy Coordination Committees (PCC) it established were to handle coordination of federal efforts against international crime. Subsequently, as discussed briefly in appendix II, in April 2001, the Assistant to the President for National Security Affairs—as part of the Bush administration’s ongoing review of international crime, terrorism, and critical infrastructure—established a PCC for International Organized Crime. This PCC is to be composed of officials at the Assistant Secretary level from relevant federal entities and is to be chaired by the NSC Senior Director for Transnational Threats. The PCC is intended to coordinate policy formulation, program oversight, and new initiatives related to a number of international crime issues not directly related to counterterrorism, including arms trafficking, trafficking in women and children, and foreign official corruption. According to an NSC official, as its first task, the PCC is expected to evaluate the 1998 International Crime Control Strategy to reflect changes in the threat from international crime as described by the December 2000 International Crime Threat Assessment. This official did not provide a time frame for beginning and completing the evaluation of the strategy. 
Regarding the coordination of specific types or aspects of international crime, a number of coordination centers, interagency coordinators, and coordination bodies and working groups have been established in recent years. For example, State and Justice created a Migrant Smuggling and Trafficking in Persons Coordination Center to achieve greater integration and overall effectiveness of the U.S. effort to combat trafficking in persons and smuggling of migrants. In addition, FBI and Customs formed a center to fight intellectual property rights violations. Among other things, the center is to coordinate all U.S. government domestic and international law enforcement activities involving intellectual property rights and to serve as the collection point for intelligence provided by private industry. Also, as discussed in appendix V, in 1999, State appointed a coordinator for rule of law assistance programs. According to State, the position lapsed at the end of the Clinton administration and has not been reestablished. In addition, federal entities identified a variety of mechanisms, working groups, and organizations and law enforcement entities with which they coordinate their international crime activities. Within Justice, for example, the FBI coordinates its activities against financial fraud through the International Securities Working Group. Furthermore, the FBI coordinates its activities against various types of international crime with foreign police organizations—such as the Royal Canadian Mounted Police—and domestic law enforcement entities, such as Customs and INS. The FBI coordinates its training activities through a variety of means, such as the International Law Enforcement Academy Steering Committee. DEA coordinates its drug enforcement efforts through interagency coordinating groups or committees, such as “Linear” for cocaine and “Linkage” for heroin. 
INS’ Border Patrol coordinates its alien smuggling efforts through the Justice Alien Smuggling Task Force and the Interagency Working Group on Smuggling and Trafficking. Within Treasury, for example, Customs coordinates its high-tech crime efforts with the G-8’s High Tech Crime Sub-Group, as well as with DEA and FBI. Also, Customs coordinates its terrorism efforts through, among others, the Interagency Intelligence Committee on Terrorism and its efforts against intellectual property crimes through the National Intellectual Property Law Enforcement Coordination Council. ATF coordinates its arms-trafficking efforts with, among others, Customs and INTERPOL. IRS-CI coordinates many of its anti-money laundering efforts with components of Justice and Treasury. Within State, for example, INL coordinates with the Departments of Justice and Defense, among others, on the designation of major narcotics transit and trafficking countries and on decisions to certify countries as cooperating with the United States in counternarcotics efforts. As discussed earlier, the Office of the Coordinator for Counterterrorism coordinates the response to international terrorism with, among others, FBI and the Central Intelligence Agency. Within USAID, the Global Bureau’s Center for Democracy and Governance coordinates its public corruption activities through the SCG’s Sub-Group on Diplomatic Initiatives and Institutional Development. Internationally, USAID coordinates with multinational entities, such as the Organization for Economic Cooperation and Development, and with nongovernmental entities, such as Transparency International. In addition to the challenges related to the SCG discussed earlier, various federal entities identified a number of challenges in coordinating their efforts against international crime. 
For example: Customs identified an absence of mechanisms to share data in a timely fashion and restrictions related to the sharing of sensitive information, especially with the intelligence community. The FBI cited challenges in obtaining evidence from foreign law enforcement agencies necessary to support U.S. criminal charges, such as predicate acts for money laundering. USAID noted that the large number of actors involved and the diffuse nature of decisionmaking—between field offices and headquarters and among the actors—posed particular coordination challenges for anticorruption efforts. State noted the challenge of overlapping responsibilities and competition for limited resources among federal agencies and the mismatch of institutions and expertise between U.S. and foreign law enforcement agencies, including different definitions of crime and the capacity to “absorb” training. Our previous work has shown that extensive federal crosscutting responses to national issues—such as international crime—require a high level of sustained coordination. Our work has also shown that such high-level coordination can bring about the required firm linkages of threat assessments, strategy and prioritization of effort, resource allocation and tracking, and outcome-oriented performance measures. Otherwise, our work has concluded, scarce resources are likely to be wasted, overall effectiveness will be limited or not known, and accountability will not be ensured. In this regard, we note that the establishment of the PCC for International Organized Crime is a step in the right direction in seeking to provide coordination and oversight of the federal response to international crime. On the basis of the known details about its role and priorities, the PCC appears to address some of the coordination and related issues we discuss in this report, such as evaluating the International Crime Control Strategy in light of any changes in the threat from international crime. 
The federal effort to combat terrorism—one of the activities in our definition of international crime—illustrates some of the challenges involved in implementing crosscutting responses to complex public problems and national issues. Specifically, our work pointed out that the counterterrorism effort has been prone to problems with interagency coordination. Our work noted, for example, that the federal agencies were not tracking expenditures or developing priorities for the billions of dollars being invested in an increasing number of counterterrorism programs. These resources and programs, in turn, had not been clearly linked to sound threat analyses. This situation had created the potential for various federal entities creating their own programs without adequate coordination, with the further potential for gaps and/or duplication. In response, we recommended that, among other things, the federal government conduct sound threat assessments to define and prioritize requirements and properly focus programs and investments in combating terrorism. In commenting on a draft of this report, Justice—as it had done in commenting on our report that originally raised this issue—disputed the conclusions that there were major problems with interagency coordination of terrorism activities and that sound threat assessments were not being conducted and used to define, prioritize, and address current terrorism threats. Furthermore, as with the prior report, Justice reiterated its position that the Attorney General’s Five Year Interagency Plan on Counterterrorism and Technology Crime included an articulation of goals, objectives, and time frames and that—together with a number of presidential directives—the Plan essentially constituted a baseline national strategy to counter terrorism. As presented in appendix II, the International Crime Control Strategy, announced in May 1998, consists of 8 overarching goals and 30 implementing objectives. 
Of these totals, one goal and two implementing objectives address the topic of corruption, as table 6 shows. In response to our inquiries regarding federal efforts to address international corruption, a senior Department of State official confirmed that the International Crime Control Strategy has two implementing objectives that address corruption. Furthermore, in providing perspectives on the two objectives, the official commented substantially as follows: One objective addresses corruption and bribery in the context of transnational business practices, particularly regarding government procurement contracts. This context may involve, in a hypothetical example, bribes resulting in the purchase of French Mirage versus U.S. F-16 military aircraft. In sum, this section of the strategy addresses the type of corruption that is of direct concern to competing transnational businesses. The other objective addresses corruption in a broader context—a rule of law context—wherein corruption among justice and security officials has a special significance. These officials are charged with upholding the rule of law for governments, which establishes the basic framework within which all elements of society, including business, are to operate. Widespread corruption among justice and security officials can potentially destabilize governments. According to Commerce and State Department reports, the bribery of foreign public officials is a deeply embedded practice in many countries. For example, in the period from May 1994 through April 2001, Commerce received reports that the outcome of 414 contracts valued at $202 billion may have been affected by bribery involving foreign firms. During this period, U.S. firms are alleged to have lost 101 of these contracts worth approximately $30 billion because of this corrupt practice. In recent years, a variety of anticorruption and transparency initiatives have been considered by various international governmental entities. 
Furthermore, a number of legal and business associations and nongovernmental organizations have had key advisory roles in developing the various anticorruption initiatives. Of the various initiatives, an international agreement adopted by the Organization for Economic Cooperation and Development (OECD) has been described as the “centerpiece of a comprehensive U.S. government strategy to combat bribery and corruption” in international business transactions. This agreement is the OECD Convention on Combating Bribery of Foreign Public Officials in International Business Transactions. In 1977, the United States enacted the Foreign Corrupt Practices Act (FCPA), 15 U.S.C. 78dd-1, et seq., 78ff, which makes it unlawful to bribe foreign government officials for the purpose of obtaining or retaining business. Subsequently, partly as a result of U.S. leadership efforts to create a level playing field among the world’s major trading nations, an international anti-bribery agreement was created and entered into force in 1999. This agreement—the OECD Convention on Combating Bribery of Foreign Public Officials in International Business Transactions—obligates its parties to criminalize the bribery of foreign public officials in order to obtain or retain business or other improper advantage in the conduct of international business. In effect, the OECD Convention internationalizes the principles in the FCPA. 
The State Department’s Assistant Secretary for Economic and Business Affairs has described the OECD Convention as “our principal weapon for combating a particularly damaging form of corruption, the payment of bribes to foreign officials in international business transactions, sometimes referred to as the ‘supply side’ of bribery.” The OECD Convention provides a mechanism for monitoring—through a peer review process following the model used by the FATF—the quality of the implementing legislation enacted by each participating nation and the effectiveness of efforts to enforce relevant national laws. Also, under U.S. law, the Departments of State and Commerce are required to provide the Congress with annual reports on the implementation of the OECD Convention. State and Commerce submitted their most recent annual reports in June 2001 and July 2001, respectively. The State and Commerce reports presented similar key points and findings. For instance, in its July 2001 report, Commerce noted that: Further progress has been made on the first priority of ensuring that all signatories ratify the OECD Convention and enact implementing criminal legislation prohibiting the bribery of foreign government officials. Thirty-three of the 34 signatories had deposited instruments of ratification and 30 had legislation in place to implement the Convention. As of June 4, 2001, Brazil, Chile, Ireland, and Turkey had not ratified and/or enacted implementing legislation. Countries that have ratified the Convention had generally taken a serious approach to fulfilling their obligations on criminalizing the bribery of foreign government officials. During the Phase I monitoring procedure in the OECD’s Working Group on Bribery, all 28 signatories with implementing legislation have had such legislation reviewed. On the basis of its own review of implementing legislation, the U.S. 
government is concerned that some countries’ legislation—particularly, that of France, Japan, and the United Kingdom—may be inadequate to meet all of their commitments under the Convention. Since the Convention had been in force for only a short time, it was still too early to make judgments regarding the effectiveness of enforcement measures. According to Justice’s Criminal Division, the OECD’s Working Group on Bribery will embark later this year upon Phase II of the monitoring procedure. Phase II reviews will focus on the quality of enforcement under each signatory’s implementing criminal legislation. As mentioned previously, in addition to the OECD Convention, a variety of other anticorruption and transparency initiatives have been started by various international governmental entities, including the Organization for Security and Cooperation in Europe (OSCE), the Organization of American States (OAS), the Asia Pacific Economic Cooperation Forum, the Global Coalition for Africa, and the United Nations. An example of these initiatives is OSCE’s Charter for European Security, Rule of Law, and Fight Against Corruption. Furthermore, key advisory roles in developing the various anticorruption initiatives have involved a number of legal and business associations and nongovernmental organizations—such as the American Bar Association, the U.S. Chamber of Commerce, the International Chamber of Commerce (ICC), and Transparency International. An example of these initiatives is ICC’s Rules of Conduct and Bribery, which are to apply to business conducted across borders. More information about these various initiatives is presented in a May 2000 brochure prepared by the State Department, in consultation and cooperation with other federal entities. The brochure was developed as an outreach effort to provide U.S. companies and business associations with information about the benefits of corporate anti-bribery policies, as well as give guidance on the requirements of U.S. 
law and the OECD Convention. In recent years, in addition to the OECD Convention focusing on transnational bribery, a number of broad-based multilateral regional initiatives against corruption have been developed. According to Justice’s Criminal Division, efforts in this area in Europe and the Western Hemisphere are currently the most developed. In this regard, the United States has provided assistance worldwide to support the development of democratic principles and institutions, although the effectiveness of some of this assistance has been recently questioned. In 1999, the Council of Europe’s (COE) Criminal Law Convention Against Corruption was opened for signature. In general, the COE Convention obligates state parties to criminalize a wide variety of domestic and international bribery offenses and related money laundering offenses, as well as to adopt asset forfeiture and international legal assistance measures. The COE Convention also provides that the Group of States Against Corruption (GRECO) shall monitor parties’ implementation of the Convention. GRECO is a peer review mechanism through which members evaluate each other’s implementation of the COE Convention as well as a variety of preventative measures against corruption. The United States signed the Convention and joined GRECO in fall 2000. A number of Eastern and Central European countries—such as Romania, Croatia, Georgia, and Latvia—have also joined GRECO. Several U.S. government agencies are providing corruption experts to participate in GRECO evaluations of other countries. The Departments of Justice and State expect that over time, the GRECO evaluations will not only encourage internal reforms but also help the United States and other donor countries better target anticorruption technical assistance. In 1996, negotiation of the Inter-American Convention Against Corruption was completed. 
The Convention obligates state parties to criminalize domestic bribery, including the fraudulent use or concealment of property derived from such acts of bribery, and to criminalize transnational bribery, if consistent with the state’s constitution and legal system. It also encourages state parties to the Convention to adopt a broad range of preventive measures, including open and equitable systems for government hiring and procurement, standards of conduct for public servants, financial disclosure registration systems for certain public servants, and anticorruption oversight bodies. Twenty-two OAS countries, including the United States, have ratified the Convention. In May 2001, the state parties to the Convention concluded negotiation of a follow-up mechanism whereby international teams of experts are to review the level of implementation by each party. The mechanism was established by the state parties to the Inter-American Convention by means of a declaration signed on the margins of the June 2001 meeting of the OAS General Assembly. Additional anticorruption efforts, outside the framework of a formal instrument, are reflected in the First Global Forum on Fighting Corruption, which was hosted by the Vice President and held in Washington, D.C., in February 1999. Forum participants—from 90 governments—agreed to a final conference declaration that called on governments to (1) adopt principles and effective practices to fight corruption, (2) promote transparency and good governance, and (3) establish ways to assist each other through mutual evaluation. During May 28-31, 2001, 143 countries attended the Second Global Forum on Fighting Corruption at The Hague in the Netherlands. The Forum was hosted by the Dutch government and co-sponsored by the United States. The U.S. Attorney General led the U.S. delegation. 
These global efforts have been characterized as important for securing public integrity and controlling corruption among government officials, especially those responsible for maintaining the rule of law. In the early 1980s, as a way to support democratic principles and institutions, the United States began helping Latin American countries improve their judicial and law enforcement organizations. Until 1990, such assistance was provided primarily to Latin American and Caribbean countries. Since the breakup of the Soviet Union, however, the United States has also provided rule of law and related assistance to Central and Eastern Europe and other regions of the world. Generally, the phrase “rule of law assistance” refers to U.S. efforts to support legal, judicial, and law enforcement reform efforts undertaken by foreign governments. The term encompasses assistance to help reform legal systems (criminal, civil, administrative, and commercial laws and regulations) as well as judicial and law enforcement institutions (ministries of justice, courts, and police, including their organizations, procedures, and personnel). Also, the term includes assistance ranging from long-term reform efforts, with countries receiving funding over a period of years, to one-time training courses provided to the police or other law enforcement organizations. In a 1999 report to congressional requesters, who asked us to identify the amount of U.S. rule of law funding provided worldwide (by region and country) in fiscal years 1993-98, we noted that such data were not readily available for various reasons, including the following: The departments and agencies involved did not have an agreed-upon definition of what constitutes rule of law activities. Some entities could not provide funding data for all the years of interest or had other problems in compiling the information we requested. 
Nonetheless, based on data that cognizant departments and agencies made available, our 1999 report presented a funding summary (see table 7) and made the following observations: The United States provided at least $970 million in rule of law assistance to countries throughout the world during fiscal years 1993-98. Some assistance—ranging from $138 million for Haiti to $2,000 for Burkina Faso—was provided to 184 countries. Over the 1993-98 period, the largest recipient of U.S. rule of law assistance was the Latin America and Caribbean region, which accounted for $349 million, or more than one-third of the total assistance. However, in the more recent years of the period, Central European countries received an increasing share. In 1998, for instance, the largest regional recipient was Central Europe, which accounted for about one-third of all rule of law assistance. In our 1999 report, we also noted that at least 35 federal entities—consisting of 7 cabinet-level departments and 28 related agencies, bureaus, and offices—had a role in providing rule of law assistance programs. These entities are listed in table 8. Regarding overall responsibility for coordinating rule of law programs and activities, our 1999 report noted that: There have been longstanding congressional concerns that rule of law coordination efforts among the numerous departments and agencies in Washington, D.C., were ineffective. Thus, in February 1999, State appointed a rule of law coordinator, whose principal mandate is to work with all the relevant U.S. governmental entities to develop a framework for future U.S. international rule of law assistance efforts. In addition, the coordinator is to be the principal U.S. liaison to other donors and private sector organizations involved in rule of law activities. In April 2001, we reported on rule of law assistance to 12 countries of the former Soviet Union. 
We concluded that—after 10 years and almost $200 million in funding—such assistance had produced limited results. Also, the report questioned the sustainability—the extent to which the benefits of a program extend beyond its life span—of the rather limited results that had been achieved, and attributed the lack of impact and sustainability to a number of factors, such as limited political consensus on reforms in recipient countries, a shortage of domestic resources for many of the more expensive innovations, and weaknesses in the design and management of assistance programs by U.S. agencies. The report recommended that program management be improved by implementing requirements for projects to include specific strategies for (1) achieving impact and sustainable results and (2) monitoring and evaluating outcomes. Much of the technical assistance that the United States provides to other nations for fighting international crime involves training, particularly training at law enforcement academies established abroad. The Department of Justice’s technical assistance efforts include two units within the Criminal Division—(1) the International Criminal Investigative Training Assistance Program (ICITAP) and (2) the Overseas Prosecutorial Development, Assistance and Training (OPDAT)—which attempt to strengthen police and legal systems in foreign countries. Justice, Treasury, State, and the U.S. Agency for International Development (USAID) provide a number of other training programs. In addition to training, U.S. technical assistance includes providing foreign nations with information from computerized law enforcement databases and investigative and forensic services. An example of such assistance is the U.S. National Central Bureau of the International Criminal Police Organization’s (USNCB/INTERPOL) Notice Program. The International Law Enforcement Academies (ILEA) are a cooperative effort among the Departments of State (which provides funding), Justice, and Treasury. 
To accomplish overall coordination of the ILEAs domestically, a Policy Board was established that is composed of members from each Department and appointed by the Secretary of State, the Attorney General, and the Secretary of the Treasury. The mission of these academies has been to support emerging democracies; help protect U.S. interests through international cooperation; and promote social, political, and economic stability by combating crime. ILEAs also are to encourage strong partnerships among regional countries to address common problems associated with criminal activities. ILEAs have been established in Europe, Southeast Asia, and Southern Africa, and plans are underway to establish an ILEA in the Western Hemisphere to serve Central America and the Dominican Republic. State plans to open a graduate-level ILEA in Roswell, New Mexico, in September 2001. In 1995, the United States and the government of Hungary cooperated to create the first ILEA in Budapest, Hungary, under FBI leadership. This ILEA’s purpose is to train law enforcement officers from Central Europe and the newly independent states of the former Soviet Union. The academy offers two categories of courses:
Core course. An 8-week core course—a personal and professional development program—focuses on leadership, personnel and financial management issues, ethics, the rule of law, and management of the investigative process. Annually, according to the State Department, approximately 250 to 300 mid-level police officers and managers receive this training, which is provided by various U.S. agencies and Hungarian and Western European law enforcement agencies.
Specialized short-term courses. These courses provide law enforcement officers with training on combating various types of crime—for example, organized crime, financial crime, corruption, nuclear smuggling, illegal migration, and terrorism—including training on prosecuting criminal cases. 
Annually, according to the State Department, about 500 police, prosecutors, immigration specialists, and others participate in these courses. ILEA Southeast Asia—located in Bangkok, Thailand—opened in March 1999, under DEA leadership. Like the ILEA Budapest program, the purpose of the Bangkok ILEA is to strengthen regional law enforcement cooperation and improve performance. According to the State Department: This academy’s curriculum and structure are similar to those of ILEA Budapest, with the exception of a shorter core course (6 weeks). In 1999, over 700 law enforcement personnel representing 10 countries participated in courses at the academy. In July 2000, the State Department announced an agreement with the Government of Botswana to establish the ILEA for Southern Africa in Gaborone, under the leadership of FLETC. Similar in overall format to the other academies, ILEA Southern Africa is to follow the model developed for ILEAs in Budapest and Bangkok by providing courses on a wide range of law enforcement skills, including police survival, forensics, basic case management, fighting organized crime, supervisory police training, police strategy, narcotics identification and evidence handling, customs interdiction, illegal migration, and public corruption; and a permanent location from which to address special topics, such as stolen vehicles, money laundering, crimes against women, domestic violence, terrorism, and other critical topics such as human rights and policing. In September 2001, State will open a new ILEA in Roswell, New Mexico. This new facility, which will be open to graduates of the regional ILEAs, will offer shorter-term (4 weeks versus 8 weeks) advanced training with a greater focus on an academic versus practical or operational curriculum. Tailored to the regional needs of officials from Central/South America, pilot courses of ILEA Western Hemisphere have already been conducted at a temporary site in Panama. 
However, activities have been suspended until a permanent location can be selected. Two Justice units—ICITAP and OPDAT—are to work in tandem to strengthen justice systems abroad. The purpose of ICITAP—functionally located in Justice’s Criminal Division—is to provide training and development assistance to police organizations worldwide. That is, the mission of ICITAP is to support U.S. foreign policy and criminal justice goals by helping foreign governments develop the capacity to provide modern professional law enforcement services based on democratic principles and respect for human rights. The program was first created in 1986 to train police forces in Latin America on how to conduct criminal investigations. ICITAP’s activities have expanded worldwide since then and now consist of two principal types of assistance projects: (1) developing police forces in the context of international peacekeeping and (2) enhancing the capabilities of existing police organizations in emerging democracies. Specific ICITAP activities or projects are to be initiated at the request of the National Security Council and the Department of State, in agreement with the foreign governments requesting the assistance. Priority is to be given to countries in transition to democracy, where unique opportunities exist for major restructuring and refocusing of police and investigative resources toward establishment of the rule of law. Regarding funding, according to Justice, ICITAP is unique among federal law enforcement assistance programs in that ICITAP is not listed as a “line item” in Justice’s budget. Rather, most of ICITAP’s budget consists of project-specific funding, which is provided to Justice by the Department of State and USAID. For fiscal year 2000, according to Justice’s Criminal Division, ICITAP received $6.6 million for the Latin American Regional Program and $23.6 million for training and development projects in Africa, the Middle East, Eastern Europe, and the Far East. 
According to State, it has proposed to Justice that ICITAP be transferred to State. State believes that such a transfer would improve the linkage between policy and implementation, provide better financial and administrative support, and strengthen ICITAP’s ability to respond to fast developing situations abroad. Created in 1991, the Office of OPDAT (also in Justice’s Criminal Division) provides justice-sector institution-building assistance, including training of foreign judges and prosecutors, in coordination with various government agencies and U.S. embassies. Although part of the Criminal Division, OPDAT programs are funded principally by the Department of State and USAID. OPDAT programs take place in South and Central America, the Caribbean, Central and Eastern Europe, Russia, the new independent states of the former Soviet Union, Africa, the Middle East, and Asia and the Pacific Region. In many of these countries, OPDAT has placed “Resident Legal Advisors.” The advisors are experienced prosecutors who are intended to interact with local justice-sector officials and direct OPDAT assistance projects. These projects seek to strengthen the legislative and regulatory criminal justice infrastructure within the host country, and enhance the capacity of that country to investigate and prosecute crime more effectively, consistent with the rule of law. Furthermore, USAID—through its Center for Democracy and Governance—has an agreement with Justice regarding OPDAT. The agreement allows USAID missions around the world to access the Office of OPDAT for help in activities such as conducting justice sector assessments, reviewing laws and legislation, designing rule of law programs, and providing other technical assistance. 
Federal agencies—particularly Justice and Treasury—help foreign nations combat international crime by providing technical assistance in the form of access to and use of specialized support services and systems, such as computerized databases and forensic laboratories. The following descriptions are examples—and not a complete or exhaustive listing—of this type of assistance. Examples of Justice support services and systems that foreign law enforcement entities may access or use for combating international crime include the following: Federal Bureau of Investigation’s (FBI) National Crime Information Center (NCIC). NCIC, the nation’s most extensive computerized criminal justice information system, consists of a central computer at FBI headquarters, dedicated telecommunications lines, and a coordinated network of federal and state criminal justice information systems. The center provides users with access to files on wanted persons, stolen vehicles, and missing persons, as well as millions of criminal history information records contained in state systems. Data in NCIC files are exchanged with and for the official use of authorized officials of the federal government, the states, cities, and penal and other institutions, as well as certain foreign governments. Drug Enforcement Administration’s (DEA) El Paso Intelligence Center (EPIC). Established in 1974, EPIC is a multiagency tactical drug intelligence center managed by DEA. EPIC’s mission is to support counterdrug efforts through the exchange of time-sensitive, tactical intelligence dealing principally with drug movement. Today, EPIC’s focus has broadened to include all of the United States and the Western Hemisphere where drug and alien movements are directed toward the United States. 
Through information sharing agreements with federal law enforcement agencies, the Royal Canadian Mounted Police, and the 50 states, EPIC can provide requesters with real-time information from different federal databases and EPIC’s internal database. INTERPOL’s Notice Program. Through the circulation of international notices, INTERPOL disseminates subject lookouts and advisories to member country police forces. These notices, color-coded to designate their specific purposes, are published at the request of a member country. INTERPOL members (such as USNCB) then receive and distribute the notices among appropriate law enforcement authorities within their respective countries. Ten different types of notices exist to communicate various kinds of criminal information. For example, a “red notice” indicates a wanted fugitive—that is, a subject for whom an arrest warrant has been issued and where extradition will be requested. Examples of Treasury support services and systems that foreign law enforcement entities may access or use for combating international crime include the following: Customs’ National Intellectual Property Rights Coordination Center. Located at Customs Service headquarters in Washington, D.C., the center’s core staffing consists of Customs Service and FBI personnel. The center’s responsibilities include (1) coordinating all U.S. government domestic and international law enforcement activities involving intellectual property rights (IPR) issues and (2) integrating domestic and international law enforcement intelligence with private industry information relating to IPR crime. According to Customs, particular emphasis is given to investigating major criminal organizations and those using the Internet to facilitate IPR crime. Bureau of Alcohol, Tobacco and Firearms’ (ATF) National Tracing Center. Through its National Tracing Center, ATF traces firearms for federal, state, and foreign law enforcement agencies. 
The firearms are traced from the manufacturer to the retail purchaser for the purpose of aiding law enforcement officials in identifying suspects involved in criminal activity. By examining patterns in aggregates of traces, gun tracing can help identify opportunities for intervention on the supply side of illegal firearm markets. Such intervention can then reduce further trafficking and associated violent crime. For example, ATF’s Project LEAD—an automated data system that tracks illegal firearms—is designed to help identify recurring patterns of illegal firearm suppliers, both in the United States and across international borders, and provide evidence for prosecution. Financial Crimes Enforcement Network’s (FinCEN) support of financial intelligence units. FinCEN supports the development of financial intelligence units in other nations to help facilitate the exchange of information to assist anti-money laundering investigations, detect criminal abuse of the financial system, and ensure adherence to laws against financial crime. Working together, these financial intelligence units have created a secure communication network—developed by FinCEN— which permits the units and FinCEN to post and access information about money laundering trends, financial analysis tools, and technological developments. Existing frameworks for measuring the effectiveness of federal efforts to address international crime include (1) the International Crime Control Strategy, (2) Government Performance and Results Act (GPRA) strategic and performance plans prepared by federal departments and agencies, and (3) crime-specific national strategies. As we have previously reported, for any given program area, virtually all the results that the federal government strives to achieve require the concerted efforts of two or more agencies. 
The International Crime Control Strategy represents a national strategic plan for combating international crime and reducing its adverse impacts on the American people. The strategy articulates 8 overarching goals and 30 related objectives as a blueprint for a coordinated, long-term attack on international crime. Each of the eight general goals is associated with a number of specific implementing objectives—with the expectation that achieving the objectives will result in reaching the overall goal. To further describe how the objectives will be achieved, the strategy outlines specific programs and initiatives that will be carried out to address each identified objective. To illustrate, goal 2 of the stragegy is “Protect U.S. borders by attacking smuggling and smuggling-related crimes.” Four implementing objectives are associated with achieving this goal: Enhance our land border inspection, detection, and monitoring capabilities through a greater resource commitment, further coordination of federal agency efforts, and increased cooperation with the private sector. Improve the effectiveness of maritime and air smuggling interdiction efforts in the transit zone. Seek new and stiffer criminal penalties for smuggling activities. Target enforcement and prosecutorial resources more effectively against smuggling crimes and organizations. Furthermore, for each of the four objectives, the strategy identifies programs and initiatives that are to take place to carry out the objective. Regarding the first objective, for example—“Enhance our land border inspection, detection, and monitoring capabilities”—these programs and initiatives include the following: (1) implementing the Southwest border strategy, (2) deploying new detection and identification technology, and (3) cooperating with the private sector. Under goal 8 of the International Crime Control Strategy (“Optimize the full range of U.S. 
efforts”), one of the objectives is to develop measures of effectiveness to assess progress over time. Specifically, the purpose of this objective is to establish a system to measure progress on the major goals of the strategy, provide feedback for the strategy refinement and system management, and assist the administration in resource allocation. Moreover, as stated in the strategy, the goals and objectives are dynamic and are expected to evolve over time as conditions change, new crime trends emerge, and improved anticrime techniques are developed. As described in the strategy, the performance measurement system is to be designed to quantify the measurement of results in the following areas: Disrupting major criminal organizations. Reducing criminal activity at our borders. Improving coordination among U.S. agencies. Improving coordination with other nations against criminal targets. Increasing adoption of international standards and norms to combat crime. Securing passage and implementation of major anticrime conventions internationally. Reducing incidence and costs to the United States of intellectual property theft and economic crime. Improving the coordination of international investigations into and prosecutions of high-tech crime. Strengthening international capabilities against smuggling and raising the cost of smuggling activities to smugglers. Strengthening international cooperation against alien smuggling and reducing the flow of illegal migrants to the United States. Fighting money laundering and financial crime. Increasing the number of nations that extradite nationals and that provide mutual legal assistance. Combating illicit smuggling in firearms. Combating illicit trafficking in women and children. Decreasing the production and distribution of child pornography. Combating corruption and improving the administration of justice in foreign criminal justice systems. Achieving the other goals and objectives of the strategy. 
In describing the prescribed measurement system, the International Crime Control Strategy compared it to a similar performance measurement system being created and implemented by the Office of National Drug Control Policy (ONDCP) to measure the effectiveness of the nation’s war on drugs. That system—ONDCP’s Performance Measures of Effectiveness (PME)—was established in February 1998 and is designed to implement the National Drug Control Strategy and measure the effectiveness of the nations’ drug control efforts through a framework of measurable goals, objectives, and targets. Additional details on these performance measures appear later in this appendix. During our review, we found that no progress has been made towards establishing the performance measurement system described in the International Crime Control Strategy. According to a National Security Council (NSC) official, the set of performance measures envisioned under the strategy was never implemented. Rather, the decision to devise and implement performance measures was left up to the individual departments and their components. In response to our inquiries, the NSC official indicated that he was unaware of any specific measures used by departments or their components to gauge the success of their efforts to combat international crime, especially in the context of the strategy. Generally speaking, however, the official noted that the concept of measuring performance is farther along in the area of counterdrug efforts than for any other types of international crime. In lieu of the performance measurement system envisioned in the International Crime Control Strategy, strategic and performance plans required by GPRA present an alternative approach for measuring the effectiveness of the federal government’s international crime control efforts. 
For example, we have previously reported that GPRA offers a framework for addressing crosscutting federal programs (such as international crime control) and could be used by the Congress, the Office of Management and Budget, and the agencies to ensure that such programs are being effectively coordinated. Furthermore, we noted that agencies could use the GPRA planning processes to consider whether agency goals are complementary and common performance measures are needed. Our recent reports on agencies’ GPRA reports and plans indicate that agencies are still challenged to develop meaningful goals, objectives, and indicators that adequately measure their own program results and effectiveness. Furthermore, despite the potential benefits, there has been no governmentwide effort by NSC or others to consolidate information from agencies’ GPRA plans into a single plan measuring the government’s overall results on international crime control. The following sections discuss in more detail how the strategic and performance plans of Justice, Treasury, and State address international crime and the extent to which these plans measure program performance. The Department of Justice’s 2000-2005 strategic plan identified 7 strategic goals and 34 related strategic objectives. For each of the strategic objectives, the plan further outlined various strategies for achieving the objectives. Among the goals most directly linked to international crime are goal 1 (“Enforcing federal criminal laws”) and goal 4 (“Administering immigration laws”). Although the plan does not discuss the International Crime Control Strategy or identify linkages between the two strategies, Justice highlighted several international crimes—including terrorism, worldwide drug trafficking, and immigration/border control—as key global challenges that it expected to focus its work on over the next 5 years. 
Table 9 illustrates how selected objectives and strategies in Justice’s strategic plan address similar overarching goals and implementing objectives in the International Crime Control Strategy. While Justice’s strategic plan provides overall direction and framework, its annual performance plan links the broadly stated goals and objectives with specific annual performance goals or targets. For example, for the strategic goal “Secure the land border, ports of entry, and coasts of the United States against illegal immigration,” Justice’s 2001 summary performance plan identified an annual goal to effectively control the border and thwart international alien and drug smuggling. This annual goal is to be measured by three performance indicators: (1) increased operational effectiveness within identified Southwest border zones, (2) interception of mala fide travelers and migrants (i.e., persons attempting illegal entry) en route to the United States, and (3) offshore prosecutions assisted by INS aided by fraudulent document detection. Regarding performance measurement, in June 2000, we reported our observations on key outcomes described in Justice’s GPRA performance report and plan. Two of these key outcomes were most directly related to international crime: (1) less drug- and gang-related violence and (2) U.S. borders secure from illegal immigration. Overall, we found that Justice’s performance plan did not contain sufficient performance goals and measures to objectively capture and describe performance results or measure progress towards desired outcomes. We reported that Justice’s performance measures were more output-oriented than outcome-oriented and did not capture all aspects of performance. Also, we noted that Justice had not stated performance goals in some instances. For the international crime-related outcomes, we reported mixed results. 
For example, the performance measures for drug- and gang-related violence did not cover the full range of issues that the goal covers, and the performance measures also tended to be more output-oriented than outcome-oriented. The Department of the Treasury’s 2000-2005 strategic plan identified 14 strategic goals and 40 related strategic objectives, grouped into 4 broad departmental missions. The goals most directly linked to international crime control—money laundering and financial crime, border control, and violent crime and terrorism—are associated with Treasury’s law enforcement mission. Although the plan does not discuss the International Crime Control Strategy or identify linkages between the two strategies, Treasury highlighted linkages between its own strategic plan and other national crime control strategies, such as the National Money Laundering Strategy and the National Drug Control Strategy. Table 10 illustrates how selected objectives and strategies in Treasury’s strategic plan address similar overarching goals and implementing objectives in the International Crime Control Strategy. Treasury’s strategic plan generally describes the department’s overall goals, objectives, and strategies. The plan also forms the baseline for the development of the Treasury components’ strategic and performance plans—which contain additional details on the specific performance goals and measures. For example, for the strategic objective “Deny the smuggling of illicit drugs,” there are two related bureau strategic goals: (1) reduce the amount of illegal drugs entering the United States and (2) effectively use asset forfeiture as a high-impact law enforcement sanction to punish and deter criminal activity. Progress towards these goals is to be measured via two performance goals—one to be reported by the Customs Service (seized drugs) and one by the Treasury Forfeiture Fund (seized property). 
Regarding performance measurement, in June 2000, we reported our observations on key outcomes described in Treasury’s GPRA performance report and plan. Among these key outcomes, two—for Customs and the Bureau of Alcohol, Tobacco and Firearms (ATF)—were most directly related to international crime: (1) reduced availability and/or use of illegal drugs and (2) criminals are denied access to firearms, and firearms-related crime is reduced. Overall, we reported that it was difficult to determine Treasury’s progress towards these two outcomes because performance measures were generally output measures. At the agency level, for example, we noted that Customs’ performance measures for illegal drugs had historically been output-related—such as, pounds of narcotics seized and number of drug seizures. Customs recognized that measures for some of its goals did not fully measure achievement of the goals and also indicated that it was working to develop outcome measures to better demonstrate the impact of its activities. Regarding firearms, we noted that ATF’s performance measures had also been primarily output-related (e.g., number of firearm traces, average trace response time, and number of persons trained). However, ATF’s performance plan contained a refined measure of “future crimes avoided,” as a way to measure progress towards reducing the risk of violent crime by estimating the number of crimes prevented through the incarceration of criminals and the elimination of crime gun sources. The Department of State’s 2000 strategic plan identified 16 strategic goals, which were grouped into 7 areas of national interest. For each strategic goal, the plan further outlined strategies for achieving the goal, as well as State’s specific responsibilities for each of the strategies. For those strategies that involved the cooperation of multiple agencies, the plan also identified the “lead” U.S. government agencies involved. 
Although the plan does not discuss the International Crime Control Strategy or identify linkages between the two strategies, it does identify how State’s strategic planning process has considered other national and agency strategic plans—such as the National Security Strategy and the National Drug Control Strategy. Table 11 illustrates how selected goals and strategies in State’s strategic plan address similar overarching goals and implementing objectives in the International Crime Control Strategy. In its strategic plan, State identified various indicators to measure performance towards each goal. For example, regarding the national security goal “Reduce the threat to the United States from weapons of mass destruction,” State identified 12 performance indicators. However, these indicators were not associated with any particular strategy, such as combating nuclear smuggling. Rather, the 12 indicators—taken as a whole—measure progress towards the overall goal of reducing the threat from weapons of mass destruction. Regarding performance measurement, in June 2000, we reported our observations on key outcomes described in State’s GPRA performance report and plan. Among these key outcomes, three were most directly related to international crime: (1) eliminate threats from weapons of mass destruction, (2) reduce international crime and availability and/or use of illegal drugs, and (3) reduce international terrorism. Overall, we found that State’s performance plan provided more detail on goals and measures than in previous years, but there were still some limitations. We reported that: Goals and measures were presented by individual bureau, making it difficult to obtain an agencywide perspective or sense of priority. Assessing performance against the many targets listed would be time- consuming and likely inconclusive about whether tangible results were achieved. 
There was no discussion about whether State coordinated with the numerous partner agencies listed in the plan. For the international crime-related outcomes noted above, we reported mixed results. For example, regarding one of the expected outcomes— ”Eliminated threats from ”—State’s performance plan covered a more complete range of activities than it planned to undertake to achieve the goal, as compared to prior years. However, some of the goals and measures did not provide valid indicators of progress. For example, one of the performance goals was to “be authoritative, relevant, and timely,” and measures were to “use technology and report on specific activities such as producing and maintaining web pages.” Regarding the response to international terrorism, the performance plan referred to using diplomatic pressure, enlisting cooperation, and developing new technologies as general ways to address this goal. However, training was the only performance goal reported for this desired outcome. Furthermore, while the plan more clearly identified goals and measures for this outcome compared to prior years, some goals and measures would be difficult to quantify, such as the status of U.S. policies in various international forums. In addition to the International Crime Control Strategy, the federal government has developed national crime control strategies that focus on specific types of international crimes. Like the International Crime Control Strategy, these strategies are interagency in nature and identify national goals or objectives. However, they are specifically focused on a particular type of crime or related set of crimes. This approach can also provide a framework—not unlike GPRA—for developing performance indicators for measuring the effectiveness and results of efforts to combat specific types of international crime. 
Even with this targeted approach, however, the government is still challenged to develop crime-specific strategies containing meaningful goals, objectives, and indicators that adequately measure program results and effectiveness. Probably the most well-known of the national crime control strategies is ONDCP’s National Drug Control Strategy, which identifies long-range national goals and measurable objectives for reducing drug use, drug availability, and the consequences of drug abuse and trafficking. The development of this strategy was mandated by the Congress in 1988, when it created ONDCP in order to set priorities, implement a national strategy, and certify federal drug control budgets for the nation’s war on drugs. The Congress later expanded ONDCP’s mandate to require the establishment of a drug control performance measurement system. In 1998, ONDCP established the PME system—to provide performance goals, objectives, and targets designed to implement the strategy and measure the effectiveness of the nation’s drug control efforts. The PME system also identified intermediate and long-term impact targets—for example, “Reduce the Availability of Illicit Drugs by 25 Percent in 2002”— as a way to measure the strategy’s overall impact on drug demand and supply, as well as the consequences of drug abuse and trafficking. ONDCP is required to report to the Congress annually on the implementation of the PME system. As noted above, the performance measurement system envisioned by the International Crime Control Strategy was compared with the PME system. Jointly developed by Treasury and Justice in 1999, the National Money Laundering Strategy outlined a comprehensive, integrated approach to combating money laundering in the United States and abroad, through both law enforcement and banking supervision. 
This strategy defined a framework of objectives and “action items” (performance goals) designed to advance four broad goals: strengthen domestic enforcement, enhance the engagement of banks and other financial institutions, provide more effective assistance to state and local governments, and bolster international cooperation. In 2000, an updated version of the strategy was released, setting forth a broad array of action items organized in a consolidated, governmentwide plan. Each action item included a designation of the government office/official accountable for implementation and for meeting specified goals and milestones. For example, under goal 1 of the strategy— “Strengthen domestic enforcement to disrupt the flow of illicit money”— one of the action items is to promote cooperation with the governments of Colombia, Aruba, Panama, and Venezuela to address black market currency exchanges. Treasury’s Deputy Assistant Secretary for Enforcement Policy is identified as the lead official responsible for establishing a multilateral task force to examine the issue and recommend policy options to the appropriate government officials. To address the national and international problem of terrorism, Justice developed the Five-Year Interagency Counter-Terrorism and Technology Crime Plan in 1998, with funds appropriated by the Congress for this purpose. The resulting document was intended to serve as a blueprint for coordinating national policy and operational capabilities to combat terrorism in the United States and against U.S. interests abroad. The plan involved the implementation of three strategies: (1) identify, investigate, and prosecute suspected terrorists; (2) ensure domestic preparedness; and (3) prevent and deter damage to the U.S. information infrastructure. 
As discussed in appendix IV, Presidential Decision Directive 62 (PDD-62) had previously created within NSC the Office of the National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism to oversee and report on the federal government’s efforts in such areas as counterterrorism, protection of critical infrastructures, and preparedness and consequence management for weapons of mass destruction. Despite this effort, questions remain about whether the counterterrorism plan functions as a true national strategy. A federally funded advisory panel, supported by research from the RAND Corporation, recently concluded that the plan could not be considered a national strategy because it did not synchronize existing government programs or identify future program priorities needed to achieve national objectives for domestic preparedness for terrorism. Among other things, the panel recommended creating a comprehensive strategy that was truly national in scope, appropriately resourced, and based on measurable performance objectives. We reached a similar conclusion in our recent report on the federal response to terrorism. We concluded that the counterterrorism plan, either taken alone or with other documents, did not constitute a fully developed national strategy. We further reiterated the need for a federal or national strategy that clearly identifies a desired outcome, provides a goal, and allows measurement of progress toward that goal. As discussed in appendix IV, in commenting on a draft of this report, Justice still considers the counterterrorism plan to be a baseline national strategy to combat terrorism. David P. Alexander, Seto J. Bagdoyan, Nancy A. Briggs, Philip D. Caramia, Christine F. Davis, James M. Fields, Anthony L. Hill, and Bethany L. Letiecq also made key contributions to this report.
International crimes, such as drugs and arms trafficking, terrorism, money laundering, and public corruption, transcend national borders and threaten global security and stability. The National Security Council (NSC) told GAO that international crime and the framework for the U.S. response are under review by the new administration. The extent of International crime is growing, but measuring its true extent is difficult. Several efforts have been made to gauge the threat posed to the United States and other countries by international crime. The 1999 threat assessment was classified, but a published version of the 2000 assessment divided the threat into the following five broad categories: (1) terrorism and drug trafficking; (2) illegal immigration, trafficking of women and children, and environmental crimes; (3) illicit transfer or trafficking of products across international borders; (4) economic trade crimes; and (5) financial crimes. NSC identified 34 federal entities with significant roles in fighting international crime. These included the Department of Justice, Treasury, and State, and the U.S. Agency for International Development. The efforts to combat public corruption internationally involves two strategies: the elimination of bribes in transnational business activities, such as government contracting, and the implementation of law assistance, which focuses on U.S. support for legal, judicial, and law enforcement reform efforts by foreign governments. Much of the technical assistance that the U.S. provides to other nations for fighting international crime involves training, particularly training at law enforcement academies established abroad. There are no standard measures of effectiveness to assess the federal government's overall efforts to address international crime. Justice's, Treasury's, and State's plans describe their efforts to combat specific types of crime, along with the performance measures to be tracked. 
In some cases, however, these measures do not adequately address effectiveness.
The size and nature of the Medicare program make HCFA unique in authority and responsibility among health care payers. Fee-for-service Medicare serves about 33 million beneficiaries and processes a high volume of claims—an estimated 900 million in fiscal year 1997—from hundreds of thousands of providers, such as physicians, hospitals, skilled nursing facilities, home health agencies, and medical equipment suppliers. HCFA is also responsible for paying and monitoring more than 400 managed care health plans that serve more than 5 million beneficiaries. Enrollment in these plans has been growing by about 85,000 beneficiaries monthly. The Medicare statute divides benefits into two parts: (1) “hospital insurance,” or part A, which covers inpatient hospital, skilled nursing facility, hospice, and certain home health care services, and (2) “supplementary medical insurance,” or part B, which covers physician and outpatient hospital services, diagnostic tests, and ambulance and other medical services and supplies. In fiscal year 1997, part A covered an estimated 39 million aged and disabled beneficiaries, while a slightly smaller number were covered by part B, which requires payment of a monthly premium. currently consists mostly of risk contract health maintenance organizations (HMO). Medicare pays these HMOs a monthly amount, fixed in advance, for all the services provided to each beneficiary enrolled. HCFA, an agency within HHS, has slightly less than 4,000 full-time employees, 65 percent of whom work in the agency’s headquarters offices; the rest work in the agency’s 10 regional offices across the country. In addition to the agency’s workforce, HCFA oversees more than 60 claims processing contractors that are insurance companies—like Blue Cross and Blue Shield plans, Mutual of Omaha, and CIGNA. In fiscal year 1997, the contractors employed an estimated 22,200 people to perform Medicare claims processing and review functions. 
Two recent acts grant HCFA substantial authority and responsibility to reform Medicare. The Health Insurance Portability and Accountability Act of 1996 (HIPAA), P.L. 104-191, provides the opportunity to enhance Medicare’s anti-fraud-and-abuse activities. The Balanced Budget Act of 1997 (BBA), P.L. 105-33, introduces new health plan options and major payment reforms. In correspondence to this Subcommittee last October, we noted that these two pieces of legislation addressed in large measure our concerns and those of the HHS Inspector General regarding the tools needed to combat fraud and abuse. They also address many of the weaknesses discussed in our High-Risk Series report on Medicare. Among other things, HIPAA allows HCFA to select a contractor network to perform payment safeguard functions while avoiding conflicts of interest. HIPAA also adds new civil and criminal penalties to heretofore little-used enforcement powers. BBA provides for a dramatic expansion of health plan choices available to Medicare beneficiaries and makes reforms to payment methods in traditional fee-for-service Medicare and managed care. Under the act’s new Medicare+Choice program, beneficiaries will have new health plan options, including preferred provider organizations (PPO), provider sponsored organizations (PSO), and private fee-for-service plans. Medicare+Choice introduces new consumer information and protection provisions, including a requirement to disseminate comparative information on Medicare+Choice plans in beneficiaries’ communities and a requirement that all Medicare+Choice plans obtain external review from an independent quality assurance organization. These provisions address problems we have worked to correct with this committee and others in the Congress. BBA also provided for revamping many of Medicare’s decades-old payment systems to contain the unbridled growth in certain program components. 
Specifically, the act mandated prospective payment systems for services provided by about 1,100 inpatient rehabilitation facilities, 14,000 skilled nursing facilities, 5,000 hospital outpatient departments, and 8,900 home health agencies. In addition, it made changes to the payment methods for hospitals, including payments for direct and indirect medical education costs. It also adjusted fee schedule payments for physicians and durable medical equipment and authorized the conversion of the remaining reasonable charge payment systems to fee schedules. Finally, the act granted the authority to conduct demonstrations on the cost-effectiveness of purchasing items and services through competitive bids from suppliers and providers. While legislative reforms are dramatically reshaping Medicare, other changes are occurring, compounding already difficult management challenges. For example, HCFA is rethinking its strategy to develop, modernize, or otherwise improve the agency’s multiple automated claims processing and other information systems. This will involve preparing systems for the year 2000, repairing the deteriorating managed care enrollment systems, and making the necessary modifications to existing systems. HCFA plans to make these changes as an interim measure until, consistent with the Information Technology Management Reform Act of 1996 (P.L. 104-106), comprehensive reengineering can take place, such as making claims processing systems and payment mechanisms more efficient, programming BBA payment changes, and modernizing the anti-fraud-and-abuse system software. HCFA is also confronting transition problems resulting from the recent loss of large-volume claims processing contractors and the need for remaining contractors to absorb the workload. Finally, HCFA recently restructured its organizational units to better focus on its mission and is experiencing the kind of disruptions common to organizational transitions. 
Against this backdrop, the themes that emerged from our individual interviews and focus groups with HCFA managers centered on (1) distribution of agency resources, (2) need for specialized expertise, (3) loss of institutional experience, and (4) reorganization issues. “Robbing Peter to pay Paul” was the expression used to characterize one of the major themes from our focus groups. Specifically, managers were concerned that because of the concentrated efforts to implement BBA and solve computer problems that could arise in the year 2000, the quality of other work might be compromised or tasks might be neglected altogether. However, managers also noted that whereas some BBA-related tasks are completely new—such as conducting an open enrollment period for Medicare+Choice plans—and therefore add to the workload, others merely formalize work that was already underway but impose deadlines for completion, such as developing prospective payment methods for reimbursing several types of health care providers. For example, managers told us that one unit’s complement of staff members dedicated to contractor oversight is currently down to two; the others, they said, had been reassigned to work on managed care issues. This concerns us in light of our work on Medicare program management. Over the past several years, we have reported that HCFA has not adequately ensured that contractors are paying only medically necessary or otherwise appropriate claims. Similarly, the HHS Inspector General’s fiscal year 1996 financial audit found contractor oversight weaknesses. For example, some contractors selected for audit could not readily verify total Medicare expenditures, including paid claim amounts, to ensure that amounts were accurate, supported, and properly classified; did not adequately document accounts receivable; and did not have adequate internal controls over the receipt and disbursement of cash. 
Further, HCFA does not have a method for estimating the amount of improper Medicare payments; for fiscal year 1996, the Inspector General estimated that HCFA made about $23 billion in inappropriate payments. Managers also expressed a common concern about the staff’s mix and level of skills. They noted that HCFA’s traditional approach of hiring generalist staff and training them largely on the job is no longer well suited to the agency’s need to implement recent reforms expeditiously. Instead, managers are beginning to identify the need for staff with specialized technical expertise, such as computer system analysts, survey statisticians, data analysts, market researchers, information management specialists, managed care experts, and health educators. In our discussions, several managers placed “appropriate skill sets” at the top of their wish lists. As an illustration, the Medicare+Choice program introduces new health plan types and requires the dissemination of information about the plans to beneficiaries in 1998. Called the Medicare+Choice Information Fair, this nationwide educational and publicity campaign will be the first effort of its kind for HCFA. Managers were concerned that staff without prior experience will need to pull together information that describes and evaluates the merits of various plans. Managers also cited the need for expertise with the agency’s data systems, as well as for specialists in contracting, facilities management, and telecommunications. Many senior and midlevel managers and experienced technical staff have retired in recent years or are eligible to retire soon. Almost 40 percent of the organization has turned over in the past 5 years. Many were said to have spent their entire careers focused on a particular aspect of the Medicare program. A common concern in our discussions was the erosion of the experienced staff needed to perform a variety of tasks, such as writing regulations and developing payment systems. 
Managers cited the loss of experienced staff as a problem for developing and implementing the various prospective payment systems mandated by BBA. They also noted that developing one new payment system would have been manageable, but losses of expert staff make it difficult to implement multiple new payment systems concurrently. For example, experienced staff are needed to perform such technical tasks as those we mentioned in our October statement before this Subcommittee, including collecting reliable cost and utilization data to compute the new prospective payment rates, developing case mix adjusters, auditing cost reports to avoid incorporating inflated costs into the base rates, and monitoring to guard against providers’ skimping on services to increase profits. Our focus group participants emphasized that it will be difficult for HCFA to replace its experienced staff in the short term. Although HCFA is planning to hire new people, the time typically needed for recruiting, hiring, and orienting new employees is considerable. Managers commented that new employees, although highly educated and motivated, sometimes need extensive on-the-job training to replace lost expertise. In July 1997, HCFA restructured its entire organization. The new design reflected the agency’s intent to, among other things, (1) combine activities to redirect additional resources to the growing managed care side of the program, (2) acknowledge a shift from HCFA’s traditional role as claims payer to a more active role as purchaser of health care services, and (3) establish three components focused on beneficiaries, health plans and providers, and Medicaid and other activities conducted at the state level. It also established technical and support offices to assist these components. (See HCFA’s organization chart in app. I.) 
In announcing the planned reorganization, the Administrator explained that as Medicare has evolved over the years, new programs and projects were layered onto existing structures. Over time, he noted, this became cumbersome and confusing. Many managers we spoke with considered the reorganization to be theoretically sound. Some also told us that it was long overdue, because HCFA’s structure encouraged work on narrow issues within self-contained groups—an approach that did not benefit from the expertise existing across the agency. However, focus group participants and high-level officials generally agreed that the timing of the reorganization’s implementation is unfortunate. They explained that they are currently facing full agendas with tight deadlines, which add to the stresses associated with any organizational change. Managers described their difficulties in establishing new communication and coordination links within units as well as across the agency. For some, new efforts to coordinate have proved time-consuming to the point of being counterproductive. Managers commented that sign-off sheets formalizing coordination have enough names to take on the appearance of a staff roster. They noted that the situation was particularly acute in light of the fact that people have not yet moved to the actual location of their new units. Managers in one division said staff were scattered in as many as seven places around HCFA’s building. HCFA now hopes to have staff relocated by late spring, although this plan appears optimistic. We observed that managers appeared to be clear on top management’s expectations for completing BBA-related activities and for making sure that contractors’ claims processing systems would comply with the millennium changes. They were less certain, however, about the agency’s strategy for meeting other mission-related work. 
HCFA lacks a comprehensive picture of its workload that would enable the agency’s senior decisionmakers to consider whether resources are, in fact, adequate or properly distributed and which activities could be at risk of being neglected. One example that came to our attention concerned the legislative mandates for reporting to the Congress on specific activities and programs. Currently, neither top management nor the Office of Legislation compiles a list of reports due and their deadlines. Unit managers are concerned because, although they are aware that certain reports for which they are responsible will be late, there is no systematic way to keep top management informed. Top management, in turn, cannot decide to heighten the priority for a particular report or develop a strategy to mitigate the consequences of others being late. The illustration above and our discussions with agency officials suggest that while HCFA may be ready to assert its BBA-related resource needs, it is not likely to be in a position to adequately justify the resources it seeks to carry out its other Medicare program objectives. This observation calls to mind our July 1997 report on the adequacy of HHS’s draft strategic plan under the Government Performance and Results Act. We noted that the plan failed to address certain major management challenges, including Medicare-related problems. Specifically, the plan did not address long-standing concerns about Medicare’s existing claims processing systems or HCFA’s efforts to acquire a billion-dollar integrated database system. In addition, it did not address the issue of information security that was identified in the fiscal year 1996 financial statement audit of HCFA, specifying that systems weaknesses created the risk of unauthorized access to sensitive medical history and claims data. HCFA is an agency facing many challenges. Even before BBA made major changes, Medicare was a vast and complex program. 
Volumes of reports by us and others demonstrate, in numerous areas, HCFA’s need to address program vulnerabilities. Because of the risks associated with a program of Medicare’s magnitude, the need for HCFA to be vigilant cannot be overstated. Agency managers report that they are struggling to carry out Medicare’s numerous and challenging activities. In addition, they assert that the loss of experienced staff has further diminished HCFA’s capacity. Nevertheless, senior managers do not appear to be adequately informed about the status of the full range of Medicare activities or associated resource needs. Under these circumstances, HCFA seems to be focusing most of its energy on important deadlines and pressures, but other critical functions may be receiving back-burner attention. We have work underway to assess the status of HCFA’s efforts to implement aspects of HIPAA and BBA and modernize the agency’s information systems. We will also continue to monitor the progress of HCFA’s reorganization efforts. Mr. Chairman, this concludes my statement. I will be happy to answer your questions.

Medicare: Effective Implementation of New Legislation Is Key to Reducing Fraud and Abuse (GAO/HEHS-98-59R, Dec. 3, 1997).

Medicare Fraud and Abuse: Summary and Analysis of Reforms in the Health Insurance Portability and Accountability Act of 1996 and the Balanced Budget Act of 1997 (GAO/HEHS-98-18R, Oct. 9, 1997) and related testimony entitled Recent Legislation to Minimize Fraud and Abuse Requires Effective Implementation (GAO/T-HEHS-98-9, Oct. 9, 1997).

Medicare Automated Systems: Weaknesses in Managing Information Technology Hinder Fight Against Fraud and Abuse (GAO/T-AIMD-97-176, Sept. 29, 1997).

Medicare Home Health Agencies: Certification Process Is Ineffective in Excluding Problem Agencies (GAO/T-HEHS-97-180, July 28, 1997).

Medicare: Need to Hold Home Health Agencies More Accountable for Inappropriate Billings (GAO/HEHS-97-108, June 13, 1997). 
Medicare Managed Care: HMO Rates, Other Factors Create Uneven Availability of Benefits (GAO/HEHS-97-133, May 19, 1997).

Medicare (GAO/HR-97-10) and related testimony entitled Medicare: Inherent Program Risks and Management Challenges Require Continued Federal Attention (GAO/T-HEHS-97-89, Mar. 4, 1997).

Medicare: HCFA Should Release Data to Aid Consumers, Prompt Better HMO Performance (GAO/HEHS-97-23, Oct. 22, 1996).

Medicare: Millions Can Be Saved by Screening Claims for Overused Services (GAO/HEHS-96-49, Jan. 30, 1996).

Medicare: Excessive Payments for Medical Supplies Continue Despite Improvements (GAO/HEHS-95-171, Aug. 8, 1995).

Medicare: Increased HMO Oversight Could Improve Quality and Access to Care (GAO/HEHS-95-155, Aug. 3, 1995).

Medicare: Inadequate Review of Claims Payments Limits Ability to Control Spending (GAO/HEHS-94-42, Apr. 28, 1994).
Pursuant to a congressional request, GAO discussed the Health Care Financing Administration's (HCFA) ability to meet growing program management challenges, focusing on: (1) HCFA's new authorities under recent Medicare legislation; (2) HCFA managers' views on the agency's capacity to carry out various Medicare-related functions; and (3) the actions HCFA needs to take to accomplish its objectives over the next several years. GAO noted that: (1) substantial program growth and greater responsibilities appear to be outstripping HCFA's capacity to manage its existing workload; (2) legislative reforms have increased HCFA's authority to manage the Medicare program; (3) simultaneously, however, other factors have increased the challenges HCFA faces, including the need to make year 2000 computer adjustments and develop a new, comprehensive information management strategy; manage transitions in its network of claims processing contractors; and implement a major agency reorganization; (4) in addition, officials report that the expertise to carry out HCFA's new functions is not yet in place and that HCFA has experienced a loss of institutional knowledge through attrition; (5) in this environment, agency managers are concerned that some of their responsibilities might be compromised or neglected altogether because of higher-priority work; (6) HCFA's approach for dealing with its considerable workload is incomplete; (7) heretofore, the agency lacked an approach--consistent with the requirement of the Government Performance and Results Act of 1993 to develop a strategic plan--that specified the full range of program objectives to be accomplished; (8) HCFA has developed a schedule for responding to recent legislative reforms but is still in the process of detailing the staffing and skill levels required to meet reform implementation deadlines; and (9) while addressing new mandates, the agency also needs to specify how it will continue to carry out its ongoing critical functions.
DFAS, as DOD’s central accounting agency, is responsible for recording and processing accounting transactions; paying vendors, contractors, and military and civilian employees; preparing reports used by DOD managers and by the Congress; and preparing DOD-wide and service-specific financial statements required by the Chief Financial Officers Act. Organizationally, DFAS is under the direction of the Under Secretary of Defense (Comptroller). Table 1 illustrates the enormous scope and importance of DFAS’s reported fiscal year 2002 financial operations. DFAS’s fiscal year 2003 IT budgetary request was approximately $494 million. Of that amount, $353 million relates to the operation and maintenance of existing DFAS systems and the remaining $141 million is for the modernization of systems. The purpose of each DFAS project we reviewed is highlighted below. DFAS Corporate Database/DFAS Corporate Warehouse (DCD/DCW). DCD and DCW were originally separate initiatives. DCD was initiated in October 1998 and was to be the single DFAS database, meaning it was to contain all DOD financial information required by DFAS systems and would be the central point for all shared data within DFAS. To accomplish this goal, DCD would crosswalk detailed transaction data from nonstandard finance and feeder systems into a standard format. Further, once the department implemented standard systems, the need to perform these crosswalks would be eliminated. In February 2001, the project’s scope was revised after DFAS realized that crosswalks of detailed transaction data were cumbersome and cost-prohibitive. DFAS is planning to crosswalk detailed transaction data only when information from multiple systems must be aggregated to satisfy a cross-service need such as the working capital fund activities. DCW was initiated in July 2000 to provide a historical database to store and manage official DFAS information for analysis and generation of operational reports and queries. 
In November 2000, the DFAS CIO combined DCD/DCW into one program. In March 2001, DCD/DCW was designated as a major automated information system. Defense Procurement Payment System (DPPS). DFAS determined the need for DPPS in April 1995. DPPS was intended to be the standard automated information system for authorizing contract and vendor payments and for addressing deficiencies associated with overpayments, negative unliquidated obligations, and unmatched disbursements—all of which are long-standing problems in DOD. DPPS also was to incrementally replace eight contract and vendor systems. In October 1995, the DFAS Director approved proceeding with defining and evaluating the feasibility of alternative concepts and assessing the relative merits of these concepts. In November 1996, the Office of the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence)—DOD’s CIO—designated DPPS a major automated information system. DFAS awarded a contract in June 1998 for the acquisition of a system that was intended to address DOD’s contract and vendor pay deficiencies. Defense Standard Disbursing System (DSDS). Disbursing activities for DOD are largely accomplished through systems that were designed 15-20 years ago. In 1997, DFAS launched DSDS to be the single, standard DFAS automated information system for collecting, processing, recording, and reporting disbursement data and transactions for the military services and defense agencies. These disbursing functions are currently being provided by multiple automated information systems and manual activities at various DFAS locations. Defense Departmental Reporting System (DDRS). In April 1997, DFAS initiated DDRS to be the standardized departmental reporting system. DDRS has two phases. The first phase—DDRS-AFS (Audited Financial Statements)—is intended to be a departmentwide financial reporting system. The second phase—DDRS-Budgetary—is intended to establish a departmentwide budgetary reporting system. 
Among other things, DDRS is intended to reduce the number of departmental reporting systems and standardize departmental general ledger processes. These four projects are part of the DFAS Corporate Information Infrastructure (DCII) program. According to DFAS, DCII is intended to facilitate cross-functional, integrated processes; promote standardized data and reporting; facilitate standardized business practices; reduce cost of operations; and provide timely information for decision making. Figure 1 depicts a high-level view of the interrelationships among these four system projects. DOD and DFAS have an established acquisition management and oversight process for acquiring, operating, and maintaining business systems. Among other things, this process requires project managers to provide cost, schedule, and performance data to the DFAS Chief Information Officers/Business Integration Executive (CIO/BIE) Council—DFAS’s IT investment board—prior to scheduled milestone reviews. These milestones are intended to be decision points for determining whether a project should continue in the current phase of the system life-cycle, proceed to the next phase, be modified, or be terminated. The results of these reviews are to be set forth in a system decision memorandum which is to be signed by the milestone decision authority. The milestone decision authority for DSDS and DDRS is the Director, DFAS. The DOD CIO is the milestone decision authority for DCD/DCW and DPPS. We and the DOD Inspector General have continued to report on a variety of long-standing management problems for modernizing DOD’s IT systems. Three recent system endeavors that have fallen short of their intended goals illustrate these problems. They are the Standard Procurement System, the Defense Travel System, and the Defense Joint Accounting System. These efforts were aimed at improving the department’s financial management and related business operations. 
Significant resources—in terms of dollars, time, and people—have been invested in these three efforts. Standard Procurement System (SPS). In November 1994, DOD began the SPS program to acquire and deploy a single automated system to perform all contract management-related functions within DOD’s procurement process for all DOD organizations and activities. The laudable goal of SPS was to replace 76 existing procurement systems with a single departmental system. DOD estimated that SPS had a life-cycle cost of approximately $3 billion over a 10-year period. According to DOD, SPS was to support about 43,000 users at over 1,000 sites worldwide and was to interface with key financial management functions, such as payment processing. Additionally, SPS was intended to replace the contract administration functions currently performed by the Mechanization of Contract Administration Services, a system implemented in 1968. Our July 2001 report and February 2002 testimony identified weaknesses in the department’s management of its investment in SPS. Specifically, the department had not economically justified its investment in the program because its latest (January 2000) analysis of costs and benefits was not credible; indeed, that analysis showed that the system, as defined, was not a cost-beneficial investment. The department also had not effectively addressed the inherent risks associated with investing in a program as large and lengthy as SPS because it had not divided the program into incremental investment decisions that coincided with incremental releases of system capabilities. Finally, although the department committed to fully implementing the system by March 31, 2000, this target date had slipped by over 3 ½ years to September 30, 2003, and program officials have recently stated that this date will also not be met. Defense Travel System (DTS). 
In July 2002, the DOD Inspector General raised concerns that DTS remained a program at high risk of not being an effective solution in streamlining the DOD travel management process. The report stated that “The Defense Travel System was being substantially developed without the requisite requirements, cost, performance, and schedule documents and analyses needed as the foundation for assessing the effectiveness of the system and its return on investment.” The report further noted there was increased risk that the $114.8 million and 6 years of effort already invested would not fully realize all goals to reengineer temporary duty travel, make better use of IT, and provide an integrated travel system. Additionally, the DOD Inspector General reported that DTS was to cost approximately $491.9 million (approximately 87 percent more than the original contract cost of $263.7 million) and DOD estimates that deployment will not be completed until fiscal year 2006, approximately 4 years behind schedule. Defense Joint Accounting System (DJAS). In 1997, DOD selected DJAS to be one of three general fund accounting systems. The other two general fund systems were the Standard Accounting and Reporting System and the Standard Accounting and Budgetary Reporting System. As originally envisioned, DJAS would perform the accounting for the Army and the Air Force as well as the DOD transportation and security assistance areas. Subsequently, in February 1998, DFAS decided that the Air Force could withdraw from using DJAS, because either the Air Force processes or the DJAS processes would need significant reengineering to permit use of a joint accounting system. As a result, the Air Force started its own general fund accounting system—General Fund and Finance System—which resulted in the development of a fourth general fund accounting system. 
In June 2000, the DOD Inspector General reported that DFAS was developing DJAS at an estimated life-cycle cost of about $700 million without demonstrating that the program was the most cost-effective alternative for providing a portion of DOD’s general fund accounting. More specifically, the report stated that DFAS had not developed a complete or fully supportable feasibility study, analysis of alternatives, economic analysis, acquisition program baseline, or performance measures, and had not reengineered business processes. As part of its ongoing business systems modernization program, and consistent with our past recommendation, DOD is creating a repository of information about its existing systems environment. As of October 2002, DOD reported that its current business systems environment consisted of 1,731 systems and system acquisition projects. In particular, DOD reported that it had 374 systems to support civilian and military personnel matters, 335 systems to perform finance and accounting functions, and 310 systems that produce information for management decision making. Table 2 presents the composition of DOD business systems by functional area. As we have previously reported, these numerous systems have evolved into the overly complex and error-prone operation that exists today, including (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces that combine to exacerbate problems with data integrity. The department has recognized the uncontrolled proliferation of systems and the need to eliminate as many systems as possible and integrate and standardize those that remain. In fact, three of the four DFAS projects we reviewed were intended to reduce the number of systems or eliminate a portion of different systems that perform the same function. 
For example, DPPS was intended to consolidate eight contract and vendor pay systems; DDRS is intended to reduce the number of departmental reporting systems from seven to one; and DSDS is intended to eliminate four different disbursing systems. Similarly, DTS is intended to be the DOD-wide travel system. According to data reported by DOD, currently there are 32 travel systems operating within the department. For fiscal year 2003, DOD has requested approximately $26 billion in IT funding to support a wide range of military operations as well as DOD business system operations. As shown in figure 2, the $26 billion is spread across the military services and defense agencies. Each receives its own funding for IT investments. The $26 billion supports three categories of IT—business systems, business systems infrastructure, and national security systems (NSS)—the first two of which comprise the 1,731 business systems. DOD defines these three categories as follows: Business systems—used to record the events associated with DOD’s functional areas. Such areas include finance, logistics, personnel, and transportation. Business systems infrastructure—represents the costs associated with the operations of the department’s business systems. Such costs would include transmission lines, network management, and information security. National security systems (NSS)—intelligence systems, cryptologic activities related to national security, military command and control systems, and equipment that is an integral part of a weapon or weapons system, or is critical to the direct fulfillment of military or intelligence missions. As shown in table 3, approximately $18 billion—the nearly $5.2 billion for business systems and the $12.8 billion for business systems infrastructure—relates to the operation, maintenance, and modernization of DOD’s 1,731 business systems. 
As we have reported, while DOD plans to invest billions of dollars in modernizing its financial management and other business support systems, it does not yet have an overall blueprint—or enterprise architecture—in place to guide and direct these investments. Our review of practices at leading organizations showed they were able to provide reasonable assurance that their business systems addressed corporate—rather than individual business unit—objectives by using enterprise architectures to guide and constrain investments. Consistent with our recommendation, DOD is now working to develop a financial management enterprise architecture, which is a positive step. Further, Section 1004 of the National Defense Authorization Act for Fiscal Year 2003 directs DOD to develop an enterprise architecture not later than May 1, 2003, accompanied by a transition plan that delineates how the architecture will be implemented. The act also directs that we provide an assessment to the congressional defense committees as to whether DOD has complied with the provisions of Section 1004. DOD management and oversight authorities for the four case study projects are DFAS, the DOD Comptroller, and the DOD CIO. They permitted each project to proceed despite the absence of the requisite analysis to demonstrate that the projects would produce value commensurate with the costs being incurred. For example, an economic analysis has yet to be prepared for DCD/DCW, and the other three projects did not have economic analyses that reflected the fact that project costs, schedules, and/or expected benefits had changed materially. Table 4 highlights these cost increases and schedule delays. In the case of DPPS, the estimated costs had increased by $274 million and the schedule had slipped by almost 4 years. In December 2002, following our discussions with DOD Comptroller officials, the DOD Comptroller terminated DPPS after 7 years of effort and an investment of over $126 million. 
In making this decision, the DOD Comptroller noted that the project was being terminated due to poor program performance and increasing costs. The Clinger-Cohen Act of 1996 and Office of Management and Budget (OMB) guidance provide an effective framework for IT investment management. They emphasize the need to have investment management processes and information to help ensure that IT projects are being implemented at acceptable costs and within reasonable and expected time frames and that they are contributing to tangible, observable improvements in mission performance. DOD policy also reflects these investment principles by requiring that investments be justified by an economic analysis. More specifically, the policy states that the economic analysis is to reflect both the life-cycle cost and benefit estimates, including a return-on-investment calculation, to demonstrate that the proposed investment is economically justified before it is made. After 4 years of effort and an investment of approximately $93 million, DOD has yet to show that its investment in DCD/DCW is economically justified and will result in tangible improvement in DOD financial management operations. Consistent with the Clinger-Cohen Act, DOD and DFAS systems acquisition guidance requires that certain documentation be prepared at each milestone within the system life cycle. This documentation is intended to provide relevant information for management oversight and for decisions as to whether the investment of resources is cost beneficial. A key piece of information—the economic analysis—was never completed for the DCD/DCW project. In May 2000, the Director, DFAS, granted approval to continue with development of DCD with a condition that a cost-benefit analysis be completed by June 2000. DFAS completed a draft cost-benefit analysis for DCD in October 2000. This document was never finalized, and in November 2000, DCD and DCW were combined into one program. 
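The life-cycle return-on-investment test that DOD policy requires can be sketched in a few lines. The net-present-value approach below is a common method consistent with OMB guidance (e.g., Circular A-94); the cash flows and discount rate are hypothetical illustrations, not figures from any DFAS analysis.

```python
# Illustrative sketch of a life-cycle ROI calculation; all figures are
# hypothetical and do not reproduce any actual DFAS economic analysis.

def npv(cashflows, rate):
    """Net present value of year-indexed cash flows (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

life_cycle_costs = [40, 30, 20, 20, 20]   # $ millions per year (hypothetical)
expected_benefits = [0, 10, 40, 60, 60]   # $ millions per year (hypothetical)
discount_rate = 0.07                      # illustrative real discount rate

pv_costs = npv(life_cycle_costs, discount_rate)
pv_benefits = npv(expected_benefits, discount_rate)
roi = (pv_benefits - pv_costs) / pv_costs

# Under this test, the investment is economically justified only if ROI > 0,
# i.e., discounted benefits exceed discounted life-cycle costs.
print(f"PV costs {pv_costs:.1f}, PV benefits {pv_benefits:.1f}, ROI {roi:.1%}")
```

The point of requiring such an analysis before each milestone is that the comparison changes as costs, schedules, and expected benefits change, which is why an analysis that is never finalized, or never updated, cannot support a continued-investment decision.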
Since that time, DCD/DCW has continued without a valid, well-supported economic justification for continued investment. DCD project management officials stated that the economic analysis has not been finalized because they were unable to agree on how to compute the return on investment and demonstrate that benefits exceeded costs. In March 2001, DCD/DCW was designated a Major Automated Information System, and as such, DOD’s Office of Program Analysis and Evaluation (PA&E) is required to assess the economic analysis and provide any recommendations to the DOD CIO. However, after approximately 2 years, the economic analysis still has not been developed, and PA&E officials stated that they did not anticipate receiving the economic analysis until May 2003. At the same time, as highlighted in figure 3, the project’s cost has continued to grow and its schedule to slip over the years. Additionally, the planned functionality of DCD has been drastically reduced since the original concept was set forth. Originally, DCD was to contain all DOD financial information required by DFAS systems, making it the central point for all shared data within DFAS. To accomplish this goal, DCD was to crosswalk detailed transactions from nonstandard finance and feeder systems into a standard format, pending the acquisition and implementation of standard feeder systems. In February 2001, the scope of the DCD project was revised after DFAS realized, through testing of Air Force detailed transactions from feeder systems, that the planned crosswalks were cumbersome and cost prohibitive. Currently, DFAS is planning to crosswalk detailed transaction data only when information from multiple systems must be aggregated to satisfy a cross-service need such as the working capital fund activities. This will result in the originally envisioned capability not being provided. 
Additionally, DCD/DCW will continue to rely on the error-plagued data in the feeder systems and will not produce financial records that are traceable to transaction-level data. According to the DOD Inspector General, DCD was a high-risk effort because there was no assurance that DCD and other financial management systems would standardize DOD business processes; reduce the number of finance, accounting, and feeder systems; reduce costs; and produce accurate and auditable financial information. Until the economic analysis is finalized, DOD does not know if its investment in DCD/DCW is justified, and the decision to move to the next milestone will continue to be delayed. Nevertheless, DOD continues to spend funds to perform tasks in anticipation of milestone approval being received. In fiscal year 2002, according to DFAS officials, approximately $36 million was spent on DCD/DCW. DOD had developed an economic analysis for each of the remaining three projects. However, these analyses had not been updated to reflect schedule delays, cost increases, and changes in scope that have occurred—each of which affects the projected benefits upon which the original justification was based. Nevertheless, as shown in table 5, investment in each project continues. The investment of resources in a system project should be conditional upon analytical justification that the proposed investment will produce commensurate value. As called for in OMB guidance, analyses of investment costs, benefits, and risks should be (1) updated throughout a project’s life cycle to reflect material changes in project scope and estimates and (2) used as a basis for ongoing investment selection and control decisions. To do less presents the risk of continued investment in projects on the basis of outdated and invalid economic justification. In the case of DPPS, PA&E questioned the validity of the economic analysis developed by DFAS. 
Since DPPS is classified as a major automated information system, the economic analysis is to be reviewed by PA&E. In its May 1998 assessment of the economic analysis, PA&E questioned areas such as the validity of the estimated savings and the ability to implement DPPS within the original estimated cost and schedule. According to DOD officials, these issues were resolved, but they could not provide any documentation to substantiate their position. The DOD CIO subsequently granted permission to continue the project. Over the years, as shown in figure 4, the DPPS effort has been marked by significant increases in cost and schedule delays. The original full operational capability date of April 2002 slipped to December 2005—a delay of almost 4 years—with the estimated cost almost doubling to $552 million. In December 2002, following our discussion with DOD Comptroller officials of DPPS cost increases and schedule slippages, the DOD Comptroller terminated DPPS. In making this decision, the DOD Comptroller noted that the project was being terminated due to poor program performance and increasing costs. With regard to DDRS, the economic analysis used to justify this initiative was developed in October 1998—over 4 years ago. At that time, it was estimated that DDRS would cost $111 million and be fully operational by April 2000. However, based upon information provided by DFAS, and as shown in figure 5, DDRS has experienced cost increases and schedule delays. Yet the economic analysis has not been updated to reflect the known changes in the project’s costs and schedule. Moreover, the intended capability of DDRS as originally envisioned has been reduced. For example, DDRS is no longer intended to provide the capability to build an audit trail so that financial data can be tracked back to its transaction-based support, as originally planned. 
The Federal Financial Management Improvement Act of 1996 requires that agency financial management systems comply with federal financial management systems requirements, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. Systems meeting these requirements should be able to produce auditable financial statements and otherwise have audit trail capability. However, DDRS system users will have to rely on the audit trail capabilities of feeder systems in order to trace individual transactions to their source documents. As we have previously reported, the data from the feeder systems, which are outside the control of DFAS and provide approximately 80 percent of the data that DOD needs for financial reporting purposes, are not reliable. Additionally, until DCD is operational, DDRS will be receiving data from the feeder systems in order to prepare the department’s financial reports on the results of its operations. Therefore, DOD’s financial reports produced by DDRS will (1) continue to be incomplete and inaccurate and thus not useful for decision-making purposes and (2) remain unable to withstand the scrutiny of a financial audit. For DSDS, an economic analysis was prepared in September 2000. However, it has not been updated to reflect material changes in the project. For example, as shown in figure 6, the full operational capability (FOC) date at the time the economic analysis was prepared was February 2003. However, according to information provided by DFAS, the current FOC date is December 2005—a schedule slippage of almost 3 years. Such delays postpone the delivery of promised benefits. DFAS has stated that the cost information is being updated to support a Milestone C decision, which DFAS anticipates will occur in early fiscal year 2004. Additionally, DSDS’s delivery of promised benefits depends upon DCD/DCW being implemented on time. 
However, as previously discussed, DCD/DCW implementation has been fraught with difficulties, which in turn have delayed the DSDS schedule. For example, DCD/DCW project management officials are in the process of addressing 102 requests for requirement changes. According to the DCD/DCW program manager, the date for resolving these changes and approving the Operational Requirements Document is November 2003. Until this process is completed, affected systems integration testing for other DCD/DCW-dependent systems, such as DSDS, cannot be finalized. Further, according to DFAS officials, the continued operation of existing legacy systems may result in an increase to the DSDS life-cycle cost estimate by approximately $14 million for each 6-month delay. This would quickly erode the savings of $171 million that DFAS estimated in September 2000 and reconfirmed in January 2003. Without an updated economic analysis to justify continued investment in DDRS and DSDS, DOD does not have reasonable assurance that continued investment will result in commensurate improvement in the financial management operations of the department. DOD’s oversight of the four DFAS projects we reviewed has been ineffective. Investment management responsibility for the four projects rests with DFAS, the DOD Comptroller, and the DOD CIO. In discharging this responsibility, each has allowed project investments to continue year after year, even though the projects have been marked by cost increases, schedule slippages, and capability changes. As a result, DOD has invested approximately $316 million in the four projects without adequately knowing if these efforts will resolve some of DOD’s financial management difficulties—the rationale upon which each initiative was undertaken. In fact, as previously noted, after an investment of over $126 million and 7 years of effort, the DOD Comptroller terminated DPPS in December 2002. 
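The erosion described above can be made concrete with the report's own figures. The sketch below uses the DFAS-reported $14 million per 6-month delay and the $171 million in estimated savings; the assumption that delay costs accumulate linearly is an illustrative simplification on our part.

```python
# Illustrative only: dollar figures are from the report; the linear
# accumulation of delay costs is our simplifying assumption.
estimated_savings_m = 171   # DSDS savings estimated by DFAS, $ millions (Sept. 2000)
cost_per_delay_m = 14       # added life-cycle cost per 6-month delay, $ millions

# Number of 6-month slippages before added legacy-system costs
# consume the entire estimated savings.
delays_to_break_even = estimated_savings_m / cost_per_delay_m
years_to_break_even = delays_to_break_even * 0.5

print(f"{delays_to_break_even:.1f} six-month delays "
      f"(~{years_to_break_even:.1f} years) erase the savings")
# → 12.2 six-month delays (~6.1 years) erase the savings
```

Under this rough extrapolation, cumulative slippage on the order of 6 years would wipe out the projected savings entirely; the roughly 3 years of slippage already experienced would consume about half of them.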
GAO’s Information Technology Investment Management (ITIM) maturity framework defines critical processes pertaining to IT investment management and oversight. Among other things, these processes provide for establishing investment decision-making bodies responsible for selecting and controlling IT investments by (1) understanding, for example, each project’s expected return on investment and associated costs, schedule, and performance commitments, (2) regularly determining each project’s progress toward these expectations and commitments, and (3) taking corrective actions to address deviations. Additionally, the Clinger-Cohen Act and OMB guidance similarly emphasize the need to have investment management processes and information to help ensure that IT projects are being implemented at acceptable costs and within reasonable and expected time frames and that they are contributing to tangible, observable improvements in mission performance (i.e., that projects are meeting the cost, schedule, and performance commitments upon which their approval was justified). Organizationally, within DOD, the Comptroller has overall management and oversight responsibility for DFAS’s activities—including system investments. However, DOD Comptroller officials told us that they were unaware of the cost increases and schedule slippages on the projects until we brought them to their attention. Further, these officials said that they do not review DFAS’s system investments to ensure that they are meeting cost, schedule, and performance commitments, stating that DFAS is responsible for ensuring that projects stay on target in terms of cost, schedule, and performance. Additionally, they told us that their review is limited to a review of budgetary information and budget exhibits, and that they compare the current year budget request to the previous year’s request to determine if any significant funding increases are being requested for the coming fiscal year. 
If the budget request is generally consistent from year to year, they said that they do not raise questions about the project. According to these officials, the review of DFAS’s fiscal year 2003 budget did not result in the identification of issues that warranted further review. While the DOD Comptroller is the responsible authority for DFAS activities, DFAS is also responsible for ensuring that its proposed investments will result in systems that are implemented at acceptable costs and within reasonable and expected time frames. To fulfill this responsibility, DFAS established the CIO/BIE Council to oversee system investments. As outlined in the CIO/BIE Council charter, members of the council are responsible for, among other things, advising the Leadership Council—DFAS’s senior decision-making body—on IT investment decisions. The CIO/BIE Council membership includes representatives of DFAS’s business lines, such as accounting services and commercial pay, as well as IT management. In order to assure that the roles, responsibilities, and authorities of the IT investment board are well defined and that board processes are clear, the ITIM Framework states that an IT investment process guide should be created to direct IT investment board operations. While DFAS has endeavored to give the CIO/BIE a role in the acquisition management and oversight process, it has not provided clear, consistent guidance to describe that role and the associated operating procedure. Though the council charter does mention the CIO/BIE Council’s responsibilities, it does not adequately describe them, address the council’s authority, or describe how the council is to fulfill its responsibilities. The DFAS 8000 series also addresses CIO/BIE responsibilities (DFAS 8000.1-R, Part C). However, the 8000 series does not describe how the CIO/BIE is expected to execute its responsibilities, including providing corporate oversight and reviewing capital budget proposals. 
The lack of clear definition of responsibilities and authority limits the council’s ability to effectively perform oversight-related activities. For the four IT investment projects we reviewed, we found no evidence that the CIO/BIE effectively monitored the cost, schedule, or performance goals of the four projects. As previously noted, the DOD CIO is responsible for overseeing major automated information systems. As such, this office is responsible for ensuring that the investments being made in DCD/DCW and DPPS are justified. However, the DOD CIO did not effectively exercise this authority. In regard to DPPS, the DOD CIO was designated the milestone decision authority in November 1996. While DOD CIO officials told us that they were aware of the problems with DPPS, they were unable to provide any documentation that indicated they had raised concerns with the DPPS effort. DCD/DCW was not brought under the purview of the DOD CIO until March 2001—approximately 2½ years after the project began. DOD CIO officials expressed concerns about the viability of DCD/DCW and questioned DFAS’s decision to move forward absent an economic analysis. However, they were unable to provide us with documentation that indicated they had carried out their oversight responsibilities and independently determined whether DCD/DCW was a viable investment. According to DOD CIO officials, despite being the milestone decision authority for major projects, they have little practical authority in influencing component agency IT projects. As such, they said they try to work with the program managers to ensure that all of the required documentation for passing the next milestone is prepared, but the department’s culture, which rests organizational authority and funding control with the components, precludes them from exercising effective IT investment oversight. 
The comments of the DOD CIO officials support the fact that the current stovepiped, parochial management of DOD’s IT investments has led to the previously discussed proliferation of business systems. As we previously reported, DOD’s organizational structure and embedded culture have made it difficult to implement departmentwide oversight or visibility over information resources. Similarly, we recently reported that DOD does not yet have the departmental investment governance structure and process controls needed to adequately align ongoing investments with DOD’s architectural goals and direction. Instead, DOD continues to allow its component organizations to make their own investment decisions, following different approaches and criteria. We reported that this stovepiped decision-making process has contributed to the department’s current complex, error prone environment of over 1,700 systems. In particular, DOD has not yet established and applied common investment criteria to its ongoing IT system projects using a hierarchy of investment review and funding decision-making bodies, each composed of representatives from across the department. DOD also has not yet conducted a comprehensive review of its ongoing IT investments to ensure that they are consistent with its architecture development efforts. Until it does these things, DOD will likely continue to lack effective control over the billions of dollars it is currently spending on IT projects. To address this problem we recommended that DOD establish a series of investment review boards, each responsible and accountable for selecting and controlling investments that meet defined threshold criteria, and each composed of the appropriate level of executive representatives, depending on the threshold criteria, from across the department. We also reiterated our open recommendations governing limitations in business system investments pending development of the architecture. 
DOD is investing billions of dollars annually in hundreds of systems that perform the same functions, spread across numerous DOD components. As we have previously reported, this proliferation of systems has resulted in part because DOD’s embedded culture and parochial operations have permitted each of the military services and DOD agencies to manage and oversee their IT investments apart from one another. It has also occurred because DOD has not effectively managed its investments in IT business systems, as our past work and the DOD Inspector General’s work have demonstrated. As a result, DOD runs a high risk that hundreds of millions of dollars will continue to be invested annually in modernization efforts that will not result in improvements in the department’s operations. In each of the four system projects we discuss in the report, DOD has invested millions of dollars without economically justifying its investments, in large part because those entities responsible for managing and overseeing these investments have not required such justification despite schedule slippages, cost overruns, and reductions in planned capability. The urgent need for effective investment control is exemplified by DPPS, on which $126 million was invested before the project was terminated. More vigorous oversight of DPPS could have precluded the substantial investment in this failed effort. Until it has effective investment management and oversight, DOD will not have reasonable assurance that its continued investment in the remaining three projects discussed in this report, as well as its other system projects, is justified. 
We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to limit funding in the DFAS Corporate Database/Corporate Warehouse, the Defense Standard Disbursing System, and the Defense Departmental Reporting System until the DOD Comptroller, in collaboration with the Assistant Secretary of Defense (Command, Control, Communications & Intelligence), and the Director, Program Analysis and Evaluation, demonstrates on the basis of credible analysis and data that continued investment in these three projects will produce benefits that exceed costs. We further recommend that the Secretary of Defense, in light of the department’s ongoing efforts to modernize its business systems, direct the Under Secretary of Defense (Comptroller) to evaluate all remaining DFAS IT projects and ensure that each project is being implemented at acceptable costs and within reasonable time frames and is contributing to tangible, observable improvements in mission performance. DOD provided written comments on a draft of this report. DOD concurred with our recommendations and identified actions it planned to take to ensure that future investments in DFAS’s systems are justified. For example, the Under Secretary of Defense (Comptroller) noted that the review of DCD/DCW, DDRS, and DSDS would be completed by June 15, 2003. Additionally, the Under Secretary of Defense (Comptroller) stated that all systems would be reviewed as part of the department’s effort to establish a financial management enterprise architecture governance structure. As discussed in our February 2003 report, the governance structure is intended to provide DOD the means to gain control over its IT investments. However, as noted in our report, we have not verified or evaluated the extent to which the planned governance structure will address our recommendation. DOD comments are reprinted in appendix II. 
As agreed with your office, unless you announce the contents of this report earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies to the Chairman and Ranking Minority Member, Senate Committee on Armed Services; Chairman and Ranking Minority Member, Senate Appropriations Subcommittee on Defense; Chairman and Ranking Minority Member, House Armed Services Committee; Chairman and Ranking Minority Member, House Appropriations Subcommittee on Defense; Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; Chairman and Ranking Minority Member, House Committee on Government Reform; the Director, Office of Management and Budget; the Under Secretary of Defense (Comptroller); the Assistant Secretary of Defense (Command, Control, Communications & Intelligence); and the Director, Defense Finance and Accounting Service. Copies of this report will be made available to others upon request. The report will also be available on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact Gregory D. Kutz at (202) 512-9505 or kutzg@gao.gov or Randolph C. Hite at (202) 512-3439 or hiter@gao.gov. Major contributors to this report are acknowledged in appendix III. To obtain an overview of DOD’s current business systems environment we met with representatives of the then Financial Management Modernization Program Office to obtain information on the number of systems that are part of the current systems environment. We also reviewed DOD’s $26 billion fiscal year 2003 IT budget request to determine what portion of the budget relates to DOD business systems. Additionally, we reviewed the IT budget to determine the reported operations, maintenance, development, and infrastructure costs for DOD’s business systems. To determine if DOD was effectively managing and overseeing its IT investments, we focused on the four system projects previously noted. 
To assist us in our evaluation, we used our Information Technology Investment Management (ITIM) framework. The ITIM identifies critical processes for successful IT investment and organizes these processes into a framework of increasingly mature stages. We focused on the Stage 2 critical processes of IT project oversight and IT investment board practices based on DFAS’s self-assessment that it was at Stage 2. Figure 7 shows ITIM’s five stages of maturity. In addition, we evaluated DOD’s and DFAS’s guidance on systems acquisition, as it relates to life-cycle management and milestones for proceeding to the next phase of the system acquisition process. To verify application of the critical processes and practices, we selected projects that (1) were in different life-cycle phases of systems development, (2) required oversight by a DOD authority outside of the DOD Comptroller, such as the Office of the Assistant Secretary of Defense (Command, Control, Communications & Intelligence)—DOD’s CIO, and (3) supported different DFAS business areas such as disbursements and departmental reporting. For these four projects we reviewed documentation, such as mission needs statements, acquisition program baseline updates, and project management plans. According to DOD, it provided estimates for DCD/DCW and DDRS in constant dollars and DPPS and DSDS in escalated dollars. We also reviewed and analyzed charters and meeting minutes of the DFAS investment oversight boards and working groups. To supplement our document reviews, we interviewed senior DFAS officials in the CIO and Systems Integration Offices, as well as the program managers for the four projects. We also met with officials in the offices of the DOD Comptroller and DOD CIO to obtain an understanding of their specific duties and responsibilities in approving, reviewing, and overseeing investments in the four DFAS systems modernization projects. 
We conducted our work at DFAS Headquarters; the Office of the Under Secretary of Defense (Comptroller); the Office of the Secretary of Defense Program Analysis and Evaluation; and the Office of the Assistant Secretary of Defense (Command, Control, Communications & Intelligence) from November 2001 through January 2003, in accordance with U.S. generally accepted government auditing standards. We did not verify the accuracy and completeness of the cost information provided by DFAS for the four projects we reviewed. We requested comments on a draft of this report from the Secretary of Defense or his designee. We received written comments on a draft of this report from the Under Secretary of Defense (Comptroller), which are reprinted in appendix II. In addition to the individuals named above, key contributors to this report included Beatrice Alff, Joseph Cruz, Francine DelVecchio, Lester Diamond, Jason Kelly, J. Christopher Martin, Stacey Smith, and Robert Wagner. 
The Department of Defense's (DOD) long-standing financial management and business systems modernization problems result in a lack of information needed to make sound decisions, hinder the efficiency of operations, and leave the department vulnerable to fraud, waste, and abuse. Such problems led us in 1995 to put financial management and business systems modernization at DOD on our list of high risk areas in the federal government, a designation that continues today. GAO was asked to (1) provide information on the number and cost of DOD's current business systems and (2) determine if DOD is effectively managing and overseeing selected accounting system investments. DOD estimated that it had 1,731 business systems for its day-to-day operations as of October 2002. As GAO previously reported, these systems have evolved over time into the overly complex, error prone, duplicative, stovepiped environment that exists today. To support the operation, maintenance, and modernization of its business systems, the department requested approximately $18 billion for fiscal year 2003. Funding is only part of the solution to improving DOD's current system environment. A key ingredient to success is effectively managing and overseeing these investments. DOD has invested approximately $316 million in four key Defense Finance and Accounting Service (DFAS) projects. However, DOD has not demonstrated that this substantial investment will markedly improve DOD financial management information needed for decision-making and financial reporting purposes. In fact, the DOD Comptroller terminated one project in December 2002, after an investment of over $126 million, citing poor program performance and increasing costs. Continued investment in the other three projects has not been justified because requisite analyses of the costs, benefits, and risks of each one do not reflect cost increases and/or schedule delays. DOD oversight of the four DFAS projects has not been effective. 
Collectively, DFAS, the DOD Comptroller, and the DOD Chief Information Officer share investment management responsibility for these four projects. However, these DOD oversight entities have not questioned the impact of the cost increases and schedule delays and have allowed the projects to proceed absent the requisite analytical justification.
IQA establishes a process that allows the public to help ensure the quality of information disseminated by federal agencies. IQA consists of two major elements. The first element required OMB to develop and issue government-wide guidelines by the end of fiscal year 2001. These guidelines were to provide policies and procedures for federal agencies to use for “ensuring and maximizing quality, objectivity, utility, and integrity of information (including statistical information)” that they disseminate. The second element required covered federal agencies to develop IQA guidelines by the end of fiscal year 2002. These guidelines were to establish administrative mechanisms allowing “affected persons to seek and obtain correction of information maintained and disseminated” by the agencies. The guidelines were also to require agencies to periodically report to the Director of OMB on the number and nature of IQA complaints and how such complaints were handled. IQA builds on previous federal efforts to improve the quality of information, including OMB Circular A-130 and the Paperwork Reduction Act of 1980, as amended. OMB Circular A-130 establishes policy for the management of federal information resources. Two of the purposes of the Paperwork Reduction Act were to improve the quality and use of federal information and to provide for the dissemination of public information in a manner that promotes the utility of the information to the public and makes effective use of information technology. OMB’s Office of Information and Regulatory Affairs (OIRA) develops and oversees the implementation of government-wide policies in the areas of information technology, privacy, and statistics. In this capacity, OIRA developed the government-wide IQA guidelines and helped agencies to meet the Act’s requirement that they develop their own guidelines. OMB issued guidance to agencies to clarify how they were to satisfy the law and otherwise implement IQA. 
The guidance required agencies to develop and post IQA guidelines and related information on their websites. An October 2002 OMB memorandum describing the implementation of IQA guidelines noted that it represented the first time the executive branch had developed a government-wide set of information quality guidelines, including agency-specific guidelines tailored to each agency’s unique programs and information. Agencies’ guidelines, which were to follow OMB’s model, were to include administrative mechanisms that allow “affected persons” to request correction of information that they did not consider correct. We reported in August 2006 that expanded oversight and clearer guidance by OMB could improve agencies’ implementation of the Act. We found that OMB had issued the government-wide guidelines that were the basis for other agencies’ own IQA guidelines. We also reported that OMB required agencies to post guidelines and other IQA information to their websites and required agencies to provide information to OMB on the number and nature of correction requests they received and how such requests were resolved. We found that 14 of the 15 cabinet agencies, the Environmental Protection Agency (EPA), and 4 other independent agencies we reviewed had developed IQA guidelines and posted them on their websites. Of these 19 cabinet and independent agencies with guidelines, we found that 4 had information quality links on their home pages, while other agencies’ IQA information was difficult to locate online. Moreover, 44 of 86 additional independent agencies that we examined had not posted their guidelines and may not have had them in place at the time. Consequently, users of information from those agencies may not have known whether the agencies had guidelines or how to request correction of agency information. OMB also had not clarified its guidance to agencies about posting IQA-related information, including guidelines, to make that information more accessible. 
We also found in our 2006 report that in fiscal year 2003, the Federal Emergency Management Agency and two other agencies used IQA to address flood insurance rate maps, website addresses, photo captions, and other administrative matters. In fiscal year 2004, however, these agencies stopped classifying such requests as IQA requests. Instead, they processed them using other administrative processes that were in place prior to IQA implementation. As a result, we found that the total number of all IQA requests dropped from more than 24,000 in fiscal year 2003 to 62 in fiscal year 2004. We recommended that OMB (1) identify agencies without IQA guidelines and work with them to develop and implement IQA requirements and (2) clarify guidance to agencies on improving the public’s access to online IQA information. In response to our report, OMB stated it would work with agencies as they develop and implement information quality measures and would also continue to work with agencies to improve their dissemination of IQA information. Further, in December 2009, OMB, in an executive memorandum to the heads of executive departments and agencies, issued an Open Government Directive (1) establishing deadlines for action and, among other things, encouraging agencies to advance their open government initiatives (including IQA) ahead of those deadlines and (2) calling for each agency to take prompt steps to expand access to information by making it available online in open formats. According to IQA information posted on the 30 agency websites in our review, 16 agencies reported receiving 87 IQA correction requests from fiscal years 2010 through 2014 (see table 1). The other 14 agencies in our review did not post any IQA correction requests for the period. Agencies reported receiving the highest number of correction requests (26) in fiscal year 2010 and the lowest number (13) in fiscal year 2014. 
Several agencies, including the Departments of Education, Housing and Urban Development, and Labor, the Federal Reserve Board, and the Office of Science and Technology Policy, each reported receiving 1 correction request during the 5-year period. Three agencies—the Departments of Health and Human Services (HHS) and the Interior (Interior), and EPA—received 70 percent (61 of 87) of the correction requests during fiscal years 2010 through 2014. These three agencies were also the only ones that reported receiving IQA correction requests during each of the 5 fiscal years. For the entire period, Interior received the highest number of correction requests (26), followed by EPA (21) and HHS (14). IQA officials at several agencies told us that they receive relatively few IQA requests and offered a number of reasons. For example, EPA officials stated that the quality of the data EPA disseminates is now more robust because the agency considers the diversity of viewpoints provided by the public and has the opportunity, before dissemination, to review pertinent information that may not have been previously considered. Department of Commerce (Commerce) officials attributed their agency’s low number of IQA requests to the fact that the agency has few highly influential scientific assessment projects and that most of its research is relatively noncontroversial, with the exception of research related to climate change. HHS officials said that it is not surprising that IQA administrative correction mechanisms do not generate a large number of IQA correction requests, because many correction requests are for minor edits to agency information. A former OIRA administrator opined that when several federal courts held that the IQA is not subject to judicial review, most of the momentum behind IQA was lost and that, as a result of these rulings, outside parties submit very few IQA correction requests. 
In August 2004, the OIRA Administrator issued a memorandum to the President’s Management Council directing agencies to post on their web pages all information quality correspondence, including a copy of each correction request, the agency’s formal response(s), and any communications regarding appeals, to increase the transparency of the process. The memorandum also directed agencies to provide a few sentences describing each request and any subsequent responses. Finally, the memorandum stated that agencies also needed to establish processes for updating their information quality web pages regularly. In addition to posting copies of IQA correction requests on their websites, agencies are required to report to the Director of OMB the number and nature of the correction requests they receive and how such requests were resolved. OMB has provided a summary of this agency-reported IQA data in annual reports to Congress since 2003. We found discrepancies between the IQA data we found on agency websites and the IQA data agencies reported to OMB. Eight agencies that reported receiving IQA correction requests did not post on their websites the same number of IQA correction requests that they reported to OMB. In most instances where we identified discrepancies, the number of IQA correction requests agencies posted on their websites was lower than the number they reported to OMB. Table 2 provides specific numbers of discrepancies in IQA correction requests received for fiscal years 2010 through 2014. One CFO Act agency, the Department of Transportation, reported to OMB that it had received an IQA correction request but had not posted the request on its website as of November 2015. As stated earlier, OMB guidance requires agencies to post correction requests and agency responses on their websites. 
OMB staff told us that they issue an annual data call to agencies requesting information on IQA correction requests received. According to OMB, agencies are expected to accurately report their IQA activities, including the number of requests received. OMB staff stated that if there are discrepancies between what OMB received from agencies and what the agencies post on their websites, then there is a miscount or a disconnect on the agency side. Although OMB’s guidance is not prescriptive on the time frames for agencies to post this information, it states that agencies need to establish “processes for updating their information quality web pages on a regular basis.” OMB staff told us that some agencies posted correction requests and responses online soon after sending out the agency responses. For example, HHS officials told us that they post correction requests soon after they are received and do not wait until a response is prepared. According to OMB staff, other agencies waited until the end of the fiscal year to post all relevant documents at the same time. In addition, they told us that agencies often have turnover among the staff assigned to report IQA data to OMB, which may contribute to late postings; although the data are eventually posted online, they are sometimes provided months after the request was received and responded to. Agency officials from the six agencies that we selected for further review offered various explanations for their data discrepancies, including the time frames for online postings of IQA data. For example, officials from the Department of Agriculture (USDA) stated that the agency does not specify time frames for posting correction requests. 
However, USDA officials stated that, in response to our inquiries, the agency will going forward require its component agencies and staff offices to post all correction requests and their responses to the component agency’s website no later than 60 days after the correction requests are received. USDA officials stated that each component agency maintains its own website and updates it accordingly. Since we initially contacted USDA concerning the data discrepancies, USDA officials have informed us that the Food and Nutrition Service and Rural Development component agencies have updated their websites to reflect the number of correction requests received. In addition, the officials stated that the Forest Service, the Office of the Inspector General, and the Animal and Plant Health Inspection Service are in the process of making the necessary updates to their websites. EPA officials stated that the discrepancy we identified resulted from a joint correction request sent to both OMB and EPA, for which OMB served as the lead agency. Thus, OMB, rather than EPA, posted the correction request on its website, but EPA included the request in the total number of IQA requests it reported to OMB for fiscal year 2013. Interior officials told us that they have not designated specific time frames for posting correction requests online. They explained, however, that one of their component agencies recently split into two separate agencies and, as a result, one of these agencies is in the process of developing its own information quality program. According to agency officials, this component agency received and responded to a 2014 IQA correction request but, as of October 2015, had not posted the information on its website. Interior officials stated they expected the data discrepancy we identified to be resolved by the end of the fiscal year. 
OMB staff agreed that agencies sometimes have challenges in reporting and posting IQA correction requests accurately and in a timely manner and said that they are communicating with agencies to address any discrepancies. A 2009 OMB memorandum on open government states that the “timely publication of information is an essential component of transparency.” The memorandum adds that “delays should not be viewed as an inevitable and insurmountable consequence of high demand.” However, the memorandum does not provide guidance on what is considered timely publication. We found that 3 of the 9 agencies that reported fiscal year 2014 correction requests to OMB had not posted those IQA correction requests and responses a year or more after the end of the fiscal year. Timely posting of IQA data would increase the transparency of the process and allow the public to view all current correction requests, agency responses to those requests, and any appeals. Doing so would also allow the public to track the status of correction requests that may be of particular interest. In our review of the written correction requests received by the agencies, we found that most requesters who submitted IQA correction requests self-identified as part of the submission process. As shown in table 3, we found that during fiscal years 2010 through 2014, 58 percent (50 of 87) of the correction requests originated from trade associations and advocacy organizations. Trade associations that submitted correction requests represented several different types of industries, including, for example, the Western Energy Alliance, which represents more than 450 companies engaged in exploration and production of oil and natural gas in the West, and the Pacific Coast Shellfish Growers Association, whose membership is composed of shellfish growers in California, Oregon, Washington, Alaska, and Hawaii. 
Advocacy organizations represented several different interests, including the San Juan Citizens Alliance, which is concerned with public land issues, and the Washington Area Bicyclist Association, whose mission is to create a healthy, more livable region by, among other things, promoting bicycling for fun, fitness, and affordable transportation. Private citizens submitted the next largest number of correction requests, at 18 percent (16 of 87). We found that each of the 4 IQA correction requests submitted to the Federal Communications Commission originated from private citizens. Businesses, such as electricity producer PacifiCorp, submitted 15 percent (13) of the IQA correction requests. Local governments, such as California’s County of Siskiyou Board of Supervisors, submitted 7 percent (6) of the correction requests. Each of the 6 IQA correction requests submitted by local governments was directed to Interior. There were 68 different requesters among the 87 IQA correction requests received during fiscal years 2010 through 2014. Although the majority of requesters submitted 1 request, several submitted more. Of those requesters submitting multiple requests, 6 submitted requests to more than one agency. For example, Public Employees for Environmental Responsibility, an advocacy organization, submitted 6 correction requests in total during fiscal years 2010 through 2014: 2 to EPA, 1 to Commerce, 1 to the Consumer Product Safety Commission (CPSC), 1 to the General Services Administration, and 1 to Interior. We analyzed each of the 87 IQA correction requests posted on agencies’ websites and sorted them into two categories—data and administrative. The majority of correction requests received by the 16 agencies during fiscal years 2010 through 2014 (66 of 87 requests, or about 76 percent) questioned either agencies’ use of underlying data or agencies’ interpretation of the data. 
The following IQA requests received by agencies from fiscal years 2010 through 2014 illustrate the diversity of IQA correction requests involving data. On November 12, 2013, an advocacy organization stated that CPSC disseminated a product recall announcement based on inaccurate data, specifically claims of defects in design, warnings, and instructions. Among other things, the requester asked that CPSC disclose the statistical and scientific metrics used to determine that the subject product posed “a very serious hazard.” On March 13, 2014, CPSC stated that the nature of the correction request was the subject of an ongoing adjudicative proceeding. Thus, CPSC made no corrections. On June 11, 2010, a trade association sent a correction request to both EPA and the Department of Housing and Urban Development (HUD) concerning, among other things, the accuracy of data used in public service advertising on childhood lead poisoning prevention. It requested that both agencies withdraw their participation in and sponsorship of the advertisements. On December 30, 2011, EPA and HUD issued a joint response letter stating that the quality of the information included in the childhood lead poisoning prevention advertisements had been thoroughly reviewed. Thus, neither agency made corrections. We found that some IQA correction requests (18 of 87, or 21 percent) were administrative in nature. Examples of these correction requests include, among other things, typographical changes or other text revisions to update agency documents and websites. On December 23, 2010, a private citizen submitted a correction request identifying patent images that he believed to be incorrectly labeled with another patent number in an online database. He requested that the U.S. Patent and Trademark Office within Commerce correct the images. On January 6, 2011, that office stated that the requested correction had been made in full and that the correct patent had been rescanned and reloaded to the database. 
On November 14, 2011, a business submitted a correction request that identified two errors—a typographical error and the omission of information from a final rule published in the Federal Register—and requested that EPA correct both. On February 14, 2012, EPA acknowledged the typographical error and stated that a data table had been inadvertently removed from the published information. EPA stated that it was preparing a regulatory fix intended to reinstate the portions of the table that were inadvertently removed from the final rule. Of the 87 IQA correction requests agencies received, agencies determined in 59 cases (68 percent) that the request did not warrant any change to the original document or data in question (see table 4). Agencies made full corrections in 11 cases and partial corrections in 15 cases. Two correction requests were still pending as of November 2015. The IQA correction mechanism includes procedures for requesters to appeal initial agency decisions: IQA guidelines allow requesters to file for reconsideration if they disagree with an agency’s initial response. During fiscal years 2010 through 2014, requesters appealed agency decisions in 19 IQA cases. Of the 19 appealed cases, agencies made no corrections in 15, rejected 1 because the appeal was not submitted within the specified time frame, dismissed 1 because it was withdrawn by the requester, and had not made final decisions in the remaining 2 cases as of November 2015. IQA is one of several processes available to the public for requesting corrections of agency information. In addition to IQA, other administrative mechanisms for correcting information available to the public include notice-and-comment rulemaking and peer reviews. We previously reported in 2006 that some agencies had the flexibility to respond to correction requests through various processes because those processes were in place prior to IQA. 
For example, we reported that the Federal Emergency Management Agency no longer classified requests to correct flood insurance rate maps as IQA requests. Instead, the agency addressed flood insurance rate map correction requests by using a correction process it had implemented prior to the enactment of IQA. In this review, we found that about one-fourth (15 of 59) of the IQA correction requests that resulted in no corrections were processed through an administrative mechanism other than the dedicated IQA request for correction process. According to OMB staff, agencies may respond to correction requests through the applicable administrative process. For example, agencies may process correction requests using notice-and-comment rulemaking under the Administrative Procedure Act in instances where the request concerns a proposed rule and the comment period is still open. As we reported in 2006, processing such requests under IQA could affect rulemaking outside of the rulemaking process, for example, by influencing whether or when an agency initiates a rulemaking. The following is an example of an agency response to a correction request submitted under IQA that the agency determined should be addressed through the rulemaking process: On July 1, 2010, a non-profit organization submitted a request to EPA to “rescind and correct online and printed information regarding alleged greenhouse gas emissions reductions resulting from ‘beneficial use’ of coal combustion waste products.” On February 16, 2011, EPA responded that many of the specific documents in question served as background technical support materials for EPA’s proposed rulemaking to address the risks from the disposal of coal combustion residuals generated by electric utilities and independent power producers. As a result, the agency would address the issues raised in the correction request through the rulemaking process for that rule. 
The peer review process allows the public an opportunity to provide comments and to question an agency’s use of data before the agency actually disseminates the information. The following is an example of an agency response to a correction request submitted under IQA that EPA determined should be addressed through public comments during the peer review process: On August 20, 2010, EPA received a correction request from a private citizen asking EPA to, among other things, correct information used to develop the Draft Benthic Total Maximum Daily Load Development for Accotink Creek, Virginia. On November 15, 2010, EPA responded that the public comment response process would be used to address the concerns outlined in the correction request. EPA stated that all public comments would be considered during the revision of the draft document. EPA also stated that to “avoid duplicate actions that would interfere with the ongoing Total Maximum Daily Load Development process, we will not use the EPA Information Quality Guidelines Request for Correction process to respond” to the correction request. The public may not be aware of the different administrative processes agencies have available to address correction requests submitted under IQA. As a result, agencies’ IQA staff may be tasked with responding to a number of correction requests outside of the dedicated IQA request for correction process. Including explanations and links on agencies’ IQA websites to other available correction processes that might be more appropriate to the public’s needs could help increase efficiencies across all available information correction processes. Although OMB staff told us that agencies should, in their responses to public correction requests, state whether they plan to address the requests through other administrative processes, current OMB IQA guidance does not address this issue. 
However, we found that at least one agency has included in its online IQA guidance information for submitting correction requests outside of IQA. EPA included additional information on its IQA web page that informs the public on how to report and correct EPA website data errors as well as how to seek correction of information for which EPA has sought public comment. Agencies cited other reasons for not addressing a number of correction requests submitted under IQA. These included requests related to cases under litigation, requests too broad in nature (not specific), and requests the agencies deemed to lack merit. In addition, agencies declined to treat submissions as IQA correction requests where the data in question were contained in a document not subject to IQA (such as a press release or a document not created by the agency). Specific examples follow. In 2011, a private citizen asked the National Oceanic and Atmospheric Administration (NOAA), which operates within Commerce, to modify information about the location where Tropical Storm Kirsten made landfall in 1966 in Mexico. NOAA responded that the information in question was not subject to the requirements of IQA because the data were considered to be archival (data disseminated by NOAA before October 1, 2002, are considered to be archival information). On March 30, 2011, an advocacy organization submitted a request to the National Park Service, which operates within Interior, to correct information that the requester deemed “unfounded scientific conclusions” in a report on allegations of scientific misconduct at Point Reyes National Seashore. On June 6, 2011, Interior responded that the document in question was a report of an investigation undertaken by the Office of the Solicitor. The investigation sought to resolve allegations of scientific misconduct on the part of employees. 
Also, the “report was generated as part of the adjudicative process of this personnel matter; it is not subject to review under the IQA.” OMB staff told us they rely heavily on their own website to disseminate IQA, OMB-specific, and government-wide guidance. OMB’s information quality website includes guidelines from OMB that describe its policy for ensuring the quality of information that it disseminates to the public. The guidelines also establish the administrative procedure by which an affected person may obtain correction of information disseminated by OMB. In addition, OMB includes links that the public and other interested parties can use to locate individual agency information quality guidelines, government-wide information quality guidelines, and OMB’s annual reports to Congress. OMB’s reports to Congress, included on a separate OMB web page from the IQA guidelines, include brief updates on agency reporting under the government-wide information quality guidelines. As we previously stated, agencies are required by IQA to report to OMB annually on the number and type of correction requests received, as well as their respective responses. Although not required to do so, OMB has published agency-reported IQA data from the previous fiscal year in an annual report to Congress since 2003. OMB also makes this information available on its website, but the data are dispersed across multiple web pages, which could make the information hard to find and could contribute to user confusion. For example, OMB provides links to the annual reports on its website where the public and interested parties may access the information (see figure 1). However, there is no central location on OMB’s website where the IQA data are consolidated, for example, in a table or some other format organized by year or agency. Instead, interested parties would need to go to each separate annual report link, search for IQA data, collect the data, and create their own table to review IQA data government-wide from year to year. 
Enabling the public to better access information is one of the principles of the President’s digital government strategy. According to the strategy, the federal government must fundamentally shift how it thinks about digital information. To drive this shift, agencies must, among other things, be customer-centric and focus on customer needs. This means that quality information should be accessible, current, and accurate at any time. Federal digital services guidelines direct agencies to publish digital information so that it is easy to find and access. These guidelines are aimed at helping federal agencies improve their communications and interactions with customers through websites. Although OMB has made government-wide IQA data available in its reports to Congress, finding and compiling such information may take several steps, potentially making it more difficult to find and access and thus hindering transparency. OMB officials acknowledged that consolidating and centralizing IQA information on OMB’s website could improve transparency and access to its IQA data. In addition to posting correction requests and agency responses on agency websites, agencies are required by IQA to post their IQA guidelines and the administrative mechanisms by which affected persons could petition for correction of inaccurate agency information. Twenty-eight of the 30 agencies posted the required IQA documents online as of November 2015. However, the Department of Defense did not include administrative mechanisms on its website. In addition, we were unable to find the Federal Housing Finance Agency’s IQA guidelines anywhere on its website. OMB concurred with our review of these agencies’ IQA information and told us it would work with the agencies to improve the information provided on their websites, but as of December 2015, it had not completed that process. 
Until that step occurs, the public may be unaware of the steps the agencies would take upon receiving a correction request, or even how to submit a correction request. In addition to the required IQA information, some agencies’ websites included additional features that reflect customer-centric leading web practices identified by the President’s digital government strategy, such as posting IQA information on a single website to ease accessibility and identifying points of contact online. For example, the Department of Labor’s website includes points of contact at its 21 component agencies. Such information enables members of the public with questions regarding IQA to more easily identify agency officials (see figure 2). OMB’s guidance on posting IQA correction requests states that agencies need to establish processes for updating their information quality web pages on a regular basis but does not define “regular basis.” We identified agency websites where information was outdated or web links were broken. Specifically, 9 of the 30 agencies posted either outdated information or broken hyperlinks (see figures 3 and 4). Consequently, the public may be unable to access these agencies’ IQA guidelines and correction requests. Ensuring that online content is accurate is one of the guidelines for federal digital services. Easy access to current guidance could also facilitate opportunities for affected parties and stakeholders to provide feedback on those documents. We identified five agencies that did not include on their websites any information about IQA correction requests. As a result, it is not clear from reviewing these agencies’ websites whether or not the agencies had received such requests during fiscal years 2010 through 2014. Specifically, we could not identify any language stating whether or not the Departments of Energy, Homeland Security, Justice, and Transportation and the Office of Personnel Management had received correction requests as of November 2015. 
In addition, we found that as of November 2015: The Consumer Product Safety Commission's website included links to the IQA correction requests the agency had received. However, there was no text indicating whether the agency had received IQA correction requests for years where no correction requests were posted or whether that information was simply missing. The Department of Agriculture's website included links to IQA data reports for fiscal years 2010 through 2013, but had no information regarding fiscal year 2014. The Department of Housing and Urban Development's website did not include IQA data reports for fiscal years 2012 through 2014. The Federal Housing Finance Agency's website did not include IQA information for fiscal year 2014. As noted earlier, OMB's guidance states that agencies also need to establish processes for regularly updating their information quality websites. However, OMB staff told us that agencies that have not received any IQA requests in a given fiscal year are not required to report that information on their websites. Without that acknowledgment, however, it may be unclear to the public whether an agency has received IQA correction requests but has not posted them or whether the agency has in fact not received any requests. OMB staff agreed that clearly stating whether or not agencies had received IQA correction requests could improve the transparency of IQA. Even when agencies posted IQA information on their websites as OMB required, such information is sometimes outdated, making it difficult for users to know whether agencies have received correction requests or how to request correction of agency information. OMB staff acknowledged that additional OMB guidance that specifies time frames for agencies to post information on IQA requests received, requires explanations and links to other agency information correction processes, and provides suggestions for improving the usability of agency websites would be useful. 
Agency officials at six selected agencies—the Departments of Agriculture (USDA), Commerce, Health and Human Services (HHS), Interior, and Transportation, and the Environmental Protection Agency (EPA)—took a range of actions as part of their efforts to implement the IQA correction process and to better track and address correction requests received, as the following examples illustrate. According to EPA officials, EPA's centralized IQA process has provided greater oversight of correction requests from receipt to final response. EPA has developed internal process maps that outline the steps needed to address correction requests. Once EPA receives a correction request, EPA officials enter the request into a tracking database and send an acknowledgment of receipt. EPA officials then identify who within the agency is responsible for the information in question and forward the request to the appropriate program office or region, which schedules scoping meetings to review the request and draft a response. In the meantime, EPA notifies OMB of the correction request. The Department of Health and Human Services (HHS) also has a centralized IQA correction process. The agency's Office of the Assistant Secretary for Planning and Evaluation in the Office of the Secretary manages and coordinates the IQA process, administers the HHS information quality website, and serves as the agency's point of contact with OMB. The component or office within HHS that originated the challenged information is responsible for developing and sending the agency's response. Interior's IQA correction process is decentralized. Within each of Interior's component agencies, Bureau Information Quality Coordinators address IQA correction requests and coordinate with Interior's Information Quality Coordinator on the response. Officials from 3 of the 6 selected agencies also reported challenges in implementing IQA. 
Department of Transportation officials told us it has been a challenge to retain IQA institutional knowledge amid staff turnover. EPA and Interior officials both stated that allocating the necessary time to properly respond to IQA correction requests was challenging. For example, EPA officials said that responding to a correction request can consume an appreciable part of a full-time employee's efforts during busy periods. EPA officials said additional review time and attention are required because responses to correction requests must be reviewed through EPA's internal processes for concurrence, as well as with OMB. Interior officials also told us they spend a considerable amount of time addressing the often complex and/or lengthy IQA requests, as well as obtaining the necessary reviews and concurrence of the agency response. We found that some agency responses to IQA requests, including one appeal, have taken 2 years or longer to resolve. Although both EPA and Interior officials cited the time spent addressing correction requests as a challenge, neither agency was able to provide estimates of agency or employee hours spent in the process. Further, none of our selected agencies had information about the actual workload or the number of staff days required to respond to IQA correction requests. As a result, the impact of the IQA correction process on the selected agencies could not be accurately measured because the agencies do not have mechanisms in place to track the effects of implementing IQA. We previously reported that agency IQA officials believe addressing IQA requests is considered to be part of their agencies' day-to-day business, and that because of the multifaceted nature of some requests, allocating time and resources to specific issues or linking work exclusively to IQA requests would be difficult. According to OMB staff, there is no specific amount of time that is considered too long for agency responses to correction requests. 
They explained that IQA correction requests may take some agencies a long time to resolve because of the extensive review that is required to make a final agency decision. OMB staff stated that they did not want to be prescriptive in IQA guidance by adding administrative time requirements to an agency-specific process. The staff added that taking a long time to respond to an IQA correction request is not necessarily a bad thing; it may indicate an extensive and comprehensive review by the agency and discussion of the information in question. Officials at our selected agencies told us they believe IQA has improved the quality of data disseminated by their agencies. For example, EPA officials told us that the quality of data disseminated by EPA is more robust due to the consideration of the diversity of viewpoints provided by the public and the agency's opportunity to review pertinent information that EPA may not otherwise have obtained. Interior officials reported to us that the IQA and peer review standards have greatly assisted in the dissemination of quality information. They stated that their guidelines give "teeth" to the objectives and requirements of quality information. According to HHS officials, the IQA process has proved to be a useful mechanism for the public to raise issues of concern to federal agencies that publicly disseminate information. OMB staff told us that the IQA process has improved agency information quality policies even though the correction request metrics may not show it. They explained that while it is important for IQA correction numbers to be seen by the public, it is also important that the public be aware that these numbers are only a small piece of the benefits of IQA. IQA guidelines and peer reviews are all about pre-dissemination review, transparency, and ensuring that agencies release only high-quality information. 
IQA allows businesses, trade associations, advocacy organizations, the public, and others to submit requests to agencies to correct agency-disseminated information. Some of the 30 agencies in our review reported receiving relatively few IQA requests from fiscal years 2010 through 2014. Agencies determined that the majority of correction requests received did not warrant any changes. Processes other than IQA are available to request corrections of agency information, and agencies addressed a number of correction requests through administrative mechanisms other than the dedicated IQA request for correction process. Agencies in our review have developed their own guidelines and administrative mechanisms for implementing IQA. OMB and agencies rely on their websites to disseminate guidance and also provide information regarding results of correction requests. However, we found that OMB had not consolidated all IQA data in one centralized location on its website. We also found instances where IQA-required information was missing from agency websites or where information was outdated or incomplete. To be effective, guidance documents should be accessible to their intended audiences and corrective processes should be transparent. This is consistent with guidelines for federal digital services. OMB has the opportunity to build on its efforts to improve the transparency of the IQA process. For example, by consolidating summaries of agency IQA information, working with agencies to ensure all IQA requirements are met, and providing additional guidance about posting accessible, user-oriented information on agency websites, OMB could help increase the public's access to and confidence in that information, thereby helping to further the goal of disseminating quality information. 
To better ensure agencies fulfill their requirements, including implementing IQA guidelines and helping to promote easier public access to IQA information on agency websites, we recommend that the Director of OMB take the following actions: Consolidate and centralize on OMB's IQA guidance website a government-wide summary of requests for correction submitted under the IQA. Work with the Department of Defense and the Federal Housing Finance Agency to help ensure that they post their IQA administrative mechanisms and IQA guidance online. Provide additional guidance for agencies to help improve the transparency and usability of their IQA websites to help ensure the public can easily find and access online information about agency IQA implementation. Such guidance should include specific time frames for agencies to post information on the IQA correction requests they have received, including making it clear when agencies have not received IQA requests; instructions for agencies to include a statement on their IQA websites that the agencies may address correction requests through other administrative processes; instructions for agencies to include, when responding to correction requests, whether those agencies plan to address the request through another administrative process, and if so, which process they will use; and suggestions for improving usability of agencies' websites, including fixing broken links. We provided a draft of this report to the Director of the Office of Management and Budget. In oral comments received on December 1, 2015, OMB staff discussed our findings, conclusions, and draft recommendations. They provided technical comments, which are incorporated into the report where appropriate. In response to this discussion, we made minor revisions to the draft and recommendation language to more accurately reflect the role of agencies in responding to correction requests along with OMB's role in overseeing these activities. 
OMB staff stated that they agreed with our modified recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of OMB and other interested parties. In addition, the report will be available at no charge on the GAO website at www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2757 or GoldenkoffR@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this study were to (1) identify the number, source, and final disposition of Information Quality Act (IQA) correction requests received by the 24 Chief Financial Officers (CFO) Act and other agencies for fiscal years 2010 through 2014; (2) assess the extent to which the 24 CFO Act and other agencies that received correction requests made IQA information publicly available; and (3) identify how selected agencies have implemented IQA. To address the first objective, we searched the websites of the 24 CFO Act agencies and identified 10 of these that had posted correction requests and responses online for fiscal years 2010 through 2014. We selected 2014 as an ending point for a 5-year analysis because 2014 was the most recent complete fiscal year of data available. To identify other agencies that had received correction requests during the same time frame, we reviewed the Office of Management and Budget's (OMB) annual reports to Congress for fiscal years 2010 through 2013 and identified those agencies outside of the 24 CFO Act agencies that reported receiving IQA correction requests. OMB provided us with agency-reported data for fiscal year 2014 because the report to Congress had not yet been issued. 
From this, we identified an additional 6 non-CFO Act agencies that posted IQA correction requests and responses on their websites—the Consumer Product Safety Commission, Federal Communications Commission, Federal Housing Finance Agency, Federal Reserve Board, Office of Management and Budget, and the Office of Science and Technology Policy. We reviewed relevant OMB and agency documents, including IQA guidelines and agencies’ annual reports to OMB, examined requests and appeals to correct agency information, and reviewed OMB’s and agencies’ websites. To supplement the documentary evidence obtained, we interviewed agency officials responsible for IQA in their respective agencies. We also interviewed current and former OMB staff to provide additional context on IQA. During the course of our review, we compared agency IQA data posted on their websites with IQA data agencies reported to OMB and identified discrepancies. We discussed the discrepancies with OMB staff and agency officials and included their responses within the report. We determined that OMB and agency data were sufficiently reliable to provide a general indication of the numbers of correction requests received. Although agencies have other processes to correct agency disseminated information, we evaluated only information related to the IQA correction mechanism. We assessed relevant agency IQA documents—including guidelines, requests and appeals, agency decisions, and related documents—found on the 16 agency websites that posted correction requests during our identified time frame. To supplement and verify the accuracy and completeness of this information, we interviewed OMB IQA staff. Moreover, to better understand specific aspects of IQA requests and how agencies addressed them, as well as to illustrate specific points, we reviewed in detail all of the correction requests posted on agency websites to the extent such information was available online. 
Two analysts independently assessed each agency's correction request and final agency response to determine requester type, request category, and agency response and justification for the response, and resolved all discrepancies between their assessments. To categorize the sources of the requests by type of entity, such as business, trade association, or advocacy organization, we relied on information and descriptions the requester provided in the correction requests. Specifically, the majority of requesters self-identified as one of the following types of requesters—trade association/advocacy organization, business, private citizen, local government, or anonymous—in their correction requests to the agencies. However, when such information was not available, we searched the requester's name online and used the descriptions found therein to make our determination as to the type of entity. To determine the final disposition of IQA requests and any appeals, we reviewed related agency documents, including interim agency correspondence, to determine whether or not the agency committed to make a correction(s) in response to the request. We determined a correction was a partial correction if the agency made at least one change based on the request, for example, adding clarifying language or additional references. To address the second objective, we conducted an analysis of the websites of the 24 CFO Act agencies, as well as those of the six other agencies identified in objective one as having received IQA correction requests during our selected time frame, using internal site search engines and search terms, such as "information quality," "correction request," and "IQA guidelines," to determine whether they had IQA guidelines and other IQA information online. We identified and used IQA search terms and steps to review and find information on agencies' publicly available web pages consistent with best practices guidance for search engine optimization from digitalgov.gov's website. 
We also used OMB's Open Government Directive in assessing IQA guidance documents. We compared the information found on the websites to IQA requirements outlined in OMB guidance to agencies on posting IQA documents. We also reviewed other OMB and relevant government guidance on design features to make government-wide information and data accessible. When we found instances where agencies had not posted the required guidelines or administrative mechanisms, we contacted OMB staff for verification. To identify IQA processes and challenges agencies face in implementing IQA, we selected a non-generalizable sample of six agencies—the Departments of Agriculture, Commerce, Health and Human Services, Interior, and Transportation, and the Environmental Protection Agency—to obtain illustrative examples of how they approached and implemented IQA. We selected these agencies based in part on the number (both high and low, to include a range) of IQA correction requests the agencies had received from fiscal years 2010 through 2014. We also included one agency (Department of Transportation) based on the relatively high number of peer reviews it conducted during the same time frame. We interviewed OMB and agency officials responsible for addressing IQA correction requests to gather their perceptions on the overall IQA process. We conducted this performance audit from November 2014 to December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Organizations That Filed Information Quality Act Correction Requests during Fiscal Years 2010 through 2014

Federal agency receiving request and filer

Consumer Product Safety Commission
Public Employees for Environmental Responsibility
Public Employees for Environmental Responsibility
The Association of Proprietary Colleges

Department of Health and Human Services
Capital Strategy Consultants, Inc.
Center for Regulatory Effectiveness (2)
International Premium Cigar and Pipe Retailers Association
Styrene Information and Research Center (2)
Pacific Coast Shellfish Growers Association
Pavement Coatings Technology Council (2)
United States Association of Reptile Keepers and Pet Industry Joint Advisory Council
Western Energy Alliance (2)

Environmental Protection Agency
American Coatings Association
American Chemistry Council (2)
Artisan EHS Consulting, LLC
The Competitive Enterprise Institute and ActionAid USA
Halogenated Solvents Industry Alliance, Inc.
International Platinum Group Metals Association
Organic Arsenical Products Task Force and Wood Preservative Science Council
Public Employees for Environmental Responsibility (2)
U.S. Chamber of Commerce (2)
Walter Coke, Inc.
W.R. Grace & Co.-Conn.

Robert Goldenkoff, (202) 512-2757 or goldenkoffr@gao.gov. In addition to the contact named above, Clifton G. Douglas, Jr. (Assistant Director), Dewi Djunaidy (Analyst-in-Charge), Joseph Fread, Lisette Baylor, Michele Fejfar, Ellen Grady, Farrah Graham, Andrea Levine, and Stewart Small made key contributions to this report.
IQA, enacted in fiscal year 2001, required OMB to issue government-wide guidelines by the end of that fiscal year to ensure the quality of information disseminated by federal agencies. OMB issued guidance to agencies to clarify how agencies were to satisfy the law and otherwise implement IQA. The guidance required agencies to develop and post IQA guidelines and related information on their websites. GAO reported in 2006 that expanded oversight and clearer guidance by OMB could improve agencies' implementation of IQA. GAO was asked to conduct an updated study on IQA. This report (1) identifies the number, source, and final disposition of IQA correction requests received by the 24 Chief Financial Officers (CFO) Act and other agencies for fiscal years 2010 through 2014 and (2) assesses the extent to which the 24 CFO Act and other agencies that received correction requests made IQA information publicly available, among other objectives. GAO obtained data on IQA guidelines and other IQA-related information from the 24 CFO Act agencies and 6 additional agencies that reported receiving IQA correction requests for fiscal years 2010 through 2014. GAO also reviewed agency websites and interviewed OMB and agency officials. Of the 30 agencies in GAO's review, 16 reported on their respective websites receiving a total of 87 Information Quality Act (IQA) correction requests from fiscal years 2010 through 2014, while 14 agencies did not post any requests during this time. Three agencies—the Environmental Protection Agency, the Department of Health and Human Services, and the Department of the Interior—reported receiving 61 of the 87 requests. Agencies are required to post all IQA correspondence, including a copy of each correction request and the agency's formal response, on their websites. However, 8 agencies that reported receiving IQA correction requests did not post on their websites the same number of IQA correction requests that they reported to the Office of Management and Budget (OMB). 
In most cases, agencies indicated that the discrepancies were due to the time frames for posting information to their respective websites. OMB officials said they are communicating with agencies to address these discrepancies. GAO found that trade associations and advocacy organizations (50 of 87) submitted the most IQA correction requests, followed by private citizens (16) and businesses (13). GAO also found that IQA correction requests either (1) questioned agencies' use or interpretation of the data used or (2) cited administrative errors. For example, a trade association questioned the accuracy of data used in public service advertising on childhood lead poisoning prevention. Agencies did not make the requested corrections in 59 of the 87 IQA correction requests. IQA is one of several processes available to the public for requesting corrections of agency information. In one-fourth (15 of 59) of the requests where agencies determined that no change should be made, agencies addressed those requests through an administrative mechanism other than the dedicated IQA request for correction process. OMB posts IQA information online, including links to agency-specific IQA guidelines; however, there is no central location on OMB's website where a user could access all IQA data, making specific IQA data more difficult to find and hindering transparency of the process. Twenty-eight of the 30 agencies in GAO's review posted the required IQA information online as of November 2015. The Department of Defense's (DOD) posted IQA information did not include, as required, the administrative mechanisms needed to submit a correction request to the agency. The Federal Housing Finance Agency's (FHFA) online information did not include its required IQA guidance. Without this information, the public may be unaware of the steps the agencies would take upon receiving a correction request, or even how to submit a correction request. 
OMB staff stated they would work with the agencies to improve the information on their websites, but as of December 2015, they had not completed that process. Ensuring that online content is accurate is one of the guidelines for federal digital services. These guidelines are aimed at helping federal agencies improve their communications and interactions with customers through websites. GAO found that at least five agencies did not include any information regarding correction requests, and other agencies' posts included outdated information or contained broken hyperlinks. The Department of Energy's web page includes a link to its IQA processes, but as of November 2015, the page for submitting correction requests online was under construction. OMB requires agencies to post information quality correspondence on agency websites to increase the transparency of the process but has not provided specific guidance to agencies for posting accessible, user-oriented information, including specific time frames for posting information, explanations of and links to other available correction processes, and other suggestions for improving website usability. Providing such guidance will help increase transparency and allow the public to view all IQA-related information, including correction requests, appeal requests, and agency responses to those requests. GAO recommends that OMB (1) consolidate and centralize on its website a summary of IQA correction requests, (2) work with DOD and FHFA to help ensure they post required IQA administrative mechanisms and guidance online, and (3) provide additional guidance to help improve the transparency and usability of IQA websites to ensure the public can easily find and access online information. OMB agreed with these recommendations.
Approximately 16 percent of air cargo transported to, from, or within the United States is shipped on passenger aircraft, while the remainder is transported on all-cargo aircraft. Overall, approximately 20 million pounds of cargo is transported on domestic and inbound passenger aircraft daily. This cargo ranges in size from 1 pound to several tons and in type from perishable commodities to machinery. Air cargo can include such varied items as electronic equipment, automobile parts, clothing, medical supplies, fresh produce, and human remains. As seen in figure 1, cargo can be shipped in various forms, including unit load devices (ULD) that allow many packages to be consolidated into one large container or pallet that can be loaded onto an aircraft, wooden skids or crates, and individually wrapped/boxed pieces, known as loose or break bulk cargo. Participants in the air cargo shipping process include shippers, such as individuals and manufacturers of various product types; freight forwarders, such as a company that accepts packages and ships them on behalf of individuals or manufacturers; air cargo handling agents, who process and load cargo onto aircraft on behalf of air carriers; and air carriers that load and transport cargo. A shipper may take or send its packages to a freight forwarder that in turn consolidates cargo from many shippers onto a master air waybill—a manifest of the consolidated shipment—and delivers the shipment to air carriers for transport. A shipper may also send freight by directly packaging and delivering it to an air carrier’s ticket counter or sorting center, where the air carrier or a cargo handling agent will sort and load cargo onto the aircraft. TSA’s responsibilities for securing air cargo include establishing security requirements governing domestic and foreign passenger air carriers that transport cargo, and domestic freight forwarders. 
TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and freight forwarders through compliance inspections by transportation security inspectors (TSI)—staff within TSA responsible for aviation or cargo security inspections—and, in coordination with DHS's Science and Technology (S&T) Directorate, for guiding research and development of air cargo security technologies. Of the over $5.2 billion provided to TSA for aviation security in fiscal year 2010, approximately $123 million was for air cargo security, as called for in the Conference Report for the DHS Appropriations Act, 2010. Of this amount, TSA was directed to use $15 million to test, evaluate, and deploy screening technologies; to expand canine teams operated by TSA by transferring 35 teams from those operated by local law enforcement; to deploy technologies to screen skids and pallets; and to increase the number of TSIs who oversee participants in the newly developed Certified Cargo Screening Program (CCSP)—a voluntary cargo screening program for freight forwarders, shippers, and other air cargo industry participants. For fiscal year 2011, TSA has requested approximately $118 million for its air cargo security efforts. U.S. and foreign air carriers, freight forwarders, and certified cargo screening facilities (CCSF)—industry stakeholders that have joined the CCSP—are responsible for implementing TSA security requirements, including maintaining a TSA-approved security program that describes the security policies, procedures, and systems the air carriers, freight forwarders, and CCSFs must implement to ensure compliance. These requirements include measures related to the acceptance, handling, and screening of cargo; training of employees in security and cargo screening procedures; testing for employee proficiency in cargo screening; and access to cargo areas and aircraft. 
Air carriers, freight forwarders, and CCSFs must also abide by security requirements imposed by TSA through security directives and amendments to security programs. In addition to TSA, CBP and foreign governments play a role in securing inbound cargo. Unlike TSA, which requires screening prior to departure, CBP determines the admissibility of cargo into the United States and is authorized to inspect inbound air cargo for terrorists or weapons of mass destruction upon its arrival in the United States. Foreign governments may also impose their own security requirements on cargo departing from their airports. The 9/11 Commission Act specifies that air cargo screening methods include X-ray systems, explosives detection systems (EDS), explosives trace detection (ETD), explosives detection canine teams certified by TSA, physical search together with manifest verification, and any additional methods approved by the TSA Administrator. However, solely performing a review of information about the contents of cargo or verifying the identity of the cargo’s shipper does not constitute screening for purposes of satisfying the mandate. Figure 2 shows some approved screening methods. TSA has made progress in meeting the 9/11 Commission Act air cargo screening mandate as it applies to domestic cargo, and has taken several key steps in this effort, such as increasing the amount of domestic cargo subject to screening, creating a voluntary program—the CCSP—to allow screening to take place at various points along the air cargo supply chain, and taking steps to test air cargo screening technologies, among other actions. However, TSA faces several challenges in fully developing and implementing a system to screen 100 percent of domestic air cargo. For example, shipper participation in the CCSP has been lower than targeted by TSA. Furthermore, TSA lacks information to help ensure that it has the inspection resources it may need to provide effective oversight of CCSP entities. 
In addition, there is currently no technology approved or qualified by TSA to screen ULD pallets or containers, and TSA is working to complete qualification testing of several air cargo screening technologies to provide reasonable assurance of their effectiveness. Questions also exist about the reliability of the data used to calculate screening levels reported by TSA. Moreover, in-transit cargo—such as cargo that is transferred from an inbound to a domestic passenger flight—is not currently required to undergo physical screening. Finally, TSA has not developed a contingency plan to address CCSP participation and screening technology challenges, which could be implemented should TSA’s current efforts not be sufficient to achieve the 100 percent screening mandate without impeding the flow of commerce. TSA has taken several steps to address the air cargo screening mandate. TSA increased the amount of domestic cargo subject to screening. Effective October 1, 2008, several months prior to the first mandated deadline of 50 percent screening by February 2009, TSA established a requirement for 100 percent screening of nonexempt cargo transported on narrow-body passenger aircraft. In 2008, narrow-body flights transported about 24 percent of all cargo on domestic passenger flights. Effective February 1, 2009, pursuant to the 9/11 Commission Act, TSA also required air carriers to ensure the screening of 50 percent of all nonexempt air cargo transported on all passenger aircraft. Furthermore, effective May 1, 2010, air carriers were required to ensure that 75 percent of such cargo was screened. Although screening may be conducted by various entities, according to TSA regulations, each air carrier must ensure that the screening requirements are fulfilled. Furthermore, TSA eliminated or revised most of its screening exemptions for domestic cargo. 
For example, TSA eliminated the screening exemptions for palletized shrink-wrapped skids, effective February 2009, and for sealed pharmaceuticals and certain electronics, effective September 2009. As a result of the elimination of exemptions, most domestic cargo is now subject to TSA screening requirements. However, TSA is retaining several of the screening exemptions that apply to sensitive cargo. TSA created a voluntary program to facilitate screening throughout the air cargo supply chain. Since TSA concluded that relying solely on air carriers to conduct screening would result in significant cargo backlogs and flight delays, TSA created the voluntary CCSP to allow screening to take place earlier in the shipping process, prior to delivering the cargo to the air carrier (see fig. 3). Under this decentralized approach, air carriers, freight forwarders, shippers, and other entities each play an important role in the screening of cargo. Under the CCSP, facilities at various points in the air cargo supply chain, such as shippers, manufacturers, warehousing entities, distributors, third-party logistics companies, and freight forwarders that are located in the United States, may voluntarily apply to TSA to become CCSFs. Once in the program, they are regulated by TSA. According to TSA officials, sharing screening responsibilities across the air cargo supply chain is expected to minimize the potential increases in cargo transit time, which could result if the majority of screening were conducted by air carriers at the airport. While the CCSP allows for a number of entities to conduct air cargo screening, according to TSA regulations, air carriers are responsible for ensuring that all domestic cargo transported on passenger aircraft is screened. TSA officials stated that effective August 2010, unscreened domestic cargo would not be transported on passenger aircraft. TSA initiated the CCSP at 18 U.S. 
airports that process high volumes of air cargo, and then expanded the program to all U.S. airports in early 2009. CCSP participants were certified to begin screening cargo as of February 1, 2009. The shipper participants were regulated pursuant to an order, and the rules for freight forwarder participants were instituted through an amendment to their security programs. On September 16, 2009, TSA issued an interim final rule (IFR) that, effective November 16, 2009, regulates the shippers, freight forwarders, and other entities participating in the CCSP. According to the IFR, to become a CCSF, a facility's screening measures must be evaluated by TSA or a TSA-approved validation firm. Under its certification process, TSA requires each CCSF to demonstrate compliance with its security standards, which include facility, personnel, procedural, perimeter, and information technology security. Prior to certification, the CCSP applicant must submit for review and approval its training programs related to physical screening, facility access controls, and chain of custody, among other things. CCSF applicants must also implement TSA-approved security programs and appoint security coordinators before they can become certified. CCSFs must ensure that certain employees have undergone TSA-conducted security threat assessments; adhere to control measures for storing, handling, and screening cargo; screen cargo using TSA-approved methods; and implement chain of custody requirements. Once certified, CCSFs must apply for recertification, including a new examination by TSA or a TSA-approved validator, every 36 months. As part of the current program, and using TSA-approved screening methods, freight forwarder CCSFs must screen 50 percent of cargo being delivered to wide-body passenger aircraft and 100 percent of cargo being delivered to narrow-body passenger aircraft. 
According to TSA, although shipper CCSFs are not required to screen shipments to be delivered to a passenger aircraft, when they choose to conduct screening, they must screen 100 percent of such shipments. In addition, each CCSF must deliver the screened cargo to air carriers while maintaining a secure chain of custody to prevent tampering with the cargo after it is screened. In fiscal year 2009, entities that were certified by TSA to participate in the CCSP were subject to annual inspections by TSIs and additional inspections at TSA's discretion. According to the 2010 TSI Regulatory Activities Plan, the agency plans to conduct at least two comprehensive inspections a year (i.e., a review of the implementation of all air cargo security requirements) for each CCSP participant. In addition, the agency plans to conduct more frequent inspections based on each entity's compliance history, among other factors. TSA is in the process of clarifying CCSF screening and training requirements. During our site visit conducted in July 2009, we identified two instances in which CCSFs misinterpreted CCSP screening requirements. For example, a freight forwarder representative with whom we spoke stated that the freight forwarder's certified facilities have flexibility in the levels of cargo they have to screen, such as screening a percentage of cargo on some days while not screening any cargo on others. This interpretation is contrary to the view of senior TSA officials responsible for implementing the program, who state that freight forwarder CCSFs must screen a percentage of cargo on a daily basis, as required in their TSA-approved security programs. While the extent to which misinterpretation of the CCSP requirements occurs among program participants is unclear, the instances we identified indicate that freight forwarder CCSFs may not be applying TSA screening requirements consistently. 
When we brought this issue to the attention of a senior TSA official, he stated that the agency would benefit from strengthening and clarifying CCSP screening requirements. In March 2010, TSA officials reported that the agency has taken steps to clarify the requirements, though they did not specify what those steps were, and said the agency is planning to communicate these clarifications to relevant stakeholders. During our site visits conducted in June and July 2009, we also observed two cases where training materials used by freight forwarder CCSFs to educate their employees on the use of technology to screen cargo may not have been consistent with TSA screening procedures. For example, one freight forwarder representative we interviewed during our site visit stated that his company compiled training materials on how to screen cargo with ETD technology from public information found on the Internet. However, TSA has no way of knowing whether the public information gathered from the Internet or from other sources used to develop training materials is reliable or consistent with TSA policies and procedures. After we brought this issue to the attention of TSA officials, TSA reported that the agency plans to clarify the CCSF training requirements regarding how to use technology to screen air cargo. Specifically, TSA plans to update these requirements in amendments to the freight forwarder CCSF policies and procedures. TSA officials also stated that the agency is considering providing CCSFs with a TSA-approved technology training package or a list of approved training vendors that CCSP participants can use to facilitate the training of their employees. The agency is in the early stages of this effort and has not yet developed time frames for when this effort will be completed. TSA is conducting outreach efforts to air cargo industry stakeholders. 
Starting in September 2007, TSA began outreach to freight forwarders and subsequently expanded its outreach efforts to shippers and other entities to encourage participation in the CCSP. Industry participation is central to TSA's approach of spreading screening responsibilities across the U.S. supply chain and, ultimately, meeting the screening mandate. However, attracting shippers and freight forwarders to the program is challenging for two reasons: the economic downturn has reduced their resources and cargo volume, and some in the shipping and freight forwarder industries perceive that the screening costs and delays associated with air carriers conducting cargo screening will be minimal. TSA is focusing its outreach on particular industries, such as producers of perishable foods, pharmaceutical and chemical companies, and funeral homes, which may experience damage to their cargo if it is screened by a freight forwarder or an air carrier. TSA officials stated that they reach out to entities through a combination of conferences, outreach meetings, Internet presentations, and information posted in numerous trade association newsletters and on Web sites. To enhance its outreach efforts, TSA established a team of 12 TSA field staff, or CCSP outreach coordinators, to familiarize industry with the air cargo screening mandate and the CCSP, as well as to educate potential CCSP applicants on the program requirements. In addition, outreach coordinators are responsible for certifying cargo screening facilities. They visit the facilities of CCSP applicants to assess their ability to meet program requirements and to address any deficiencies identified during the assessment. To complete the certification process, the outreach coordinator ensures that the facility has appropriate procedures and training in place to screen cargo. According to TSA officials, in February 2009, the agency also began using its air cargo TSIs in the field to conduct outreach. 
Officials from the one domestic passenger air carrier association and the one freight forwarder association with whom we spoke reported that TSA’s staff has been responsive and helpful in answering questions about the program and providing information on CCSP requirements. TSA is taking steps to test technologies for screening air cargo. The 9/11 Commission Act specifies that the permitted methods of air cargo screening are X-ray systems, EDS, ETD, explosives detection canine teams, physical search together with manifest verification, and any additional methods approved by the TSA Administrator. However, TSA is responsible for determining which specific equipment models are authorized for use by industry stakeholders. We reported in March 2009 that TSA and DHS’s S&T Directorate were pilot testing a number of technologies to screen air cargo. For example, to test select screening technologies among CCSFs, TSA created the Air Cargo Screening Technology Pilot in January 2008, and selected some of the nation’s largest freight forwarders to use these technologies and report on their experiences. The screening that pilot participants perform counts toward meeting TSA screening requirements and in turn the air cargo screening mandate. In a separate effort, in July 2009, DHS’s S&T Directorate completed the Air Cargo Explosives Detection Pilot Program that tested the performance of select baggage screening technologies for use in screening air cargo at three U.S. airports. TSA officials stated that the agency will be reviewing the pilot results and conducting additional testing on the technologies identified in the resulting S&T Directorate report. Furthermore, TSA initiated a qualification process to test the technologies that it plans to allow air carriers and CCSP participants to use in meeting the August 2010 screening mandate against TSA technical requirements. 
In November 2008, as an interim measure, TSA issued to air carriers and CCSFs a list of X-ray, ETD, and EDS models that the agency approved for screening air cargo until August 3, 2010, in addition to the canine and physical search screening methods it already permitted. TSA approved these technologies based on its subject matter expertise and the testing and performance of these technologies in the checkpoint and checked baggage environments. In March 2009, TSA initiated a laboratory and field-based qualification testing process to ensure the effectiveness of approved and other technologies in the air cargo environment and to qualify them for use after August 3, 2010. Once the initial stage of the qualification testing process is complete, TSA's policy is to add successful candidates to a list of qualified products for industry stakeholders to use when purchasing technologies. For example, TSA added X-ray technologies to the list of qualified products in October 2009. TSA recommends that industry stakeholders purchase technologies from the list of qualified products because technologies that do not pass the qualification testing process within 36 months of becoming approved are to be removed from the list of products authorized to screen air cargo. After issuing the list of qualified products, TSA plans to conduct additional stages of qualification testing and evaluation to determine the suitability of the screening equipment in an operational setting. During the qualification testing process, TSA expects to utilize the results of the Air Cargo Screening Technology Pilot and conduct additional operational tests independent of the pilot. A description of several programs to test screening technologies for air cargo and their status is included in table 1. TSA expanded its explosives detection canine program. TSA has taken steps to expand the use of TSA-certified explosives detection canine teams. 
According to TSA, in fiscal year 2009, TSA canine teams screened over 145 million pounds of cargo, which represents a small portion of domestic air cargo. As of February 2010, TSA had 113 dedicated air cargo screening canine teams—operating in 20 major airports—and is in the process of adding 7 additional canine teams. TSA worked with air carriers to identify peak cargo delivery times, in order to schedule canine screening during times that would be most helpful to air carriers. TSA also deployed canine teams to assist the Pacific Northwest cherry industry during its peak harvest season from May through July 2009, to help air carriers and CCSFs handling this perishable commodity meet the 50 percent screening requirement without disrupting the flow of commerce. TSA established a system to verify that screening is being conducted at the mandated levels. Under this system, the agency collects and analyzes data from screening entities to verify that requisite screening levels for domestic cargo are being met. Effective February 2009, TSA adjusted air carrier reporting requirements and added CCSF reporting requirements to include monthly screening reports on the number and weight of shipments screened. Based on reporting guidance issued by the agency, air carriers and CCSFs provided TSA the first set of screening data in mid-March 2009, to be used as the basis for TSA's quarterly reports to Congress. Under TSA's current process, screening data are manually reviewed and analyzed to determine whether screening is conducted at the mandated levels. According to TSA officials, the agency plans to transition from a manual process to automated data collection, review, and analysis by mid-2010. Based on these preliminary data, TSA has determined that over 50 percent of air cargo (by weight and number of shipments) transported on domestic passenger aircraft has been screened since the 50 percent requirement went into effect. 
For fiscal year 2009, TSA submitted its 2nd Quarter report, due in May 2009, on October 2, 2009, verifying these screening levels. The 3rd Quarter report, due in August 2009, was submitted on January 7, 2010. The 4th Quarter report, due in November 2009, is undergoing Office of Management and Budget review. TSA is developing a covert testing program to identify security vulnerabilities in the air cargo environment. TSA conducts undercover, or covert, tests that are designed to approximate techniques that terrorists may use in order to identify vulnerabilities in the people, processes, and technologies that make up the aviation security system. TSA officials reported that the agency plans to carry out a covert testing program for the air cargo environment in two phases and will conduct tests at shipper, freight forwarder, and air carrier facilities. Both phases are scheduled to begin in 2010. TSA is in the early stages of developing the testing protocols and thus has not yet established a timetable for their completion. According to TSA officials, the agency plans to use the results of these tests to identify and remedy vulnerabilities in the air cargo system. In addition, TSA operates a risk-based Air Cargo Vulnerability Assessment program to identify weaknesses and potential vulnerabilities in the domestic air cargo supply chain. As of March 2010, TSA has conducted assessments at 33 U.S. airports and completed assessments at all domestic category X airports in December 2009. After completing these assessments, TSA stated that it will utilize the results to refine policy for air cargo security. TSA faces industry participation, oversight, technology, and other challenges, and could benefit from a contingency plan to identify alternatives for meeting the air cargo screening mandate. Although TSA is relying on the voluntary participation of industry stakeholders to meet the screening mandate, some stakeholders have not participated in the program at targeted levels. 
As shown in figure 4, TSA officials have estimated that an ideal mix of screening to achieve the 100 percent mandate as it applies to domestic cargo without impeding the flow of commerce would be about one-third of cargo weight screened by air carriers, one-third by freight forwarders, and one-third by shippers and independent CCSFs. The air carrier portion includes a small amount of screening by TSA canine teams and by TSIs at the smaller category II through IV airports. TSA officials emphasized that this estimated ideal mix is not precise but is intended to illustrate that balanced industry participation is needed to achieve the goals of the program. However, as of March 2010, the percentage of cargo reported as screened by shipper and independent CCSFs remained at 2 percent—far lower than the 33 percent TSA cites as the portion these entities should ideally screen. To achieve TSA’s ideal mix of screening by August 2010, shipper and independent CCSF screening efforts would need to increase by over sixteenfold. Moreover, as shown in figure 4, the total percentage of reported screened cargo rose on average by less than a percentage point per month (from 59 to 68 percent) from February 2009 through March 2010. At these rates, it is questionable whether TSA’s screening system will achieve 100 percent screening of domestic cargo by August 2010 without impeding the flow of commerce. Effective May 1, 2010, TSA requires that 75 percent of air cargo transported on passenger aircraft be screened. However, even if this requirement is met, an additional 25 percent of domestic passenger air cargo would still need to be screened in the 3 months prior to the August 2010 deadline, including some of the most challenging types of cargo to screen, such as ULD pallets and containers. 
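The growth figures cited above follow from simple arithmetic; the following sketch (a minimal check, with variable names of our own choosing and all inputs taken from the report) reproduces the two cited rates:

```python
# Back-of-the-envelope check of the screening-gap figures in the report.
# Inputs are the report's own numbers; variable names are ours.

current_shipper_share = 2.0   # percent of cargo screened by shipper/independent CCSFs (Mar 2010)
ideal_shipper_share = 33.0    # percent TSA cites as the ideal shipper/independent CCSF portion

required_increase = ideal_shipper_share / current_shipper_share
print(f"Shipper/independent CCSF screening must grow {required_increase:.1f}x")  # over sixteenfold

# Overall reported screening rose from 59 percent (Feb 2009) to 68 percent (Mar 2010),
# a span of roughly 13 months.
months = 13
rate_per_month = (68 - 59) / months
print(f"Average monthly gain: {rate_per_month:.2f} percentage points")  # under 1 point per month
```

At the sub-one-point monthly rate, closing the remaining gap to 100 percent by August 2010 would require a sharp departure from the observed trend.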
In March 2010, TSA officials stated that they surveyed current CCSFs and CCSP applicants to estimate these air cargo industry stakeholders’ capacity for screening domestic cargo—this could help predict the industry’s success in meeting the 100 percent screening deadline. According to TSA officials, the survey indicated that current and prospective CCSFs have the potential capacity needed to screen cargo so that short-term delays at only the nation’s 18 major airports will result when the 100 percent deadline goes into effect. However, TSA did not have supporting documentation of the survey’s methodology or results. Thus, we were unable to independently verify TSA’s assertions or the rigor of TSA’s methodology and analysis. For example, it is unclear whether TSA’s survey and estimation took into account cargo that is inherently difficult to screen, such as ULD pallets or containers, or whether it focused primarily on loose cargo that is being screened with relative ease. It is also important to note that having the potential capacity to screen air cargo does not ensure that this capacity will be fully utilized to meet the air cargo screening mandate. In addition, TSA officials stated that they did not develop milestones to monitor CCSP progress because air cargo screening by industry stakeholders is driven by market forces that are beyond the control of the government and are impossible to forecast. Further, according to TSA officials, if the CCSP participants cannot contribute the amount of screening needed to achieve 100 percent screening by the August 2010 deadline, the air carriers will be responsible for screening any remaining unscreened cargo at the airport or ensuring that it does not fly on a passenger aircraft. 
However, according to officials from the two major air carrier industry associations and the one freight forwarder association with whom we spoke, if the volume of cargo is too great for air carriers to handle, it could significantly disrupt the air cargo industry because of delays from cargo screening at the airport and the shift of unscreened cargo to alternate modes of transportation, such as all-cargo aircraft or trucks. Officials from one major air carrier industry association further noted that this would particularly be a problem if the volume of large cargo configurations—such as ULD pallets or containers—that air carriers have to disassemble and screen is too great for air carriers to handle. As discussed earlier, according to TSA officials, these disruptions will be short term and limited to 18 major airports. However, these 18 airports process 65 percent of domestic cargo transported on passenger aircraft, which suggests that disruptions may be substantial. TSA’s rationale for creating the CCSP, and spreading screening responsibilities throughout the supply chain, is to mitigate the risks of these sorts of disruptions. However, these CCSP participation challenges demonstrate that TSA could benefit from developing a contingency plan, as we will discuss later, should it become clear that participation rates are not sufficient to achieve the screening mandate without impeding the flow of commerce. According to TSA officials, some shippers have expressed interest in the CCSP, particularly those in certain industries, such as the pharmaceutical industry, whose cargo would be compromised if opened and screened by others. However, TSA and industry officials reported that several factors, such as lack of economic and regulatory incentives, are contributing to low shipper participation levels. 
For example, TSA and the freight forwarder industry association official with whom we spoke reported that flexibility in applying current TSA screening requirements—such as the ability to screen only easier-to-screen cargo and leave more challenging cargo unscreened—and low cargo volumes have minimized screening-related cargo delays and cargo screening costs. For example, until the 100 percent screening mandate goes into effect in August 2010, air carriers may be able to meet TSA screening requirements by screening mostly loose or break-bulk cargo and not the more challenging and time-consuming cargo, such as ULD pallets and containers or large wooden crates. Officials from the domestic passenger air carrier association and freight forwarder industry association with whom we spoke reported that because of the difficult economic environment and the flexibility stakeholders have in choosing what cargo to screen, most air carriers are not currently charging freight forwarders or shippers for cargo screening in order to attract and retain customers. As a result, TSA and the domestic passenger air carrier and freight forwarder industry association officials we interviewed stated that many shippers and freight forwarders are not incurring significant screening costs from air carriers, which decreases the financial pressure on these entities to join the CCSP and invest resources in screening cargo, factors that are making TSA's outreach efforts more challenging. Moreover, the freight forwarder industry association official with whom we spoke stated that some industry participants may not be able to join the program because of potential program costs. TSA has estimated in the IFR that the total cost of industry participation in the CCSP will be about $2.2 billion over a 10-year period, though the agency has not provided per capita cost estimates for industry. 
The freight forwarder industry association official with whom we spoke reported that the business models of large freight forwarders require them to purchase time-saving screening equipment so that screeners can avoid physically opening and examining each piece of cargo. However, TSA and this industry official agreed that the majority of small freight forwarders—which handle 20 percent of the cargo but make up 80 percent of the total number of freight forwarding companies—would likely find the costs of joining the CCSP, including acquiring expensive technology, hiring additional personnel, conducting additional training, and making facility improvements, prohibitive. Moreover, shippers that distribute products from other companies in addition to or instead of their own manufactured goods might also find it cost prohibitive to join the CCSP if they were required to purchase screening equipment. However, TSA officials stated that most shippers can incorporate physical searches into their packing and shipping processes to satisfy TSA screening requirements, thereby avoiding such capital expenses. TSA established the Air Cargo Screening Technology Pilot program to make some financial reimbursement available to large freight forwarders and independent CCSFs for the technology they have purchased. TSA reported that it targeted high-volume facilities for the pilot—that is, facilities annually processing at least 200 ULDs, or their equivalent weight of approximately 500,000 pounds, of shipments that contain cargo consolidated from multiple shippers—in order to have the greatest effect in helping industry achieve the new screening requirements. As of February 2010, 113 CCSFs had joined the pilot. However, the majority of CCSFs do not ship large enough volumes of consolidated cargo to qualify for the pilot, and thus cannot receive funding for technology or other related costs. 
The freight forwarder industry association official with whom we spoke expressed concerns regarding the cost of purchasing and maintaining screening equipment. In response to concerns of medium and small freight forwarders that they might not be able to join the program because of potential costs, TSA officials stated that the agency is allowing independent CCSFs to join the CCSP and screen cargo on behalf of freight forwarders and shippers. In this scenario, small freight forwarders or shippers would not need to join the CCSP or purchase technology to avoid screening at the airport, but could send their cargo for a fee to an independent CCSF for screening. However, TSA and the freight forwarder industry association official with whom we spoke stated that the independent CCSFs are having difficulties attracting clientele in the current depressed economic environment. According to these officials, shippers and other supply chain participants might use independent CCSFs to screen their cargo once the 100 percent screening requirement goes into effect, if cargo volumes increase before that time or if cargo experiences screening delays. Many of the challenges in attracting industry participation in the CCSP are outside of TSA’s control, and agency officials stated that they are working to raise industry awareness of the benefits of joining the program through TSA’s ongoing outreach efforts. While TSA has amended its Regulatory Activities Plan to include inspections of CCSP participants, the agency has not completed its staffing study to assess its staffing needs and determine how many inspectors will be necessary to provide oversight of the additional program participants when the 100 percent screening mandate goes into effect. TSA’s compliance inspections range from reviews of the implementation of all air cargo security requirements (i.e., comprehensive) to a more frequent review of at least one security requirement (i.e., supplemental). 
TSA recognized that the creation of the CCSP added additional regulated entities to TSI oversight responsibilities, and incorporated additional inspection requirements into the TSI Regulatory Activities Plan. Beginning under the plan for fiscal year 2009, TSIs are to perform compliance inspections of new regulated entities, such as shippers and manufacturers, that voluntarily become CCSFs, as well as new inspections of freight forwarder CCSFs that are in addition to inspections related to their freight forwarder responsibilities. In addition to their pre-CCSP inspection responsibilities, under the plan for fiscal year 2010, TSIs are to conduct at least two comprehensive inspections a year for each CCSF to verify compliance with the program requirements. As of March 2010, TSA had 1,258 TSIs, of which 533 were dedicated cargo TSIs or cargo TSI canine handlers. The agency was authorized 50 new cargo TSI positions in fiscal year 2010 to provide additional oversight of CCSP operations. TSA officials reported that they have developed an interim program-level methodology to allocate these TSIs to airports based on CCSP participation and cargo volume, among other factors, and that they believe they have a sufficient number of inspectors to ensure compliance with the CCSP. However, the agency staffing study, which would determine the resources necessary to provide CCSP oversight, is not yet complete. According to TSA, the agency’s staffing study is continuing through fiscal year 2010 and is therefore not yet available to provide guidance in helping to plan for inspection resources needed to provide oversight. Complicating TSA efforts to determine the level of inspection resources needed is the extent to which market forces will affect CCSP participation and therefore how many additional CCSFs will join the program and thus be subject to TSA’s inspection requirements. As of March 1, 2010, 583 entities had joined the CCSP. 
Given this level of participation, TSA’s TSI workforce must conduct at least 1,166 comprehensive inspections of CCSFs per year. According to our analysis of TSA data, in the next year, inspectors will need to at least double their comprehensive inspections of CCSFs to reach this target. Moreover, according to our analysis of TSA data, approximately one-quarter to one-third of CCSFs have not received a comprehensive inspection. According to TSA officials, CCSFs that have never been inspected are deemed high risk and must be inspected by the following quarter. However, since TSA officials anticipate that CCSP participation will continue to grow, and that there could be a groundswell of CCSP participants as the 100 percent screening deadline approaches, TSIs may be challenged in dealing with the increased inspection workload once the screening mandate goes into effect in August 2010. For example, the IFR stated that about 5,600 entities are expected to join the CCSP. Based on these figures, TSA would be required to conduct 11,200 comprehensive inspections annually. TSA officials questioned the accuracy of this estimate and stated that, for workforce planning purposes, a more realistic near-term estimate for the number of CCSFs TSA is expected to oversee is the number of current CCSFs and CCSP applicants. However, TSA did not provide us with this figure. Moreover, as discussed earlier, TSA does not know how many CCSFs will join the program in the future, and does not plan to estimate the number of CCSP participants needed to meet the 100 percent screening mandate. Without this key information, it will be difficult for TSA to obtain a reasonable estimate of the number of inspectors that will be needed to oversee the CCSP participants—highlighting the need for a staffing study that considers various scenarios. 
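The inspection workload figures above follow directly from the program requirement of at least two comprehensive inspections per CCSF per year; the arithmetic can be sketched as follows. The participation figures (583 current participants as of March 2010 and the IFR's estimate of about 5,600 entities) come from this report; the rest is a minimal illustration, not a TSA planning tool.

```python
# Sketch of the minimum comprehensive-inspection workload implied by
# the "at least two comprehensive inspections per CCSF per year" rule.
# Participation figures (583 current CCSFs; the IFR's 5,600 estimate)
# are from the report; this scenario calculator is illustrative only.

INSPECTIONS_PER_CCSF_PER_YEAR = 2

def annual_inspections(num_ccsfs: int) -> int:
    """Minimum comprehensive inspections TSA must conduct per year."""
    return num_ccsfs * INSPECTIONS_PER_CCSF_PER_YEAR

scenarios = {
    "current participants (Mar 2010)": 583,
    "IFR estimate": 5600,
}

for label, ccsfs in scenarios.items():
    print(f"{label}: {annual_inspections(ccsfs)} inspections/year")
```

Running the two scenarios reproduces the 1,166 and 11,200 figures cited in the report, and intermediate participation levels can be checked the same way.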
In addition, according to TSA data, of the CCSF compliance inspections conducted from February 1, 2009, to December 29, 2009, some resulted in at least one violation of CCSF security requirements—and a percentage of these violations were screening related. Having the resources needed to provide effective oversight will be critical to ensuring that CCSFs are comprehensively inspected soon after being certified, in order to identify violations and provide TSA with some assurance that CCSFs are conducting their new screening activities in accordance with TSA requirements. As we reported in prior work, successful project planning should evaluate staffing implications. Since fiscal year 2008, TSA officials have reported on a planned TSI staffing study, and air cargo program officials stated that this study would include an analysis of the resources necessary to provide CCSP oversight and would incorporate information on the number of CCSFs to be inspected in order to assess workforce needs. Officials stated in March 2010 that the study would continue through fiscal year 2010. However, the agency has not established an estimated completion date or interim milestones (i.e., dates and related tasks) for completion of the study, and officials did not provide an explanation for why this has not yet occurred. Standard practices for program management call for the establishment of time frames and milestones. Creating time frames with milestones could help ensure completion of the staffing study, the results of which should better position TSA to ensure that the inspectors it needs are in place in order to oversee effective CCSF implementation of TSA security requirements. TSA faces challenges related to screening certain types of cargo, qualification testing of technology, and securing the chain of custody. 
Screening Cargo in ULD Pallets and Containers

There is currently no technology approved or qualified by TSA to screen cargo once it is loaded onto a ULD pallet or container—both of which are common means of transporting air cargo on wide-body passenger aircraft. Cargo transported on wide-body passenger aircraft makes up 76 percent of domestic air cargo shipments transported on passenger aircraft. Prior to May 1, 2010, canine screening was the only screening method, other than physical search, approved by TSA to screen such cargo. Canine teams were deployed to 20 airports to assist air carriers with such screening. In addition, the 2009 S&T Directorate technology pilot study reported canine teams to be an effective method of screening ULD pallets and containers, but it identified an urgent need to find other effective methods for screening such cargo because of the shortage of available canine teams. TSA officials, however, still have some concerns about the effectiveness of the canine teams, and effective May 1, 2010, the agency no longer allows canine teams to be used for primary screening of ULD pallets and containers. Instead, TSA allows canines to conduct primary screening of only loose cargo and 48-by-48-inch cargo skids. Canine teams still may be used for secondary screening of ULD pallets and containers; however, secondary screening does not count toward meeting the air cargo screening mandate. TSA officials reported that they have conducted preliminary assessments of technologies that are capable of screening ULD pallets and containers but that commercially available technologies do not exist that effectively detect explosives in the amounts described in TSA standards. TSA officials stated that TSA continues to work with technology vendors on developing technologies that will be able to effectively screen ULD pallets and containers. 
In the interim, TSA officials indicated that the agency is encouraging industry stakeholders through the CCSP to screen such cargo earlier in the supply chain, before cargo is consolidated. However, according to representatives of the two major air carrier industry associations and the one freight forwarder association with whom we spoke, technology to screen consolidated or palletized cargo, including cargo in a ULD, is critical to meeting the 100 percent screening mandate, given that ULDs are a primary means of transporting cargo on passenger aircraft. Moreover, while industry participation in the CCSP may help ensure that screening takes place earlier in the supply chain, which will help alleviate the challenges posed by ULD pallets and containers, to date, far fewer shippers have joined the CCSP than TSA anticipated, and these ULD pallets and containers currently make up about 76 percent of domestic air cargo transported on passenger aircraft, with no efficient method to screen them. These technology challenges suggest the need for TSA to consider alternative approaches to meet the screening mandate without unduly affecting the flow of commerce, as we will discuss later.

TSA Is Working to Qualify Some Air Cargo Screening Technologies

In addition, TSA is working to complete qualification testing of air cargo screening technologies; thus, until all stages of qualification testing are concluded, the agency may not have reasonable assurance that the technologies that air carriers and program participants are currently allowed to use to screen air cargo are effective. Qualification tests are designed to verify that a technology system meets the technical requirements specified by TSA. TSA qualified several X-ray technologies for purchase by industry stakeholders based on initial test results and qualified EDS technologies based on their past performance in other testing processes. 
TSA has not yet qualified ETD and other X-ray technologies that TSA allows program participants to use to screen air cargo. Once these technologies have been added to the list of qualified products, the agency is to conduct additional stages of qualification testing. TSA officials expressed confidence in the initial qualification test results because the commercial off-the-shelf technologies being used for cargo screening have a proven record in the passenger checkpoint and checked baggage environments. However, TSA acknowledged that if the results of additional stages of qualification testing do not meet its technical requirements, these technologies can be removed from the list of qualified products. Furthermore, because of the mandated deadlines, TSA is conducting qualification testing to determine which screening technologies are effective at the same time that air carriers are using these technologies to meet the mandated requirement to screen air cargo transported on passenger aircraft. For example, according to TSA, ETD technology will undergo the initial phase of qualification testing in the air cargo environment in 2010, although many air carriers and CCSFs are currently using it to screen air cargo. Moreover, technology reports and TSA officials disagree as to the effectiveness of ETD technology. For example, a pilot program completed by DHS’s S&T Directorate in July 2009 found that the ability of ETD technology to detect explosive threats in cargo by sampling the external surfaces of cargo shipments for explosive residue— the standard ETD protocol required by TSA—is poor. According to TSA officials, external sampling of cargo shipments is a method of screening preferred by freight forwarders and air carriers in order to avoid opening all cargo pieces, which can result in possible damage to the contents and significantly greater screening time. 
The pilot program recommended further research to evaluate the applicability and efficacy of external sampling using ETD systems, as well as other screening systems, to detect threats, such as explosives, in air cargo. However, TSA officials disputed the findings of this S&T Directorate study. They also stated that other S&T Directorate reports support the acceptance of ETD technology; however, we were unable to review these reports since this information was provided late in our review. The lack of consensus within DHS regarding the effectiveness of ETD technology in the air cargo environment suggests the need for additional study. Although TSA officials stated that simultaneous testing and use of technology by the industry is not ideal, they noted that this was necessary to meet the screening deadlines mandated by the 9/11 Commission Act. While we recognize that certain circumstances, such as mandated deadlines, require expedited deployment of technologies, our prior work has shown that programs with immature technologies have experienced significant cost and schedule growth. For example, we reported in October 2009 that TSA deployed a passenger checkpoint technology—the explosives trace portal (ETP)—to airports without proving its performance in an operational environment, contrary to TSA’s acquisition guidance, which recommends such testing. The agency purchased hundreds of these machines but was forced to halt their deployment because of performance, maintenance, and installation issues. All but 9 ETPs have been withdrawn from airports and 18 remain in inventory. TSA determined that the remainder of the ETPs was excess capacity. Since TSA plans to issue a list of qualified technologies before all stages of qualification testing are complete, the industry lacks assurance that the qualification status of technologies established by TSA for use after August 2010 will not change. 
Further testing could result in modifications to the list of qualified technologies authorized for use after August 3, 2010, and to the list of technologies approved by TSA for use through January 2012. TSA has reserved the option of revising the status of any particular technology or system in the event that its performance in an operational environment indicates that it is losing effectiveness or suitability to an unacceptable degree as it ages or that constantly evolving threats require new detection capabilities. The domestic passenger air carrier and freight forwarder industry association officials with whom we spoke expressed concerns about purchasing technology from lists of approved and qualified technologies that are subject to change. TSA officials stated that the agency is accelerating its testing timeline and the release of the qualification testing results for these technologies to meet the screening deadlines mandated by Congress. For example, TSA originally planned to release the X-ray qualification results after completing all stages of qualification testing. Because of approaching deadlines, however, in December 2009 and based on initial test results, TSA announced the qualification of certain X-ray technologies. It is unclear, however, whether the industry will be able to make informed decisions about technology purchases to meet the screening requirements of the 9/11 Commission Act, given the challenges TSA faces in issuing a list of fully qualified screening technologies.

Securing the Chain of Custody in the Air Cargo Shipping Process

With regard to technology, another area of concern in the transportation of air cargo is the chain of custody between the various entities that handle and screen cargo before it is loaded onto an aircraft. 
TSA officials stated that the agency has taken steps to analyze the chain of custody under the CCSP, and has issued cargo procedures to all entities involved in the CCSP to ensure that the chain of custody of cargo is secure. We found that the procedures issued by TSA to the CCSFs include guidance on when and how to secure cargo with tamper-evident technology, but do not include standards for the types of technologies that should be used. TSA officials noted that they are in the process of compiling a list of existing tamper-evident technologies and their manufacturers. Once the list is complete, TSA plans to test and evaluate these technologies and issue recommendations to the industry. TSA has not yet set any time frames for issuing such recommendations because, according to TSA officials, they need to explore cost-effective technologies first. Securing the chain of custody for cargo screened under the CCSP takes on additional significance in light of the DHS Inspector General’s 2009 report findings that TSA could improve its efforts to secure air cargo during ground handling and transportation. For example, the report determined that industry personnel were accessing, handling, or transporting cargo without the required background checks. In addition, the report stated that TSA’s inspection process has not been effective in ensuring that requirements for securing air cargo during ground transportation are understood or followed. In response to the DHS Inspector General report, TSA concurred with the recommendation to improve the security threat assessment process and stated that the IFR requires recordkeeping for security threat assessments. TSA also concurred with the DHS Inspector General recommendation to revise the Regulatory Activities Plan to allow more time for inspectors to provide support and education to regulated entities to ensure that air cargo security requirements are understood and implemented. 
TSA reported that the fiscal year 2010 Regulatory Activities Plan addresses this concern. While TSA reported to Congress that industry achieved the February 2009 50 percent screening deadline as it applies to domestic cargo, questions exist about the reliability of the screening data, which are self-reported by industry representatives. TSA has been collecting and analyzing data from screening entities, such as air carriers, freight forwarders, and shippers, since March 2009 to verify that domestic screening is being conducted at the requisite levels. As of March 2010 TSA reported that 68 percent of domestic cargo by weight had been screened. After receiving data from screening entities, TSA performs preliminary data quality checks, such as viewing the data to identify missing or duplicate values. However, TSA does not have a mechanism to verify the accuracy of the data reported by the industry. TSA stated that as part of its compliance inspections for air carriers and CCSFs, TSIs check industry screening logs—which include information on how and by whom a specific shipment was screened—to verify that the required screening levels have been met. However, TSIs do not compare these screening logs to the reports that air carriers and CCSFs submit to TSA with their screening levels because according to senior TSA officials, such comparisons would be significantly burdensome to the industry. Specifically, senior TSA officials stated that the air carrier reports do not contain details on specific shipments, thus verification is not feasible. However, senior TSA officials agreed that it is important to verify the accuracy of the data reported by the industry through random checks or other practical means, and that greater coordination among TSA program and compliance officials is necessary to ensure that these checks are taking place. 
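The kind of random check senior TSA officials agreed is important could, for illustration, take the following form: sample a few screening entities and compare the screening percentage implied by their screening logs with the level they self-reported. This is a hypothetical sketch only; the record formats, entity names, and tolerance threshold are invented and do not reflect actual TSA data structures or procedures.

```python
import random

# Hypothetical spot check comparing self-reported screening levels
# with the levels implied by screening logs. All records, names, and
# thresholds below are invented for illustration.

def log_screened_pct(log_entries):
    """Percent of cargo weight that the logs show as screened."""
    total = sum(e["weight"] for e in log_entries)
    screened = sum(e["weight"] for e in log_entries if e["screened"])
    return 100.0 * screened / total if total else 0.0

def spot_check(entities, sample_size, tolerance_pct=5.0, seed=0):
    """Randomly sample entities; flag those whose reported level
    differs from their logs by more than the tolerance."""
    rng = random.Random(seed)
    sample = rng.sample(entities, min(sample_size, len(entities)))
    flagged = []
    for e in sample:
        implied = log_screened_pct(e["log"])
        if abs(implied - e["reported_pct"]) > tolerance_pct:
            flagged.append((e["name"], e["reported_pct"], round(implied, 1)))
    return flagged

entities = [
    {"name": "CCSF-A", "reported_pct": 75.0,
     "log": [{"weight": 60, "screened": True},
             {"weight": 40, "screened": True}]},   # logs imply 100%
    {"name": "CCSF-B", "reported_pct": 80.0,
     "log": [{"weight": 50, "screened": True},
             {"weight": 50, "screened": False}]},  # logs imply 50%
]
print(spot_check(entities, sample_size=2))
```

A check of this shape avoids the shipment-by-shipment reconciliation TSA officials described as burdensome, since only a random sample of entities is examined in any period.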
The Office of Management and Budget’s guidelines for ensuring quality of information call for agencies to develop procedures for reviewing and substantiating the integrity of information before it is disseminated. Given that TSA uses the data submitted by screening entities to verify its compliance with the mandate as it applies to domestic cargo and to report to Congress, ensuring the accuracy of the self-reported data is of particular significance. In order to do this, TSA could, for example, adopt a program similar to CBP’s compliance measurement program, which is used to determine the extent to which importers are in compliance with laws and regulations. As part of this program, CBP conducts regular quality reviews to ensure accuracy in findings and management oversight to validate results. Verifying the accuracy of the self-reported screening data could better position TSA in providing reasonable assurance that screening is being conducted at reported levels.

Cargo that has already been transported on one leg of a passenger flight—known as in-transit cargo—may be subsequently transferred to another passenger flight without undergoing screening. For example, cargo transported on an inbound flight, a significant percentage of which is exempt from screening, can be transferred to a domestic flight without physical screening. According to TSA officials, though the agency does not have a precise figure, industry estimates suggest that about 30 percent of domestic cargo is transferred from an inbound flight. According to TSA officials, the agency had determined that additional screening of this cargo was not required, in part because an actual flight mimics a screening method that until recently was approved for use. A senior TSA official also stated that because in-transit cargo transferred from an inbound flight has flown under a TSA-approved passenger aircraft security program, it is in compliance with TSA screening requirements. 
However, a significant amount of inbound cargo is exempt from screening. In contrast, effective May 1, 2010, TSA’s policies and procedures require that 75 percent of cargo flown on domestic flights be screened. As a result, despite being flown under a TSA-approved security program, in-transit cargo originating in foreign countries is not required to be screened at the same levels as cargo transported on domestic flights. Therefore, TSA lacks assurance that this cargo is being screened in accordance with the screening levels required by the 9/11 Commission Act. In response to our questions as part of this review, TSA officials stated that transporting in-transit cargo without screening could pose a vulnerability, but as of February 2010, the agency was not planning to require in-transit cargo transferred from an inbound flight to be physically screened because of the logistical difficulties associated with screening cargo that is transferred from one flight to another. However, these logistical difficulties could be minimized if more cargo were screened prior to departure from a foreign location. Thus, addressing the potential security vulnerability posed by in-transit cargo is directly linked to TSA’s efforts to secure and screen inbound cargo, which is discussed later in this report. Although TSA officials stated that they plan to explore measures for screening in-transit cargo in the future, these officials did not provide documentation of these measures or information on milestones for their implementation. A successful project plan—such as a plan that would be used to establish such measures—should consider all phases of the project, and clearly state schedules and deadlines. Developing a plan with milestones that addresses how in-transit cargo will be screened in accordance with 9/11 Commission Act requirements could better position TSA to meet the mandate and reduce potential vulnerabilities associated with such cargo. 
Although TSA faces industry participation and technology challenges that could impede the CCSP’s success and the agency’s efforts to meet the 100 percent screening mandate, the agency has not developed a contingency plan that considers alternatives to address these challenges. As discussed earlier, as of December 2009, the percentage of cargo screened by shipper and independent CCSFs remains far lower than the percentage TSA cites as the portion these entities should ideally screen. Without adequate CCSP participation, industry may not be able to screen enough cargo prior to its arrival at the airport to maintain the flow of commerce while meeting the mandate. Likewise, without technology solutions for screening cargo in a ULD pallet or container—which makes up about 76 percent of cargo transported on domestic passenger aircraft—industry may not have the capability to effectively screen 100 percent of air cargo without affecting the flow of commerce. TSA is continuing to work with vendors on developing technology to effectively screen ULD pallets and containers, and in the interim, is encouraging industry stakeholders as part of the CCSP to screen such cargo earlier in the supply chain, before it is loaded onto ULDs, but such actions will not ensure that such cargo is screened. We have previously reported that a comprehensive planning process, including contingency planning, is essential to help an agency meet current and future capacity challenges. Alternatives could include, but are not limited to, mandating CCSP participation for certain members of the air cargo supply chain—instead of relying on their voluntary participation—and requiring the screening of some or all cargo before it is loaded onto ULD pallets and containers. 
Developing a contingency plan that addresses the participation and technology challenges that could impede the screening program’s success, and identifies alternate or additional security measures to implement in case the program is unable to effectively facilitate the screening of sufficient amounts of cargo prior to reaching air carriers at the airport, could better position TSA to meet the requirements in the air cargo screening mandate. With regard to the consideration of alternatives to the CCSP, TSA reported that it considered requiring air carriers to bear the full burden of the screening mandate and also considered creating TSA-operated screening facilities at airports, but determined that both strategies would result in severe disruptions to commerce because of limited airport space for screening. Representatives of the two major air carrier associations with whom we spoke stated that additional TSA screening by canine teams would be helpful, and industry stakeholders have also identified the option of using private companies to provide canine screening in order to expand the number of canines available for screening. According to TSA, the agency is considering whether to pursue this option because of concerns regarding certification of canines that have not been trained by TSA and are not handled by TSA staff. In addition, TSA officials stated that the agency does not plan to provide canine teams as a long-term primary screening method once the CCSP grows and industry develops more capacity to screen cargo, as industry, not the federal government, is responsible for screening air cargo under TSA’s regulations. 
TSA officials also stated that alternative or additional screening measures will not be necessary because unscreened cargo will simply not be transported on passenger aircraft, that is, “will not fly.” Although this approach would ensure that 100 percent of air cargo transported on passenger aircraft is screened, part of TSA’s mission is ensuring the flow of commerce. Not transporting unscreened cargo could place the air cargo transportation industry at risk of experiencing economic disruptions, including shifts of cargo to other modes of transportation, which could negatively affect the air cargo business. In order to help ensure that it fulfills its mission and meets the 9/11 Commission Act mandate, TSA could benefit from identifying alternative measures in a contingency plan, should it become clear that the CCSP will not achieve the screening mandate while maintaining the flow of commerce. TSA has made progress toward meeting the screening mandate as it applies to inbound cargo by taking steps to increase the percentage of inbound air cargo that has undergone screening. However, the agency faces several challenges in ensuring that 100 percent of inbound air cargo is screened, which will prevent it from meeting the mandate by the August 2010 deadline. While TSA is aware that it is unable to meet the screening mandate as it applies to inbound cargo, it has not yet determined when or how it will eventually meet the deadline. TSA has taken several steps to increase the percentage of inbound air cargo being screened. For example, TSA revised its requirements for foreign and U.S. air carrier security programs, effective May 1, 2010, to generally require air carriers to screen a certain percentage of shrink- wrapped and banded inbound cargo and 100 percent of inbound cargo that is not shrink-wrapped or banded. 
According to our analysis of TSA information, shrink-wrapped and banded cargo makes up approximately 96 percent of inbound cargo, which means that a significant percentage of inbound air cargo is not required to be screened. According to TSA, implementation of this requirement will result in the screening of 100 percent of inbound cargo transported on narrow-body aircraft since none of this cargo is shrink-wrapped or banded. Since TSA does not have the same regulatory reach to the supply chain in foreign countries as it does in the United States, it is taking a different approach to implementing the screening mandate as it applies to inbound cargo. This approach focuses on harmonizing its security standards with those of other nations. For example, TSA is working with foreign governments to increase the amount of screened cargo, including working bilaterally with the European Commission (EC) and Canada, and quadrilaterally with the EC, Canada, and Australia. As part of these efforts, TSA recommended to the United Nations’ International Civil Aviation Organization (ICAO) that the next revision of Annex 17 to the Convention of International Civil Aviation include an approach that would allow screening to take place at various points in the air cargo supply chain. According to TSA, ICAO’s Aviation Security Panel met in March 2010 to finalize revisions to Annex 17, including TSA’s proposed revision to add “screening” as a supply chain security concept. TSA has also supported the International Air Transport Association’s (IATA) efforts to establish a secure supply chain approach to screening cargo for its member airlines and IATA’s efforts to have these standards recognized internationally. In addition, TSA is working with CBP to leverage an existing CBP system, known as the Automated Targeting System (ATS), to identify and target elevated-risk inbound air cargo. 
ATS is a model that combines information from inbound cargo manifest lists and entry declaration information into shipment transactions and uses historical and other data to help target cargo shipments for inspection. While CBP currently uses ATS to identify cargo for screening once it arrives in the United States, according to officials, TSA has established a TSA-CBP working group to focus on using ATS to target inbound air cargo for possible screening prior to departure from foreign locations. TSA and CBP officials stated that the working group met regularly since June 2009, though agency officials did not specify how frequently they met. As of February 2010, TSA and CBP officials stated that they were conducting an exercise at Dulles International Airport for TSA to observe CBP’s use of ATS, understand the full capabilities of ATS, and determine whether ATS can assist TSA’s inbound air cargo screening efforts. TSA officials said that they were not in a position to provide time frames for completing the exercise since the effort is in the early stages. Should TSA determine that ATS is effective for targeting the screening of inbound air cargo, TSA plans for air carriers to conduct the screening of shipments identified as elevated risk prior to the cargo’s transport to the United States. The air carriers will also be responsible for providing TSA with the results. In discussing how a system to target certain, elevated-risk shipments for screening will fit into TSA’s overall plans to screen 100 percent of inbound air cargo, officials stated that ATS would provide an additional layer of scrutiny for all cargo entering the United States. To help assess the rigor and quality of foreign screening practices, TSA is also in the process of obtaining information from foreign countries on their respective air cargo screening levels and practices. 
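As a purely hypothetical illustration of rule-based targeting in the spirit of ATS, shipments can be scored against weighted indicators drawn from manifest and entry-declaration data and flagged for screening when the score crosses a threshold. The indicators, weights, and threshold below are invented for illustration; CBP's actual ATS rules and weights are not described in this report and are not public.

```python
# Invented sketch of manifest-based risk targeting: weighted rules
# score each shipment, and shipments at or above a threshold are
# flagged for screening prior to departure. None of these rules,
# weights, or thresholds are actual CBP/TSA values.

RULES = [
    ("first_time_shipper", 3.0),
    ("vague_description", 2.5),
    ("cash_payment", 2.0),
]
THRESHOLD = 4.0

def risk_score(shipment):
    """Sum the weights of all rule attributes present on the shipment."""
    return sum(weight for attr, weight in RULES if shipment.get(attr))

def flag_for_screening(shipments):
    """Return IDs of shipments whose score meets or exceeds the threshold."""
    return [s["id"] for s in shipments if risk_score(s) >= THRESHOLD]

shipments = [
    {"id": "AWB-001", "first_time_shipper": True, "vague_description": True},
    {"id": "AWB-002", "cash_payment": True},
]
print(flag_for_screening(shipments))
```

Under a scheme of this shape, the bulk of shipments would proceed without targeted screening while a small, higher-risk subset is selected, which is consistent with the report's description of ATS as "an additional layer of scrutiny" rather than a mechanism for screening all cargo.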
According to officials, the agency has developed an assessment methodology in a question and answer format to collect information on each foreign country’s air cargo security practices, and it has used the new methodology to collect initial information from one country. TSA has indicated that it will use the methodology to identify key security practices and that the information collected will also help determine if these practices are comparable to TSA requirements, which will provide TSA with details that can help determine how foreign standards align with TSA standards. TSA officials indicated that the methodology used to collect the information is part of a larger process that will involve collecting initial information, analyzing what was received, and submitting additional questions to the foreign countries. TSA anticipates storing the information gathered in a database, which it has not yet created. TSA officials were unable to provide time frames for use of the assessment methodology or completing the database because the effort is in the early stages. While TSA has taken steps to increase the percentage of inbound cargo that has undergone screening, the agency faces several challenges in meeting the mandate. Consequently, TSA has stated that it will not be able to meet the screening mandate as it applies to inbound cargo. For example, in a March 4, 2010, hearing before the Subcommittee on Homeland Security, House Committee on Appropriations, in responding to questions, the Acting TSA Administrator stated that it could take several years before 100 percent of inbound cargo is screened. According to TSA, screening inbound air cargo poses unique challenges, related, in part, to TSA’s limited ability to regulate foreign entities. 
As such, TSA officials stated that the agency is focusing its air cargo screening efforts on domestic cargo and on screening elevated-risk inbound cargo as it works to address the challenges it faces in screening 100 percent of inbound cargo. Inbound air cargo is currently being screened at lower levels than domestic air cargo. For example, while TSA removed almost all its screening exemptions for domestic cargo, TSA requirements continue to exempt from screening a significant amount of shrink-wrapped air cargo transported to the United States, which represents about 96 percent of all inbound cargo. Effective May 1, 2010, TSA requires that a certain percentage of this cargo be screened. In April 2007, we reported that TSA’s screening exemptions for inbound cargo could pose a risk to the air cargo supply chain and recommended that TSA assess whether these exemptions pose an unacceptable vulnerability and, if necessary, address these vulnerabilities. TSA agreed with our recommendation, but beyond expanding its requirement to screen 100 percent of inbound air cargo transported on narrow-body aircraft and a certain percentage of inbound cargo that is shrink-wrapped or placed on banded skids, has not yet reviewed, revised, or eliminated screening exemptions for cargo transported on inbound passenger flights, and did not provide a time frame for doing so. We continue to believe that TSA should assess whether these exemptions pose an unacceptable security risk. TSA officials stated that once the modified ATS is in place, screening exemptions will be less relevant because air carriers will be more able to target the screening of elevated-risk cargo as an interim measure before 100 percent screening is achieved. However, the 9/11 Commission Act requires that all air cargo be physically screened and does not make exceptions for cargo that is not elevated risk. TSA faces challenges in meeting the 100 percent screening mandate as it applies to inbound air cargo. 
For example, although TSA is authorized under U.S. law to ensure that all air carriers, foreign and domestic, operating to, from, or within the United States maintain the security measures included in their TSA-approved security programs and any applicable security directives or emergency amendments issued by TSA, this authority is limited. Also, TSA has no legal jurisdiction over foreign nations. Specifically, TSA has been authorized by Congress to set standards for aviation security, including the authority to require that inbound cargo be screened before it departs for the United States. However, the agency also relies on foreign governments to implement and enforce—including conducting actual screening, in some cases—TSA’s regulatory requirements. Harmonizing TSA regulatory standards with those of foreign governments may be challenging because these efforts are voluntary and some foreign countries do not share the United States’ concerns regarding air cargo security threats and risks. TSA officials caution that if TSA were to impose a strict cargo screening standard on all inbound cargo, many nations likely would be unable to meet such standards in the near term. This raises the prospect of reducing the flow of cargo on passenger aircraft. According to TSA, the effect of imposing such screening standards in the near future could result in increased costs for international passenger travel and for imported goods and possible reduction in passenger traffic and foreign imports. According to TSA officials, this could also undermine TSA’s ongoing cooperative efforts to develop commensurate security systems with international partners. TSA’s ongoing efforts to harmonize security standards with those of foreign nations are essential to achieving progress toward meeting the 100 percent screening mandate as it applies to inbound air cargo. 
Identifying the precise level of screening being conducted on inbound air cargo is difficult because TSA lacks a mechanism to obtain actual data on all screening that is being conducted on inbound air cargo. TSA officials estimate that 55 percent of inbound cargo by weight is currently being screened and that 65 percent of inbound cargo by weight will be screened by August 2010. However, these estimates are based on the current screening requirements of certain countries and are not based on actual data collected from air carriers or other entities, such as foreign governments, on what percentage of cargo is actually being screened. For example, if a country requires that 100 percent of its cargo be screened, as the United Kingdom does, TSA counts all the cargo coming from that country as screened. While TSA officials stated that they discuss screening percentages with foreign government officials, the agency does not conduct any additional data verification to assess whether screening is conducted at, above, or below the required levels. In addition, because TSA’s efforts to complete assessments of other countries’ screening requirements are ongoing, the agency does not always know whether the screening requirements are consistent with TSA standards. The DHS Appropriations Act, 2009, requires TSA to report on the actual screening being conducted, by airport and air carrier. To improve data collection efforts, as of May 2010, TSA requires air carriers to report on their actual screening levels for inbound air cargo, and TSA officials stated that an automated cargo reporting tool would be operational in May 2010 for this purpose. The May 2010 security program changes only require air carriers to report on the screening that they conduct and not on the screening conducted by other entities in the air cargo supply chain to meet the air cargo screening mandate.
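The estimation approach described above, which counts all cargo from a country as screened at whatever level that country's own rules require, amounts to a simple weight-based calculation rather than a measurement from carrier data. A minimal sketch of that calculation, using hypothetical country weights and required levels (not TSA figures):

```python
# Sketch of a weight-based screening estimate: cargo from each country is
# counted as screened at that country's *required* level, not at a level
# verified from actual air carrier data. All figures are hypothetical.

def estimate_screened_share(countries):
    """countries: list of (inbound_weight, required_screening_fraction)."""
    total_weight = sum(weight for weight, _ in countries)
    screened_weight = sum(weight * frac for weight, frac in countries)
    return screened_weight / total_weight

inbound = [
    (50_000, 1.0),  # country requiring 100% screening (treated like the UK)
    (30_000, 0.5),  # country requiring 50% screening
    (20_000, 0.0),  # country with no screening requirement on record
]

share = estimate_screened_share(inbound)
print(f"Estimated inbound cargo screened: {share:.0%}")
```

Because the method trusts the required level, a country that screens below (or above) its own requirement is still counted at the requirement, which is exactly why the report distinguishes these estimates from actual data.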
TSA officials stated that it may be challenging to obtain screening data from some foreign governments and other entities that conduct cargo screening. As such, TSA officials also stated that the agency may still use estimates, such as the current screening requirements of certain countries, when reporting data to Congress. Officials could not provide information on milestones or time frames for obtaining actual screening data for all inbound screening, including that conducted by air carriers and other entities in the air cargo supply chain, because the agency is still working to overcome inbound regulatory challenges. However, establishing time frames for implementing a plan is consistent with standard practices for program management. Finalizing a plan to obtain actual screening data could help TSA obtain greater assurance that mandated screening levels are being met. TSA has not yet determined how it will meet the screening mandate as it applies to inbound air cargo. Although TSA has taken steps to increase the percentage of inbound cargo transported on passenger aircraft that is screened, the agency has not developed a plan, including milestones, for meeting the mandate as it applies to inbound cargo. While TSA officials have stated that the agency does not expect to meet the mandate as it applies to inbound cargo by the August 2010 deadline, TSA has not provided estimates of when the mandate will be met or when steps toward its achievement will be completed. Moreover, the steps that the agency is taking to enhance inbound air cargo security do not fully support the 100 percent cargo screening mandate. For example, TSA is focusing on developing its ability to utilize CBP’s ATS to target elevated-risk cargo for screening. While we recognize this as a reasonable step to strengthen inbound air cargo security, TSA does not have a plan that articulates how this and other steps it is taking will fit together to achieve 100 percent screening.
The 9/11 Commission Act requires the establishment of a system to screen 100 percent of cargo transported on passenger aircraft, including inbound cargo. As we have reported in our prior work, a successful project plan— such as a plan that would be used to establish such a system—should consider all phases of the project and clearly state schedules and deadlines. TSA reported that it is unable to identify a timeline for meeting the mandate for inbound cargo, stating that its efforts are long term, given the extensive work it must conduct with foreign governments and associations. However, interim milestones could help the agency provide reasonable assurance to Congress that it is taking steps to meet the mandate as it applies to inbound cargo. A plan that considers all phases of the project and clearly states schedules and deadlines could help position TSA to better measure progress it is making toward meeting the 9/11 Commission Act mandate as it relates to inbound air cargo and provide reasonable assurance that its efforts are implemented in a relatively timely manner. Meeting the August 2010 mandate to establish a system to physically screen 100 percent of air cargo transported on passenger aircraft is a daunting task. In August 2010, unscreened cargo will not be allowed to fly on passenger aircraft, but leaving behind such cargo could affect the flow of commerce. Although the CCSP should help TSA meet the mandate as it applies to domestic cargo, addressing certain challenges could strengthen agency efforts and help ensure the CCSP’s success. For example, TSA might benefit from developing a contingency plan should it become clear that participation levels are not sufficient to achieve the screening mandate without disruptions to the flow of commerce. 
Establishing milestones for completion of a staffing study to determine the number of inspectors needed to oversee CCSP participants could provide results that should better position TSA to obtain these inspection resources and help ensure that air carriers and CCSFs comply with TSA requirements. Moreover, the technology challenges TSA faces in screening cargo once it is loaded onto ULD pallets and containers highlight the need for a contingency plan in the event that industry stakeholders do not have the capacity to screen such air cargo. In addition, verifying industry-reported screening data could better position TSA in providing reasonable assurance that screening is being conducted at reported levels. Furthermore, developing a plan and milestones for screening in-transit cargo, which is not currently required to undergo physical screening, could help ensure that such cargo is screened in accordance with 9/11 Commission Act requirements and mitigate a risk to the air cargo transportation system. Developing a contingency plan that considers additional or alternative security measures will better position TSA to meet the mandate without disrupting the flow of commerce should it become clear that the challenges related to CCSP participation and screening technology will hinder the agency’s efforts. With regard to inbound air cargo, while TSA has taken some positive steps to increase the percentage of cargo that is screened, the agency could better address the challenges to screening this cargo. For example, finalizing its plans to obtain actual screening data for all inbound cargo screening, including time frames and milestones, could provide greater assurance that mandated screening levels are being met. 
In addition, determining how it will meet the screening mandate as it applies to inbound air cargo, including related milestones, could better position TSA in providing reasonable assurance that the agency is making progress toward meeting the screening mandate in a timely manner. To enhance efforts to secure the air cargo transportation system and establish a system to screen 100 percent of air cargo transported on passenger aircraft, we are recommending that the Administrator of TSA take the following five actions: (1) establish milestones for the completion of TSA’s staffing study to assist in determining the resources necessary to provide CCSP oversight; (2) develop a mechanism to verify the accuracy of all screening data, both self-reported domestic data and inbound cargo data, through random checks or other practical means, and, for inbound air cargo, complete the agency’s plan to obtain actual data, rather than estimates, for all inbound screening, including establishing time frames and milestones for completion of the plan; (3) develop a plan, with milestones, for how and when the agency intends to require the screening of in-transit cargo; (4) develop a contingency plan for meeting the mandate as it applies to domestic cargo that considers alternatives to address potential CCSP participation shortfalls and screening technology limitations; and (5) develop a plan, with milestones, for how and when the agency intends to meet the mandate as it applies to inbound cargo. We provided a draft of our report to DHS and TSA on May 19, 2010, for review and comment. On June 23, 2010, DHS provided written comments from the department and TSA, which are reprinted in appendix I. In commenting on our report, TSA stated that it concurred with three recommendations, concurred in part with one recommendation, and did not concur with another recommendation. For the recommendations for which TSA concurred or concurred in part, the agency identified actions taken or planned to implement them.
Although TSA concurred with part of our second recommendation, the actions the agency reported taking do not fully address the intent of this recommendation. Regarding our first recommendation that TSA establish milestones for the completion of its staffing study to assist in determining the resources necessary to provide CCSP oversight, TSA concurred. TSA stated that as part of the staffing study, the agency is working to develop a model to identify the number of required TSIs and that this effort would be completed in the fall of 2010. If this model includes an analysis of the resources needed to provide CCSP oversight under various scenarios, it will address the intent of our recommendation. TSA concurred in part with our second recommendation that the agency develop a mechanism to verify the accuracy of domestic and inbound screening data, including obtaining actual data on all inbound screening. TSA concurred with the need to capture data for inbound cargo and stated that as of May 1, 2010, the agency issued changes to air carriers’ standard security programs that require air carriers to report inbound cargo screening data to TSA. However, as noted in this report, these requirements apply to air carriers and the screening that they conduct and not to the screening conducted by other entities, such as foreign governments. Thus, TSA will continue to rely in part on estimates to report inbound cargo screening levels. We recognize that it may be challenging for TSA to obtain cargo screening data from foreign governments; however, the agency could require air carriers to report on cargo screening for all inbound cargo they transport, including the screening conducted by foreign governments or other entities. This would be similar to air carriers’ domestic cargo screening reporting requirements, which require air carriers to report on cargo that they screen as well as cargo screened by CCSFs.
We continue to believe that it is important for TSA to obtain data for all screening conducted on inbound cargo so that it can provide assurance to Congress that this cargo is being screened in accordance with the 9/11 Commission Act screening mandate. TSA stated that verifying the accuracy of domestic screening data will continue to be a challenge because there is no means to cross-reference local screening logs—which include screening information on specific shipments—with screening reports submitted by air carriers to TSA that do not contain such information. We acknowledge TSA’s potential challenges in cross-referencing screening logs with screening reports and have modified the report to reflect this challenge. However, as noted in this report, TSA could consider a quality review mechanism similar to the compliance measurement program used by CBP, which includes regular quality reviews to ensure accuracy in findings and management oversight to validate results. TSA could also develop another mechanism for verifying the accuracy of the screening data through random checks—other than those of the screening logs—or other practical means. Doing so would address the intent of our recommendation. Given that the agency uses these data to report to Congress its compliance with the screening mandate as it applies to domestic cargo, we continue to believe that verifying the accuracy of the screening data is important so that TSA will be better positioned to provide reasonable assurance that screening is being conducted at reported levels. TSA concurred with our third recommendation that TSA develop a plan for how and when the agency intends to require the screening of in-transit cargo. TSA stated that the agency has implemented changes, effective August 1, 2010, that will require 100 percent of in-transit cargo to be screened unless it can otherwise be verified as screened.
TSA’s action is an important step toward addressing the potential security vulnerability associated with in-transit cargo and if implemented effectively, will address the intent of our recommendation. Because this is a significant change and potentially operationally challenging, it will be important to closely monitor the industry’s understanding and implementation of this requirement to help ensure that 100 percent screening of in-transit cargo is being conducted. TSA did not concur with our fourth recommendation to develop a contingency plan for meeting the mandate as it applies to domestic cargo that considers alternatives to address potential CCSP participation shortfalls and screening technology limitations. TSA stated that a contingency plan is unnecessary since effective August 1, 2010, 100 percent of domestic cargo transported on passenger aircraft will be required to be screened. The agency also stated that there is no feasible contingency plan that can be implemented by TSA that does not compromise security or create disparities in the availability of screening resources. However, the agency noted that several alternatives are available to and are currently being exercised by industry. The agency also stated that TSA developed the CCSP in collaboration with industry stakeholders to alleviate the burden on airlines to screen 100 percent of cargo while still meeting the mandate. We disagree that a contingency plan is unnecessary and unfeasible. As noted in this report, although TSA’s approach would ensure that 100 percent of domestic cargo transported on passenger aircraft is screened, not transporting unscreened cargo could negatively affect the flow of commerce. 
In addition, while we recognize the CCSP as a positive and critical step toward achieving the screening mandate as it applies to domestic cargo, we continue to believe that there are feasible alternatives that TSA should consider to address potential CCSP participation shortfalls and screening technology limitations. Such alternatives discussed in this report include mandating CCSP participation for certain members of the air cargo supply chain and requiring the screening of some or all cargo before it is loaded onto ULD pallets and containers. Effective May 1, 2010, TSA embraced one of the alternatives cited in this report by requiring freight forwarder CCSFs to screen all cargo before it is loaded onto ULD pallets and containers. Expanding this requirement to additional industry stakeholders could be a feasible alternative to address both CCSP participation shortfalls and screening technology limitations. Moreover, although many industry stakeholders may support the CCSP, key partners in the program—shippers—have not joined the program at the levels targeted by TSA, thus jeopardizing its success. Therefore, we continue to believe that it is prudent that TSA consider developing a contingency plan for meeting the air cargo screening mandate without disrupting the flow of commerce. Finally, in regard to our fifth recommendation that TSA develop a plan for how and when the agency intends to meet the mandate as it applies to inbound cargo, TSA concurred and stated that TSA is drafting milestones as part of a plan that will generally require air carriers to conduct 100 percent screening by a specific date. If implemented effectively, this plan will address the intent of our recommendation. 
In addition, DHS noted in its written comments that CCSFs have reported to TSA that they have the capacity to screen nearly the entire remaining unscreened cargo volume and that air carriers have reported to TSA that they do not anticipate any major disruptions to the transport of air cargo in August 2010. We were not able to verify these assertions because TSA did not provide supporting documentation. It is also important to note that having the potential capacity to screen air cargo does not ensure that this screening will take place when the 100 percent screening mandate goes into effect in August 2010. TSA also provided us with technical comments, which we considered and incorporated in the report where appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 2 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, interested congressional committees, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report or wish to discuss these matters further, please contact me at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II. In addition to the contact named above, Steve D. Morris, Assistant Director, and Rebecca Kuhlmann Taylor, Analyst-in-Charge, managed this review. Scott M. Behen, Erin C. Henderson, Elke Kolodinski, Linda S. Miller, Matthew Pahl, and Yanina Golburt Samuels made significant contributions to the work. David K. Hooper and Thomas Lombardi provided legal support. Stanley J. Kostyla assisted with design and methodology. Pille Anvelt and Tina Cheng helped develop the report’s graphics. John W. Cooney, Elizabeth C. Dunn, Richard B.
Hung, Brendan Kretzschmar, and Amelia B. Shachoy also provided support.
Billions of pounds of cargo are transported on U.S. passenger flights annually. The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) is the primary federal agency responsible for securing the air cargo system. The 9/11 Commission Act of 2007 mandated DHS to establish a system to screen 100 percent of cargo flown on passenger aircraft by August 2010. As requested, GAO reviewed TSA's progress in meeting the act's screening mandate, and any related challenges it faces for both domestic (cargo transported within and from the United States) and inbound cargo (cargo bound for the United States). GAO reviewed TSA's policies and procedures, interviewed TSA officials and air cargo industry stakeholders, and conducted site visits at five U.S. airports, selected based on size, among other factors. TSA has made progress in meeting the air cargo screening mandate as it applies to domestic cargo, but faces challenges in doing so that highlight the need for a contingency plan. TSA has, for example, increased required domestic cargo screening levels from 50 percent in February 2009 to 75 percent in May 2010, increased the amount of cargo subject to screening by eliminating many domestic screening exemptions, created a voluntary program to allow screening to take place at various points in the air cargo supply chain, conducted outreach to familiarize industry stakeholders with screening requirements, and tested air cargo screening technologies. However, TSA faces several challenges in developing and implementing a system to screen 100 percent of domestic air cargo, and it is questionable, based on reported screening rates, whether 100 percent of such cargo will be screened by August 2010 without impeding the flow of commerce. For example, shipper participation in the voluntary screening program has been lower than targeted by TSA. 
In addition, TSA has not completed a staffing study to determine the number of inspectors needed to oversee the screening program. Because it is unclear how many industry stakeholders will join the program, TSA could benefit from establishing milestones to complete a staffing study to help ensure that it has the resources it needs under different scenarios. Moreover, TSA faces technology challenges that could affect its ability to meet the screening mandate. Among these, there is no technology approved by TSA to screen large pallets or containers of cargo, which suggests the need for alternative approaches to screening such cargo. TSA also does not verify the self-reported data submitted by screening participants. Several of these challenges suggest the need for a contingency plan, in case the agency's current initiatives are not successful in meeting the mandate without impeding the flow of commerce. However, TSA has not developed such a plan. Addressing these issues could better position TSA to meet the mandate. TSA has made some progress in meeting the screening mandate as it applies to inbound cargo by taking steps to increase the percentage of screened inbound cargo--including working to understand the screening standards of other nations and coordinating with them to mutually strengthen their respective security efforts. Nevertheless, challenges remain and TSA does not expect to achieve 100 percent screening of inbound air cargo by the mandated August 2010 deadline. TSA officials believe that air carriers are meeting the current mandated screening level of 50 percent of inbound cargo, but this belief is based on estimates rather than on the actual data required by law. Thus, TSA cannot verify if mandated screening levels are being met. In addition, the agency has not determined how it will eventually meet the screening mandate as it applies to inbound cargo; developing such a plan could better position TSA to assess its progress toward meeting the mandate.
The Social Security Act of 1935 authorized the SSA to establish a record-keeping system to help manage the Social Security program and resulted in the creation of the SSN. SSA uses the SSN as a means to track workers’ earnings and eligibility for Social Security benefits. Through a process known as enumeration, each eligible person receives a unique number, which SSA uses for recording workers’ employment history and Social Security benefits. SSNs are routinely issued to U.S. citizens, and they are also available to noncitizens lawfully admitted to the United States with permission to work. Lawfully admitted noncitizens who lack DHS work authorization may qualify for an SSN for nonwork purposes when a federal, state, or local law requires that they have an SSN to obtain a particular welfare benefit or service. In this case, the Social Security card notes that the SSN is “Not Valid for Employment.” As of 2003, SSA had assigned slightly more than 7 million nonwork SSNs. Over the years, SSA has tightened the requirements for assigning nonwork SSNs. In 1986, Congress passed the Immigration Reform and Control Act (IRCA), which made it illegal for individuals and entities to knowingly hire and continue to employ unauthorized workers. The act established a two-pronged approach for helping to limit the employment of unauthorized workers: (1) an employment verification process through which employers are to verify newly hired workers’ employment eligibility and (2) a sanctions program for fining employers who do not comply with the act. Under the employment verification process, workers and employers must complete the Employment Eligibility Verification Form (Form I-9) to certify that the workers are authorized to work in the United States. Those employers who do not follow the verification process can be sanctioned. SSA has two types of data useful for identifying unauthorized work—individual Social Security records and earnings reports.
Its individual records, which include name, date of birth, and SSN, among other things, can be used to verify that a worker is providing the SSN that was assigned to a person of that name. These records are used in verification services that are available free of charge to employers on a voluntary basis. SSA’s earnings reports could also be used to identify some unauthorized work by reporting noncitizens who may have worked without authorization and employers who have a history of providing SSN/name combinations that do not match SSA records. SSA uses individual Social Security records in its Employee Verification Service (EVS) and the Web-based SSN Verification Service (SSNVS), which employers can use to assure themselves that the names and SSNs of their workers match SSA’s records. The services, designed to ensure accurate employer wage reporting, are offered free of charge. Employer use is voluntary. Although these systems only confirm whether submitted names and SSNs match, they could help employers identify workers who provide an SSN with fictitious information. Over the years, SSA has developed several different verification methods under EVS. For example, employers may submit lists of workers’ names and SSNs by mail on a variety of media, such as magnetic tapes or diskettes. Alternatively, employers may call a toll-free number or present a hard-copy list via fax, mail, or hand delivery to a local SSA office. SSA verifies the information received from employers by comparing it with information in its own records. SSA then advises the employer whether worker names and SSNs match. EVS offers the benefit of verifying name and SSN combinations for a company’s entire payroll. However, the system would not be able to detect a worker’s misuse of another person’s name and SSN as long as the name and SSN matched. Employers do not widely use this service. In an attempt to make verification more attractive to employers, in 2005, SSA implemented the Web-based SSNVS. 
It is designed to respond to employer requests within 24 hours. Requests of up to 10 worker names and SSNs can be verified instantaneously. Larger requests of up to 250,000 names can be submitted in a batch file, and SSA will provide results by the next business day. While this new system is attracting more employer interest, it is still not widely used. SSA also uses its records in a work eligibility verification system developed by DHS called the Basic Pilot, which offers electronic verification of work authorization for newly hired workers. Use of this program by employers is also voluntary, and the service has been available nationwide only since December 2004. Employers who agree to participate must electronically verify the status of all newly hired workers within 3 days of hire, using information that a new hire is required to provide. Under this program, an employer electronically sends worker data through DHS to SSA to check the validity of the SSN, name, date of birth, and citizenship provided by the worker. SSA records are used to confirm information on citizens. For noncitizens, SSA confirms SSN, name, and date of birth, then refers the request to DHS to verify work authorization status against DHS’s automated records. If DHS cannot verify work authorization status for the submitted name and SSN electronically, the query is referred to a DHS field office for additional research by immigration status verifiers. If SSA is unable to verify the SSN, name, and date of birth or DHS record searches cannot verify work authorization, a tentative nonconfirmation response is transmitted to the employer. After checking the accuracy of the information and resubmitting the information, if necessary, the employer must advise the worker of the finding and refer him or her to either DHS or SSA to correct the problem. 
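The Basic Pilot flow described above can be sketched as a short decision procedure: SSA checks the submitted SSN, name, and date of birth; citizens are confirmed from SSA records alone, noncitizens whose SSA data match are referred to DHS for a work-authorization check, and anything unresolved produces a tentative nonconfirmation. The record stores and status strings below are simplified, fabricated stand-ins for the real SSA and DHS systems, used only to illustrate the routing logic:

```python
# Simplified sketch of the Basic Pilot routing logic. Real queries go to
# SSA and DHS databases, with unresolved cases referred to DHS field
# offices; all records and identifiers here are fabricated examples.

SSA_DB = {  # ssn -> (name, date of birth, citizen?)
    "111-22-3333": ("ANA GARCIA", "1980-04-02", True),
    "444-55-6666": ("WEI CHEN", "1975-09-15", False),
}
DHS_WORK_AUTH = {"444-55-6666"}  # noncitizen SSNs with DHS work authorization

def basic_pilot_check(ssn, name, dob):
    record = SSA_DB.get(ssn)
    # SSA first verifies SSN, name, and date of birth.
    if record is None or record[0] != name.upper() or record[1] != dob:
        return "TENTATIVE NONCONFIRMATION"
    # Citizens are confirmed from SSA records alone.
    if record[2]:
        return "EMPLOYMENT AUTHORIZED"
    # Noncitizens are referred to DHS for work-authorization status.
    if ssn in DHS_WORK_AUTH:
        return "EMPLOYMENT AUTHORIZED"
    return "TENTATIVE NONCONFIRMATION"

for ssn, name, dob in [("111-22-3333", "Ana Garcia", "1980-04-02"),
                       ("444-55-6666", "Wei Chen", "1975-09-15"),
                       ("999-99-9999", "No Match", "2000-01-01")]:
    print(ssn, basic_pilot_check(ssn, name, dob))
```

As the surrounding text notes, a tentative nonconfirmation is not final: the worker is referred to SSA or DHS to correct the underlying record, and only an uncontested case becomes a final nonconfirmation.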
During this time, employers are not to take any adverse actions against those workers related to verification, such as limiting their work assignments or pay. When workers do not contest their tentative nonconfirmations within the allotted time, the Basic Pilot program issues a final nonconfirmation. Employers are required to either immediately terminate employment or notify DHS of their continued employment. Like SSA’s verification services, the Basic Pilot is voluntary and is not widely utilized. As of January 2006, about 5,500 businesses nationwide had registered to participate, although a significantly smaller number of these are active users. Active participants have made about 4.7 million initial verification requests over a 5-year period (981,000 requests were made in fiscal year 2005). DHS reported on actions taken to address weaknesses in the program that had been identified during the early years of the program. They included delays in updating immigration records, erroneous nonconfirmations, and program software that was not user friendly. We subsequently reported on additional challenges, specifically, the capacity constraints of the system, its inability to detect identity fraud, and the fact that the program is limited to verifying work authorization of newly hired workers. SSA’s earnings records can also provide information on unauthorized work. There are two sets of data that are relevant to unauthorized work. The first set, the Nonwork Alien File, contains earnings reports for SSNs that were issued for nonwork purposes. The second set, the Earnings Suspense File, contains earnings reports in which the name and SSN do not match. Both could help identify some unauthorized work. SSA is required by law to provide its Nonwork Alien File to DHS since it suggests a group of people who are in the United States legally but may be working without authorization. 
Since 1998, SSA has provided DHS annual data on over half a million persons with earnings listed under nonwork SSNs. The file includes annual earnings amounts, worker names and addresses, and employer names and addresses. DHS has found this file to be of little use for enforcement activities, however. According to DHS officials, the file is currently not an effective tool for worksite enforcement, due in part to inaccuracies in the data and the absence of some information that would help the department efficiently target its enforcement. In fact, because SSA updates work authorization status only at the request of the SSN holder, individuals in the file may now be U.S. citizens or otherwise legal workers who simply have not updated their status with SSA. Our ongoing work in this area suggests that a number of these records are indeed associated with people who later obtained permission to work from DHS. SSA policy is to update work authorization status when the SSN holder informs the agency of the status change and provides supporting documentation. Unless the individual informs SSA directly of the status change, SSA’s enumeration records will continue to show the person as unauthorized to work, and the agency will continue to post his or her earnings to the Nonwork Alien File. Currently, the extent to which such noncitizens are included in the file is unknown, but SSA and DHS officials have both acknowledged that the file may include a number of people who are currently authorized to work. DHS officials said that the file would be of greater value if it contained DHS’s identifying numbers—referred to as alien registration numbers. According to DHS officials, because persons in the file do not have an identifier in common use by both agencies, they cannot automatically be matched with DHS records. 
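Lacking a shared identifier, any automated match between the two agencies' files must fall back on exact comparison of the fields they do share. A toy sketch (all records below are entirely hypothetical) shows how easily that approach fails:

```python
# Toy illustration of matching records on name and date of birth when no
# common identifier (such as an alien registration number) is available.
# All records are hypothetical.

def naive_match(rec_a, rec_b):
    """Exact comparison on name and birth date, the only shared fields."""
    return rec_a["name"] == rec_b["name"] and rec_a["dob"] == rec_b["dob"]

# The same person fails to match after a name change
# ("ANA RUIZ" vs. "ANA RUIZ-LOPEZ"), or when the month and day
# are transposed in one record ("1980-03-04" vs. "1980-04-03").
```

A shared identifier would reduce this problem to a single keyed lookup, which is why DHS officials consider the file far more useful with alien registration numbers included.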
As a result, DHS officials told us that they use names and birth dates to match the records, which can result in mismatches because names can change and numbers in birth dates may be transposed. SSA officials have said that generally they do not collect alien registration numbers from noncitizens. Collecting the alien registration number and providing it in the Nonwork Alien File is possible, they stated, but would require modifications to SSA’s information systems and procedures. They also noted that SSA would be able to collect the alien registration number only when noncitizens are assigned an SSN or when such an individual updates his or her record. As part of its procedures, SSA is required to verify the immigration status of noncitizens before assigning them an SSN, which requires using alien registration numbers. However, some noncitizens, such as those with temporary visas (e.g., students), may not have an alien registration number. In these cases, SSA would not be able to include the number in the Nonwork Alien File. The time it takes SSA to validate earnings reports and convey the Nonwork Alien File to DHS also makes the file less effective for worksite enforcement. When SSA finishes its various processes to ensure that the file includes the appropriate data, the reported earnings can be up to 2 years old. By that time, many of the noncitizens included in the file may have changed employers, relocated, or changed their immigration status, resulting in out-of-date data on individuals or ineffective leads for DHS agents. A DHS official told us that if the Nonwork Alien File were to contain industry codes for the reporting employers, DHS could target those in industries considered critical for homeland security purposes, which would be consistent with DHS’s mission and enforcement priorities. 
Having information about the industries the employers are in would help DHS better link the data to areas of high enforcement priority, such as airports, power plants, and military bases. Another SSA earnings file, referred to as the Earnings Suspense File (ESF), contains earnings reports in which the name and SSN do not match SSA’s records, suggesting employer or worker error or, potentially, identity theft and unauthorized work. We have reported that this file, which contained 246 million records as of November 2004, appears to include an increasing number of records associated with unauthorized work. SSA’s Office of the Inspector General has used the ESF to identify employers who have a history of providing names and SSNs that do not match. When SSA encounters earnings reports with names and SSNs that do not match, it makes various attempts to correct them using over 20 automated processes. However, about 4 percent of all earnings reports still remain unmatched and are electronically placed in the ESF, where SSA uses additional automated and manual processes to continue to identify valid records. Forty-three percent of employers associated with earnings reports in the ESF are from only 5 of the 83 broad industry categories, with eating and drinking establishments and construction being the top categories. A small portion of employers also accounts for a disproportionate number of ESF reports. For example, only about 8,900 employers—0.2 percent of all employers with reports recorded in the ESF for tax years 1985-2000—submitted over 30 percent of the reports we analyzed. Our past work has documented that individuals who worked prior to obtaining work authorization are a growing source of the unmatched earnings reports in the ESF that are later reinstated to a worker’s account. Once workers obtain a valid SSN, they can provide SSA evidence of earnings reports representing unauthorized employment that occurred before they received their SSN. 
Such earnings reports can then be used to determine a worker’s eligibility for benefits. DHS officials believe that the ESF could be useful for targeting its limited worksite enforcement resources. For example, they could use the ESF to identify employers who provide large numbers of invalid SSNs or names and SSNs that do not match. They told us that these employers may knowingly hire unauthorized workers with no SSN or fraudulent SSNs and that employers who are knowingly reporting incorrect information about their workers might also be involved in illegal activities involving unauthorized workers. However, it is not clear that the ESF, which is much larger than the Nonwork Alien File, would be manageable or allow for targeted enforcement. The ESF contains hundreds of millions of records, many unrelated to unauthorized work, making it difficult to use for targeting limited resources. While the ESF may help identify some of the most egregious employers of unauthorized workers, in terms of poor earnings reporting, its focus is not on unauthorized workers. Our work has shown that most of the reinstatements from the file belong to U.S.-born citizens, not to unauthorized workers. In addition, because the ESF contains privileged taxpayer data, SSA cannot share this information with DHS without specific legislative authorization. SSA’s Office of the Inspector General has recommended that SSA seek legislative authority to share this data with DHS, but SSA responded that it is beyond the agency’s purview to advance legislation to amend the Internal Revenue Code in order to allow DHS access to tax return information. IRS officials have also expressed concern that sharing this data could decrease tax collections and compliance. We are examining the usefulness of SSA data to DHS for these subcommittees, and will consider ESF issues as part of this work. Improving the usefulness of the data could help ensure that limited enforcement resources are targeted effectively. 
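The employer-targeting idea above amounts to a simple concentration analysis: rank employers by the volume of suspect reports and measure how much of the file a small share of them accounts for. The sketch below is purely illustrative; the data layout and function are hypothetical and reflect no actual SSA or DHS tooling.

```python
# Hypothetical concentration analysis over suspect earnings reports.
# Nothing here reflects actual SSA or DHS systems or data.
from collections import Counter

def top_employer_share(reports, top_fraction):
    """Fraction of reports submitted by the top `top_fraction` of employers."""
    counts = Counter(r["employer_id"] for r in reports)
    ranked = sorted(counts.values(), reverse=True)
    # Number of employers in the top slice (at least one).
    k = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:k]) / len(reports)
```

On the figures the testimony cites, roughly 0.2 percent of employers submitted over 30 percent of the ESF reports analyzed, so even a short ranked list of employers could focus enforcement attention.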
SSA data could help identify areas of unauthorized work, but closer collaboration among SSA, IRS, and DHS can help to ensure that the most useful data are available in a form that can be used efficiently for enforcement. Under the current data-sharing arrangement, DHS officials believe the agency would have to invest significant resources to determine whether employers it targets are really hiring persons who are not work authorized. DHS has stated that determining which nonwork SSN holders are now authorized to work may not be cost-effective and would pull resources from other national security-related initiatives. Neither SSA nor DHS is able to easily and quickly update work status because they lack a common identifier for their records. Updating status without a common identifier may not be practical because different spellings or name variations confound large-scale matching efforts. For example, an August 2005 report from the SSA’s Office of the Inspector General highlights a substantial proportion of cases in which names were inconsistent between SSA and DHS. In at least six reports in recent years, SSA’s Office of the Inspector General has recommended or mentioned prior recommendations that SSA work with DHS to update information about work authorization. SSA officials maintain that it is their policy to make changes to the Social Security record only if the SSN holder initiates the changes and provides evidentiary documents from DHS. SSA further states that a “resolution of the discrepant information between DHS and SSA would require more than a simple verification.” Despite the many problems with the data, there are steps that could be taken to improve them. For example, the employers who submit the most earnings reports for nonwork SSNs might be good candidates for outreach and education about verifying work eligibility. SSA’s Office of the Inspector General officials suggested that DHS send letters to employers of persons with nonwork SSNs. 
These letters could encourage persons listed as having nonwork SSNs, who are now authorized to work, to update their records. The ESF also has the potential to provide useful information to DHS, but this information has protected tax status. Although some of the same difficulties that pertain to the Nonwork Alien File could also affect the usefulness of the ESF to DHS enforcement efforts, if these challenges could be overcome, authorizing transmittal of at least some of the ESF information to DHS might be warranted. Producing accurate, useful data will require substantial continued effort on the part of SSA, DHS, and the IRS; these efforts will be of little value, however, if the data are not used for enforcement and to stimulate changes in employer and employee behavior. We have reported previously that the IRS program of employer penalties is weak because of limited requirements on employers to verify and report accurate worker names and SSNs; we have recommended that IRS consider strengthening employer requirements, a course that could over time improve the accuracy of wage data reported to SSA. We have also reported that, consistent with DHS’s primary mission in the post-September 11 environment, DHS enforcement resources have focused mainly on critical infrastructure industries in preference to general worksite enforcement. In such circumstances, coordination to leverage usable and useful SSA data is essential to ensure that limited DHS worksite enforcement resources are targeted effectively. The federal government likely can make use of information it already has to better support enforcement of immigration, work authorization, and tax laws. The Earnings Suspense and the Nonwork Alien files have potential, but even the best information will not make a difference if the relevant federal agencies do not have credible enforcement programs. 
In fact, sharing earnings data to identify potential unauthorized workers could unnecessarily disclose sensitive taxpayer information if the data are not utilized by enforcement programs. To address unauthorized work more meaningfully, IRS, DHS and SSA need to work together to improve employer reporting, develop more usable and useful data sets for suspicious earnings reports, and better target limited enforcement resources. We look forward to contributing to this endeavor as we continue to conduct our work on using SSA data to help reduce unauthorized work. This concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Barbara Bovbjerg at (202) 512-7215. Other key contributors to this statement were Blake Ainsworth, Assistant Director; Lara Laufer, Analyst-in-Charge; Beverly Crawford; Susan Bernstein; Michael Brostek; Rebecca Gambler; Jason Holsclaw; Daniel Schwimer; Richard Stana; Vanessa Taylor; Walter Vance; and Paul Wright. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To lawfully work in the United States, individuals must have a valid Social Security number (SSN) and, if they are not citizens, authorization to work from the Department of Homeland Security (DHS). Noncitizens seeking work must provide both an SSN and evidence of work authorization to their employer. Yet individuals without these required authorizations have gained employment with false information. Identifying or preventing such unauthorized work is a challenge for the federal agencies involved. Congress asked GAO to discuss how federal agencies can better share reported earnings data to identify unauthorized work. Specifically, this testimony addresses two issues: (1) the Social Security data that could help identify unauthorized employment and (2) coordination among certain federal agencies to improve the accuracy and usefulness of such data. The Social Security Administration (SSA) has two types of data that could be useful in reducing unauthorized work--individual Social Security records and earnings reports. Individual Social Security records, which include name, date of birth, and SSN, are used by SSA to provide verification services to employers wishing to assure themselves that the names and SSNs of their workers match SSA's records. SSA also uses Social Security records in a work authorization verification system developed by DHS called the Basic Pilot that offers electronic verification of worker status. These services are voluntary, and none are widely used by employers. SSA's earnings records provide additional information, which could be used as an enforcement tool to identify unauthorized work. Currently, SSA uses such records to produce two relevant files: the Nonwork Alien File and the Earnings Suspense File (ESF). The Nonwork Alien File contains earnings information posted to SSNs issued for nonwork purposes, suggesting that these individuals are working without authorization. 
The ESF contains earnings reports for which SSA is unable to match the name and SSN of the worker, suggesting employer error, SSN misuse, or unauthorized work activity. In addition, we have reported that the ESF, which contained roughly 250 million records as of December 2004, appears to include an increasing number of records associated with probable unauthorized work, but because of statutory constraints, the ESF is not available to DHS as an enforcement tool. Improving the usefulness of SSA data could help identify unauthorized work and ensure that limited enforcement resources are targeted effectively. Ensuring that the most useful data are available requires close coordination among the three federal agencies involved in collecting and using the data--SSA, the Internal Revenue Service (IRS), and DHS. We have previously recommended that IRS work with DHS and SSA as it considers strengthening its employer wage reporting regulations, as such action could improve the accuracy of reported wage data, and that DHS, with SSA, determine how best to use such wage data to identify potential illegal work activity. Efforts to improve data will only make a difference, however, if agencies work together to improve employer reporting and ensure they can conduct effective worksite enforcement programs.
Defense has long operated multiple telecommunications systems to meet an array of mission needs, ranging from the command and control of military forces to its payroll and logistics support functions. Because military services and other Defense agencies independently procured and operated their own networks, Defense’s communications environment has been fragmented and redundant. To improve the effectiveness and efficiencies of its military communications services, Defense began in 1991 to plan and implement DISN to serve as the Department’s primary worldwide telecommunications and information transfer network to support national security and defense operations. Defense’s DISN strategy focuses on replacing its older data communications systems, using emerging technologies and cost-effective acquisition strategies that provide secure and interoperable voice, data, video, and imagery communications services in support of military operations. Under Defense’s DISN concept, the military services and Defense agencies will still be responsible for acquiring telecommunications services for their local bases and installations, as well as deployed communications networks. DISA will be responsible for acquiring the long-haul services that will interconnect these base-level and deployed networks within and between the continental United States, Europe, and the Pacific. DISA’s current efforts focus on acquiring and implementing DISN CONUS services. For 10 years, Defense users obtained switched voice, data, video teleconferencing, and transmission services within the United States through the Defense Commercial Telecommunications Network (DCTN) contract with AT&T. The DCTN contract expired in February 1996. Since then, these services have been provided through a follow-on, sole-source DISN Transition Contract (DTC) with AT&T until Defense can fully implement its new DISN services. Defense estimates that DTC costs are approximately $18.5 million per month. 
In July 1995, we reported on Defense’s efforts to plan and implement DISN. At that time, we recommended that Defense ensure that DISN plans and program decisions were based on a validated statement of DISN’s operational requirements. By defining the minimal acceptable requirements for DISN as well as the critical technical characteristics, the operational requirements document would provide the basis for determining DISN’s effectiveness. We also recommended that Defense develop an estimate of the acquisition, operations, maintenance, and support costs for DISN over its life-cycle. While Defense concurred with these recommendations, it has not yet completed either action. Nevertheless, given the expiration of its DCTN contract in February 1996, and its desire to limit the term of the sole-source DISN Transition Contract, DISA is proceeding with its DISN implementation efforts and has issued four RFPs supporting DISN’s implementation:

- DISN Support Services - Global, to provide engineering, operations, network management, and other support services worldwide.
- DISN Switched/Bandwidth Manager Services - Continental United States (CONUS), to provide the capability to switch network traffic and provide bandwidth manager devices at designated service delivery points within the continental United States.
- DISN Transmission Services - CONUS, to provide access transmission services and transmission services connecting the bandwidth managers and switches provided under the switched/bandwidth manager contract, and to connect Defense installations with the DISN network.
- DISN Video Services - Global, to provide worldwide video teleconferencing through three video network hubs located in the continental United States.

The timetable for receipt of proposals and contract awards is shown in table 1. 
DISA awarded the support services contract to Boeing Information Services, Inc., in June 1996, and awarded the switched/bandwidth manager services contract to MCI Corporation in August 1996. The evaluation of these proposals and subsequent contract awards addressed four factors: cost, technical, management, and past performance. DISA plans to award the video services contract on the same basis. Because transmission is a basic commodity service, Defense advised that it intends to award the transmission services contract primarily on the basis of lowest price. Defense plans full implementation of its DISN system within the continental United States by July 1997. The switched/bandwidth manager, transmission services, and video services acquisitions were subject to a bid protest in December 1995 by AT&T, which was adjudicated by the General Accounting Office (GAO). In this protest, AT&T argued that DISA arbitrarily refused to allow offerors to submit and have evaluated a single, comprehensive proposal, what AT&T termed an “integrated bid,” as an alternative to submitting individual proposals under each RFP. GAO’s decision, issued on May 1, 1996, upheld the legality of the acquisition strategy that DISA has followed. To obtain information about Defense’s acquisition strategy, and the steps taken by Defense in determining and selecting that strategy, we obtained and analyzed copies of the DISN solicitations from DISA staff in the Washington, D.C., area. We analyzed studies prepared by DISA staff during April and May 1995 that identified and evaluated DISN acquisition alternatives. We reviewed Defense’s DISN architecture and were briefed on steps taken to develop the DISN design by engineering staff at DISA’s Joint Interoperability and Engineering Organization, Center for Systems Engineering, in Reston, Virginia. In addition, in conducting our review, we used supporting documentation from our bid protest decision. 
To obtain information about the specific evaluation methods and factors used to select a DISN acquisition strategy, we interviewed several DISA officials including the DISN Program Manager and the DISN Contracting Officer in Arlington, Virginia. Our review was conducted from August 1996 through October 1996 in accordance with generally accepted government auditing standards. In developing its DISN acquisition approach, Defense considered several acquisition alternatives in April and May 1995 including one—using a single contractor to furnish a comprehensive set of services to the government—that is similar to the integrated approach that AT&T had advocated. Defense also evaluated the costs and benefits of separately acquiring component services with the government integrating those components itself, and other alternative approaches as well. In reviewing Defense’s analyses of alternatives, we found that Defense evaluated the advantages and disadvantages of each acquisition alternative in terms of relative cost and how it (1) met DISN requirements, (2) facilitated technology insertion and enhancement, (3) could be implemented within schedule constraints, and (4) supported Defense’s control of the network. DISA selected an acquisition strategy that divided the acquisition into four components with four separately awarded contracts. Under this plan, DISA, with the assistance of the support services contractor, would acquire, integrate, operate, and maintain the separate DISN components rather than employ a comprehensive service provider to integrate and operate DISN. Defense believed that breaking the program into functional components facilitated control over network interoperability, integration, surge capacity, technology insertion, and security. It also concluded that by breaking the program into pieces, more vendors could bid for contracts, thus increasing competition. 
Further, in Defense’s view, multiple contracts with frequent options made it easier to negotiate technological upgrades, and created incentives for vendors to maintain high standards of performance. Finally, Defense believed that the strategy encouraged vendors to offer their lowest prices on each separate contract instead of just offering prices that were averaged across the entire network. After issuing solicitations to implement this strategy, Defense received comments from industry contending that vendors could offer significant economies if they could submit one comprehensive, or integrated, bid for all of the business offered under the switched/bandwidth manager, transmission, and video services RFPs. Defense responded with an approach which staggers contract awards such that a vendor who wins the switched/bandwidth manager contract can use any economies that might accrue to its advantage when bidding for the remaining contracts. According to DISA, this approach enables the government to reap the potential cost savings of an integrated bid while maintaining maximum flexibility for cost-effective technical enhancements and continuing competition over the life of the program. Defense believes that it has selected the acquisition strategy that will yield the best value to the government over the course of DISN’s life cycle. However, Defense lacks the baseline information needed for us to ensure whether this is the case. We recommended in July 1995 that Defense ensure that the DISN approach was based on valid operational requirements and that it identify the additional life-cycle acquisition, maintenance, and support costs that would be incurred in developing and operating DISN. In making these recommendations, we concluded that without this important information, Defense would lack a starting point for ensuring that DISN facilities and services effectively and efficiently met their requirements. 
While Defense concurred with our recommendations, it has not fully implemented them. Given the current advanced state of the DISN acquisition and the need to replace the high-cost transition contract, we are not questioning the need to continue to move forward with DISN. However, Defense still needs this baseline information to gauge the performance of DISN as it is being implemented. DISN program officials in DISA and staff from the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence have told us that DISN’s requirements are known and documented because they are based on the requirements developed for Defense’s current communications systems. However, we believe that the operational requirements in the existing systems are not valid for DISN because they do not consider several important factors. First, with the growth of worldwide telecommunications networks such as the Internet, the information warfare threat to Defense, and thus the need for security requirements, has significantly increased in the past decade. For example, we recently reported that Defense may have experienced as many as 250,000 computer attacks last year and that Defense estimates that these attacks are successful 65 percent of the time. We also reported that the number of attacks is doubling each year, as Internet use increases along with the sophistication of computer attackers and their tools. Second, since the new strategy calls for diversifying contractors, integration risks are significantly higher than those accompanying the previous contract and system management is much more complex. Third, users now have greater expectations for network services as telecommunications technology has made significant strides in recent years. Taken together, these changes clearly demonstrate the need for Defense to document and validate with DISN users the operational requirements for the new strategy. 
By better establishing its operational requirements and life cycle costs for DISN, Defense would lay the groundwork for assessing whether the system is meeting its cost and performance goals. The next step would be to develop effective measures for tracking DISN’s progress against this baseline cost and performance information. Defense has not yet established any performance measures that would allow it to track whether DISN is meeting its objectives. Since Defense plans to begin implementing DISN CONUS in less than 8 months, the absence of these measures raises concerns that the Department will not be able to effectively manage DISN’s implementation and operation. Establishing good performance measures is not only critical because of the risks confronting the DISN program, it is central to the success of any significant information system undertaking. We have previously reported, for example, that successful organizations rely heavily upon performance measures to achieve mission goals and objectives, quantify problems, evaluate alternatives, allocate resources, track progress, and learn from mistakes. For service-oriented programs such as DISN, these may include such measures as the percent of mission improvements resulting from the new service in terms of cost, time, quality, and quantity; the percent of customers satisfied with certain telecommunications services; or the number of problems resolved within target times. Once the right measures are chosen, they help management target problem areas, highlight successes, and generally increase the rate of performance improvement through enhanced learning. Further, several statutory requirements call for Defense to define cost, schedule, and performance goals for major defense acquisition programs and for each phase of the acquisition cycle of such programs. These include the Federal Acquisition Streamlining Act (FASA) of 1994 and the recently enacted Clinger-Cohen Act of 1996. 
The requirement to establish program cost estimates and performance measures of operational effectiveness are also embodied in Defense acquisition guidance. At present, Defense is far from meeting any of these requirements. For example, even basic objectives, such as DISN’s ability to provide its users with the needed quality and volume of communications services, have not been validated by users and lack evaluation criteria upon which to measure success. Without this type of information, Defense has no way of knowing whether it will be spending billions of dollars acquiring, operating, and maintaining DISN facilities and services that efficiently and effectively meet its needs. Defense is striving to fully implement its DISN CONUS system by July 1997. However, it has yet to establish the basic cost and performance baseline information critical to laying the groundwork for assessing DISN’s success. We continue to believe that Defense should expeditiously implement our previous recommendation to develop and document DISN operational requirements and to identify DISN life cycle costs. In addition, Defense has not established performance measures that would determine how the implementation of this multibillion dollar initiative measures up to its cost and operational goals. Establishing such measures now for DISN would markedly improve DOD’s and the Congress’ ability to manage and oversee implementation of this system by providing the basis for independent analysis and evaluation. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence to establish the objective measures needed to gauge DISN’s success. At a minimum, these measures should include the concerns of DISN customers and should correspond to the five factors—requirements, technology enhancement, schedule, management, and cost—that DISA used to select its acquisition strategy. 
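One of the example measures mentioned above, the share of problems resolved within target times, is straightforward to compute once the underlying data are tracked against a documented baseline. The sketch below is purely illustrative and assumes nothing about actual DISN systems or data.

```python
# Illustrative computation of one service performance measure: the percent
# of problem tickets resolved within a target time. Data are hypothetical.

def pct_resolved_within_target(resolution_hours, target_hours):
    """Percent of problem tickets resolved within `target_hours`."""
    if not resolution_hours:
        return 0.0
    met = sum(1 for hours in resolution_hours if hours <= target_hours)
    return 100.0 * met / len(resolution_hours)
```

Tracked over time against baseline cost and performance information, measures of this kind would let Defense and the Congress see whether DISN is meeting its goals.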
We obtained written comments on a draft of this report and have incorporated those comments where appropriate. These comments are presented in appendix I. In commenting on the draft report, Defense concurred with our recommendation. We are encouraged that Defense intends to develop cost estimates and performance measures for major DISN components from this point forward. It is likewise important that Defense does so for the DISN-CONUS component currently being implemented. As stated in our report, these actions are critical in order for Defense to have an objective cost and performance baseline for measuring the success of this acquisition. As agreed with your office, we will send copies of this report to the Ranking Minority Member of the Senate Committee on Governmental Affairs, Chairman and Ranking Minority Member of the House Committee on Government Reform and Oversight, other interested congressional committees, the Secretary of Defense, and the Director of the Office of Management and Budget. Copies will be sent to others upon request. Please contact me at (202) 512-6240 if you or your staff have any questions. Major contributors to this report are listed in appendix II. Linda D. Koontz, Associate Director Franklin W. Deffer, Assistant Director Kevin E. Conway, Senior Information Systems Analyst Mary T. Marshall, Information Systems Analyst Cristina T. Chaplain, Communications Analyst
Pursuant to congressional request, GAO reviewed the steps taken by the Department of Defense (DOD) in selecting and implementing its acquisition strategy for the Defense Information System Network (DISN) Continental United States (CONUS), focusing on whether: (1) DOD considered alternative approaches, such as use of an integrated bid, in its selection of an acquisition strategy; and (2) the selected acquisition strategy will yield the best value to the government over DISN's life-cycle. GAO found that: (1) DOD considered several options prior to selecting an acquisition strategy for DISN, including an approach that would have involved using a single comprehensive service provider to furnish an integrated set of services to the government and another one that involved separately acquiring component services with the government integrating those components itself; (2) DOD considered the advantages and disadvantages of each option in terms of five factors: requirements; technology enhancement; schedule; management; and cost; (3) after evaluating its options and receiving industry comments on its draft request for proposals, DOD ultimately decided on an approach that calls for the Defense Information Systems Agency (DISA) to separately acquire and integrate component services itself, using contracts awarded on a staggered schedule; (4) DOD believes that this strategy will best meet national security needs at a reasonable cost; (5) in reviewing DOD's DISN efforts in 1995, GAO reported that DOD had yet to define the program's minimal acceptable requirements; (6) GAO also reported that DOD had not yet developed an estimate of what it would cost to acquire, operate, and sustain the DISN infrastructure; (7) without this information, DOD has no objective cost and performance baseline for measuring DISN's success; (8) without this baseline, GAO cannot determine whether the selected acquisition strategy will yield the best value to the government over the course of DISN's life 
cycle, which is estimated to be over 10 years; (9) once this baseline is developed, DOD must also establish effective measures for tracking DISN's progress; (10) at present, DOD is far from meeting federal requirements for establishing performance measures; and (11) by developing measures that focus on benefits, costs, and risks, DOD management can target problem areas, highlight successes, and ensure DISN meets its cost and performance goals.
From fiscal year 2000 to fiscal year 2007, agencies were to meet the energy goals established by two executive orders and a statute as shown in figure 2. Using energy data that agencies submit, DOE reports to Congress on agencies’ performance toward meeting these energy goals. According to DOE, for fiscal year 2007, the buildings subject to these energy goals consumed approximately one-third of the energy consumed by the federal government as a whole. Federal buildings obtain this energy from a number of different energy types, as shown in figure 3. According to 2007 national data from DOE’s Energy Information Administration, electricity generation consists of coal (49 percent), natural gas (21 percent), nuclear electric power (19 percent), hydroelectric power (6 percent), and other (5 percent). Carbon dioxide and certain other gases trap some of the sun’s heat in the earth’s atmosphere and prevent it from returning to space. The trapped heat warms the earth’s climate, much like the process that occurs in a greenhouse. Hence, the gases that cause this effect are often referred to as greenhouse gases. Fuel types vary in the amount of greenhouse gases that they emit. For example, the burning of coal and oil emits greater quantities of greenhouse gases during energy use than other fossil fuels, such as natural gas. Renewable energy is produced from sources that cannot be depleted and, unlike fossil fuels, most renewable sources do not directly emit greenhouse gases. According to draft data agencies provide to DOE, most of the 22 federal agencies reporting in fiscal year 2007 met the energy efficiency, greenhouse gas emission, and renewable energy goals. Some agencies used credits to meet the goals and would not have met the goals through reductions in energy intensity alone. 
Figure 4 shows the energy consumed, measured at the site where it is consumed rather than the source of the energy, in buildings that are subject to the energy goals, for 10 agencies with the highest energy consumption, in addition to the other 12 agencies reporting to DOE in fiscal year 2007. The other 12 agencies consumed a combined total of only about 4 percent of total site-delivered energy. Energy efficiency. As figure 5 shows, all but one agency met the 2007 energy efficiency goal laid out in E.O. 13423, which calls for a 6 percent reduction in energy intensity from a 2003 baseline. Among the agencies held to the goal, only the Railroad Retirement Board missed it, reducing energy intensity by 5.8 percent from its 2003 baseline. The Environmental Protection Agency (EPA) reduced energy intensity by 63.8 percent from a 2003 baseline, which was the largest reduction among the agencies. As a whole, the 22 agencies met the energy efficiency goal, with agencies cumulatively reducing energy intensity by 11 percent from 2003 levels. Use of credits for the purchase of renewable energy and source energy was common among agencies in 2007. USPS was the only agency that did not use any credits. Of the 21 agencies that used credits, 3 that met the energy efficiency goal with the credits would not have met the goal without them. EPA achieved the greatest percentage of its energy intensity reduction using credits—81.2 percent of its overall reduction in energy intensity came from the use of credits—representing about 5 percent of the total credits the federal government used. In contrast, about a third of DOD’s reduction in energy intensity came from credits, but this reduction accounted for over half of all the credits the federal government used because DOD is overwhelmingly the largest consumer of energy in the government. Almost one-third of the total reduction in energy intensity reported by agencies is attributable to the use of credits. 
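The credit arithmetic described above can be sketched with a small numerical example. The intensity values and credit amounts below are illustrative assumptions, not actual agency data from DOE's reports.

```python
# Hypothetical sketch of how credits contribute to a reported reduction in
# energy intensity. All figures below are illustrative assumptions.

baseline_intensity = 100_000   # Btu per gross square foot, 2003 baseline
metered_intensity = 96_000     # fiscal year 2007 intensity before credits
credit_offset = 3_000          # credits expressed in Btu per square foot

reported_intensity = metered_intensity - credit_offset
total_reduction = baseline_intensity - reported_intensity
share_from_credits = credit_offset / total_reduction

print(f"Reported reduction: {total_reduction / baseline_intensity:.1%}")
print(f"Share of reduction attributable to credits: {share_from_credits:.1%}")
```

With these illustrative numbers, the agency reports a 7 percent reduction against its baseline, but about 43 percent of that reduction comes from credits rather than from metered decreases in energy use.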
Most agencies—21 of 22—used renewable energy purchase credits in fiscal year 2007. Five agencies also used source energy credits. For all agencies, renewable energy purchase credits accounted for about two-thirds of all credits used. Both types of credits were established under E.O. 13123. Source credits were aimed at helping the federal government reduce total energy use at the source of generation. According to DOE, renewable energy purchase credits were established to support the renewable energy industry. Although the credits were established to support federal energy policies, they do not reflect actual decreases in energy intensity. Greenhouse gas emissions. The same 21 of 22 agencies met the 2007 greenhouse gas emissions goal under E.O. 13423, which holds agencies to the same standard as the energy efficiency goal—a 6 percent reduction in energy intensity from a 2003 baseline. The same renewable energy purchase and source energy credits that count toward the energy efficiency goal also count toward the greenhouse gas emissions goal. Renewable energy. Seventeen of the 22 agencies met the fiscal year 2007 renewable energy goal, as figure 6 shows. This goal requires that at least 3 percent of total electric energy consumption come from renewable energy sources, with at least half of the required renewable energy an agency consumes coming from resources put into service after January 1, 1999. The departments of Health and Human Services, Justice, and State; the Social Security Administration; and USPS missed the goal. EPA achieved the greatest percentage of total electric consumption from renewable sources, with 153.5 percent. EPA was able to count more than 100 percent of its electricity consumption as renewable because it bought renewable energy certificates that exceeded the electricity it used, and because it received a small bonus for renewable energy generated on federal or Indian land.
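A minimal sketch of how a reported renewable share can exceed 100 percent follows. The quantities are hypothetical, and the doubled credit for generation on federal or Indian land is an assumption made here for illustration.

```python
# Hypothetical sketch of how an agency can report more than 100 percent of
# its electricity as renewable. Quantities are illustrative; the doubled
# credit for generation on federal or Indian land is an assumption here.

electricity_used_mwh = 100_000
recs_purchased_mwh = 140_000   # renewable energy certificates bought
onsite_federal_mwh = 5_000     # renewable generation on federal land
bonus_multiplier = 2           # assumed bonus for qualifying generation

renewable_counted = recs_purchased_mwh + onsite_federal_mwh * bonus_multiplier
renewable_share = renewable_counted / electricity_used_mwh

print(f"Reported renewable share: {renewable_share:.1%}")
```

Because certificate purchases are not capped by actual consumption, the counted renewable total (150,000 MWh here) can exceed the electricity actually used.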
As a whole, the federal government met the renewable energy goal, with 4.9 percent of its electricity use coming from renewable sources and at least half of this energy coming from newer renewable sources; only about 3 percent of the renewable energy total is attributable to bonuses. Determining the extent to which agencies have made progress toward the goals over time is problematic due to key changes in the goals—as specified in statute and executive order—and how performance is measured. Performance can be compared across years when the way a goal is measured remains unchanged. After substantial change, however, there is no consistent measure against which to compare long-term progress toward the goals. Energy efficiency. Key changes in the energy efficiency goal since 2005 illustrate the difficulty in making comparisons. As figure 7 shows, EPAct 2005 made key changes in both building categories and baseline years, and also changed the percentage reduction and the year by which agencies should have reduced energy intensity by that percentage. These key changes make it problematic to compare agency performance against the goal before and after EPAct 2005 took effect. Although all but 1 of 22 agencies met the single energy efficiency goal in 2007 for buildings subject to the goal, according to draft DOE data, this performance cannot be directly compared with performance in 2005. In that year, only 8 of 17 agencies met the goal for standard buildings and 8 of 12 agencies met the goal for industrial and laboratory buildings. The difficulty in comparing agency performance against the goal resulted mainly from the key changes in building categories and baselines. The change from two building categories—standard and industrial and laboratory—to only one category changed the total square footage included in the energy intensity calculation.
Data on NASA’s performance against the energy efficiency goal in 2005 and 2007 show the difficulty in gauging progress after a key change to a goal. In 2005, the agency met the standard building goal by reducing energy intensity for those buildings by 30.4 percent against a 1985 baseline, exceeding the goal of 30 percent. It missed the industrial and laboratory building goal, reducing energy intensity for those buildings by 16.1 percent against a 1990 baseline, short of the goal of 20 percent. In 2007, NASA exceeded the goal for all buildings subject to the goal by reducing energy intensity by 17.6 percent against a 2003 baseline, well over the goal of a 6 percent reduction. However, because of changes in the baseline year and building categories, NASA’s performance against the goal in 2007 cannot be directly compared with its performance in 2005 or earlier. While we focused on how changes to measurement of the energy efficiency goal make assessing progress toward meeting the goal problematic, DOE also maintains actual energy intensity data for reporting agencies dating back to 1985. According to the data, agencies decreased energy intensity in all their buildings from 1985 to 2007 by approximately 14.3 percent. However, these data do not reflect the evolution of the energy efficiency goal during that period. For example, buildings that are excluded under the executive orders and EPAct 2005 are included in these totals. Greenhouse gas emissions. Similar comparative difficulties show up in examining progress toward the goal of reducing greenhouse gas emissions. Before 2007, under E.O. 13123, the goal called for reducing the amount of emissions by 30 percent by 2010 compared to a 1990 baseline. E.O. 13423 significantly changed how the federal government measures progress toward this goal. Now, progress on greenhouse gas emissions is measured using energy intensity against a 2003 baseline. Figure 8 shows the details of these changes.
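The baseline-comparability problem illustrated by the NASA example can be made concrete with a small numerical sketch. The intensity values below are hypothetical, not NASA's or any agency's actual data.

```python
# Hypothetical illustration: the same 2007 performance looks very different
# depending on which baseline year the goal uses. Intensities are in
# Btu per gross square foot and are purely illustrative.

intensity = {1985: 120_000, 1990: 110_000, 2003: 95_000, 2007: 88_000}

def reduction(baseline_year: int, report_year: int) -> float:
    """Fractional reduction in energy intensity from a baseline year."""
    base = intensity[baseline_year]
    return (base - intensity[report_year]) / base

print(f"2007 against a 1985 baseline: {reduction(1985, 2007):.1%}")
print(f"2007 against a 2003 baseline: {reduction(2003, 2007):.1%}")
```

The same 2007 intensity shows a 26.7 percent reduction against the 1985 baseline but only a 7.4 percent reduction against the 2003 baseline, which is why performance before and after a baseline change cannot be directly compared.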
Performance against the greenhouse gas emissions goal may be compared from 2000 to 2006, when E.O. 13123 remained in place and the goal was measured in the same way. However, the key change in E.O. 13423 from greenhouse gas emissions to energy intensity means that it is problematic to compare agency performance in 2007—when all but 1 agency met the greenhouse gas emissions goal—with performance in 2005—when only 7 of 21 agencies were on track to meet the goal. For example, VA actually increased its greenhouse gas emissions in 2005 by 20.3 percent from its 1990 level, and was far from meeting the greenhouse gas emissions goal of a 30 percent reduction by 2010. In 2007, however, it met the emissions goal because it exceeded the energy efficiency goal. E.O. 13423 states that agencies are to reduce greenhouse gas emissions by reducing energy intensity. However, a reduction in energy intensity does not track directly with lower greenhouse gas emissions for two reasons. First, if an agency’s energy consumption increases but square footage increases at a greater rate, then energy intensity is reduced while greenhouse gas emissions will increase, assuming all else remains unchanged. Second, the level of greenhouse gas emissions depends on the type of fuel used to generate energy. However, energy intensity does not account for different fuel types. Rates of carbon intensity vary by energy type per Btu delivered, especially for electricity, depending on whether it is generated from a fossil fuel, nuclear, or renewable source. Consequently, if an agency’s square footage and energy consumption remain constant but the agency switches to sources that emit more greenhouse gases, such as switching from natural gas to coal, its energy intensity remains constant while greenhouse gas emissions increase. Conversely, switching from fossil-generated electricity to renewable electricity virtually eliminates greenhouse gas emissions. Although E.O. 
13423 changed the measure for greenhouse gas emissions, DOE still estimates and reports greenhouse gas emissions by considering the sources used to produce energy and agency energy consumption. The imperfect relationship between energy intensity and greenhouse gas emissions shows up in DOE data: we found cases in which energy intensity decreased over time, but greenhouse gas emissions increased. According to draft DOE data, at the Department of Commerce, for example, from 2003 to 2007, energy intensity decreased by 22.3 percent while greenhouse gas emissions increased by 2.4 percent. Similarly, the National Archives and Records Administration’s energy intensity decreased by 18.7 percent over the period but greenhouse gas emissions increased by 21.5 percent. Although the National Archives and Records Administration’s and the Department of Commerce’s greenhouse gas emissions increased while energy intensity decreased, mostly attributable to increases in square footage of their building inventories, for the government as a whole greenhouse gas emissions decreased by 9.4 percent from 2003 to 2007 while energy intensity decreased by 11 percent. It is not clear why the administration changed from an absolute emissions measure to one tied to energy intensity. When we asked about using energy intensity as a proxy for greenhouse gases, an official with OFEE told us that it is the administration’s policy not to tie greenhouse gas emissions to a specific measure. Rather, it is the administration’s policy to encourage agencies to voluntarily partner with other groups to reduce emissions, and the administration believes emissions will decline without a quantifiable goal. Although energy intensity is an imperfect measure of greenhouse gas emissions, there is no scientific consensus on the best measure. 
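The two divergence cases described above can be sketched numerically. The consumption and floor-space figures below are hypothetical, and the emission factors are approximate published values for the two fuels.

```python
# Hypothetical sketch of why energy intensity and greenhouse gas emissions
# can move in opposite directions. Consumption and floor-space figures are
# illustrative; emission factors are approximate (kg CO2 per million Btu).

def intensity(energy_btu: float, sq_ft: float) -> float:
    return energy_btu / sq_ft

# Case 1: energy use rises 5 percent, but floor space rises 20 percent.
e_base, sf_base = 1.00e12, 10.0e6
e_new, sf_new = 1.05e12, 12.0e6
assert intensity(e_new, sf_new) < intensity(e_base, sf_base)  # intensity falls
assert e_new > e_base  # emissions rise if the fuel mix is unchanged

# Case 2: energy use and floor space are constant, but the fuel mix changes.
factor = {"natural_gas": 53.1, "coal": 95.3}  # approximate emission factors
energy_mmbtu = 1.00e12 / 1e6
gas_emissions = energy_mmbtu * factor["natural_gas"]
coal_emissions = energy_mmbtu * factor["coal"]
assert coal_emissions > gas_emissions  # emissions rise; intensity is constant
```

In the first case, intensity improves even though total emissions grow; in the second, emissions grow while intensity does not move at all, which is the imperfect relationship the Commerce and National Archives data illustrate.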
Some organizations, such as the Energy Information Administration, a statistical agency of DOE which provides data, forecasts, and analyses, and the World Resources Institute, have used or proposed several alternatives for measuring greenhouse gas emissions. Such measures include reporting total emissions, as was the case for the previous greenhouse gas emissions goal under E.O. 13123, and using greenhouse gas intensity measures. Some greenhouse gas measures, like the current energy intensity measure based on square footage, attempt to account for expanding or shrinking production or mission. Other proposed measures have included calculating greenhouse gas intensity by dividing total greenhouse gas emissions by building square footage or by units of performance or output, such as million dollars of gross domestic product or economic output, kilowatt hour, customer, or dollar of revenue. DOE, in its annual reports to Congress, estimates emissions from energy use in buildings that are subject to the goal, and presents annual emissions in metric tons of carbon dioxide equivalent, and in terms of metric tons of carbon dioxide equivalent per gross square foot. None of the measures is perfect. For example, one agency official noted that an absolute emissions goal—as was used to measure emissions prior to the current measure—does not account for the fact that an agency may change its energy consumption or square footage to support its expanded or contracted work resulting from a change in mission. However, this absolute emissions measure allowed agencies to more easily track progress in reducing their total emissions. Imperfect metrics also are an issue at the international level. For example, one measure currently used by the Energy Information Administration is “emissions intensity,” measured in emissions in a given year divided by the economic output for that year, which accounts for changes in national output. 
As past GAO work has shown, a decrease in this intensity-based measure may result in increased greenhouse gas emissions. Renewable energy. Key changes in the renewable energy goal since 2005 also make comparisons over time problematic. While both EPAct 2005 and E.O. 13423 specified different ages of renewable sources counted toward meeting the energy goal, E.O. 13423 did not change the percentage required or time frames required of the agencies, as figure 9 shows. Further, forms of nonelectric renewable energy such as solar thermal, geothermal, and biomass gas do not count toward the EPAct 2005 goal. E.O. 13123 did count these forms of renewable energy toward its goal. Performance against the renewable energy goal may be compared from 2000 to 2006, when the goal remained unchanged. But the change in the age requirement for renewable sources makes it problematic to compare performance in 2007 with previous years. For example, although 17 of 22 agencies met the goal in 2007 and 10 of 20 met the goal in 2005, comparing performance in these 2 years is problematic because, with the 2007 goal, half of renewable energy came from sources in service from 1999 or later, but there is no source age specification for the other half. However, with the 2005 goal, all of the renewable energy came from energy sources in service in 1990 or later. Also, thermal renewable energy used in 2005 was not eligible to be counted toward the 2007 goal. Data on VA’s performance illustrate the difficulty in making comparisons when the age requirement for renewable energy sources has changed. In 2005, VA exceeded the goal of having 2.5 percent of its electricity consumption from renewable sources put into service since January 1, 1990, with 2.9 percent of its electricity consumption from these sources. 
In 2007, VA exceeded the new 3 percent goal, with 3.4 percent of its electricity from renewable sources, 1.8 percent from new sources put into service since 1999, and 1.6 percent from older eligible sources. Although VA increased its total renewable energy use, it is not clear whether its use from sources put into service since January 1, 1990, has increased or decreased, thereby making comparisons across the goals problematic. The prospects for meeting the energy goals in the future for the agencies we reviewed depend largely on overcoming four key challenges. First, long-term plans can help clarify priorities and help agency staff pursue shared goals, but the six agencies we reviewed had long-term plans for achieving energy goals that lacked several of the key elements that we have identified in our prior work that make such plans effective. Second, achieving long-term energy goals will require major initial capital investments, but it is difficult for such investments to compete with other budget priorities. To address this problem, federal officials increasingly rely on alternative financing mechanisms; while these mechanisms provide benefits, they also present challenges. Third, agencies we reviewed face challenges in obtaining sufficiently reliable data on energy consumption; however, most agencies have tools for ensuring data are reliable and have plans to more accurately capture energy data. Finally, sites may lack staff dedicated to energy management, and also may find it difficult to retain staff with sufficient energy expertise; lack of expertise could make it difficult to undertake alternative financing projects. Federal officials are participating in energy-related training courses and undertaking initiatives to hire, support, and reward energy management personnel. Long-term plans can help clarify organizational priorities and unify agency staff in the pursuit of shared goals. 
These plans also must be updated to reflect changing circumstances, and according to our previous work, plans should include a number of key elements, including (1) approaches or strategies for achieving long-term goals; (2) strategies that are linked to goals and provide a framework for aligning agency activities, processes, and resources to attain the goals of the plan; (3) identification of the resources needed; (4) strategies that properly reflect and address external factors; and (5) reliable performance data needed to set goals, evaluate results, and improve performance. Long-term plans with these elements help an agency define what it seeks to accomplish, identify the strategies it will use to achieve results, and determine how well it succeeds in achieving results and objectives. While none of the six agencies we reviewed could provide us with what we considered to be a comprehensive, long-term energy plan, agency officials did provide numerous planning documents, including budget documents, strategies for improving energy efficiency, energy program guidance, and agencywide energy policies for sites. For the purposes of our review, we considered any of these planning documents, if they discussed actions to be taken beyond 12 months, as long-term energy plans. However, we determined that the long-term energy plans for one or more of the six agencies lacked some of the following key elements for effective long-term energy planning: approaches or strategies for achieving long-term energy goals; strategies that are linked to energy goals and provide a framework for aligning agency activities, processes, and resources to attain the goals of the plan; identification of the resources needed to achieve long-term energy goals; strategies that properly reflect and address external factors; and provisions for obtaining reliable performance data needed to set goals, evaluate results, and improve performance.
Moreover, four of the six agencies’ long-term plans were not updated to reflect E.O. 13423, although two of these agencies noted that they are in the process of updating these plans. In addition, in April 2008, the USPS Inspector General’s office reported on the value of long-term energy plans and determined that USPS does not have a long-term energy management plan, and that without one USPS cannot effectively maximize its energy conservation efforts. The USPS Inspector General recommended the Postal Service develop and publish a National Energy Management Plan. This plan is expected to be published in early fiscal year 2009. While long-term planning generally is recognized as an important tool in achieving goals, federal agencies have not been required to have long-term plans for energy goals. To close this gap, DOE is drafting guidance for agencies to follow as they develop multiyear plans and long-term strategies for assessing the level of investment necessary to meet energy goals, their progress in meeting these goals, and the likelihood that they will achieve these goals by 2015. Our preliminary review of the draft guidance found that it appears to address all of the key elements we identified. According to DOE officials, this guidance will be published in final form upon completion of DOE internal review, as well as analysis and reconciliation with new planning requirements in the EISA 2007. In the interim, the six agencies are addressing long-term energy planning deficiencies in two ways. First, in recent years officials in agencies’ headquarters have used short-term plans to achieve energy goals in the near term. All of the agencies that reported to DOE were required to provide annual plans under E.O. 13123 that included guidance on energy requirements and strategies each agency is taking over the next year to meet these requirements. However, E.O. 13423 does not require agencies to provide these annual plans. 
Agencies also used other planning tools to achieve energy goals in the short term. For example, GSA sets annual regional targets and requires each region to submit plans on how it will achieve these targets. Agencies also submit budgetary documents requesting funds for specific energy projects. Officials at the sites we visited had used a number of short-term plans to achieve energy improvements, but did not know how they would meet long-term energy goals. In several cases, these officials stated, they are planning to meet future energy goals by completing individual projects in the near term. For example, officials at one GSA site reported that they typically plan energy projects on a year-to-year basis, depending on the available funds, and did not have a long-term energy plan. At one USPS site, officials said they have not yet documented a comprehensive, long-term plan highlighting the steps they have taken or intend to take to ensure they reach energy goals. In addition, officials at a DOE site stated that it is difficult to plan a long-term approach for achieving energy goals because the site’s mission is constantly evolving. Moreover, most military installations we visited did not have a long-term plan to achieve energy savings into the future and were instead developing individual projects to improve the energy efficiency in existing structures. Second, agencies are using energy audits as a way to identify potential energy savings and meet long-term goals. In the past, we have reported that energy audits are a key strategy for identifying and evaluating future energy projects, and officials at all the agencies we spoke with reported undertaking energy audits as a tool to identify and plan future energy projects. Since 1998, NASA has conducted reviews at each of its centers every 3 years to assess their energy and water management programs.
The review requires center staff to participate in a self-assessment by responding to a set list of questions, confer with headquarters officials during a week-long site visit, and discuss review findings including recommendations. USPS currently is conducting energy audits for 60 million square feet of its 310 million square feet of facility space, which will identify close to 2 trillion Btus of potential savings upon completion. In 2007, VA conducted energy and water audits covering six regions and 64 sites, or 20 percent of its sites. During 2008, VA officials expect to audit 30 percent of its sites, or 116 sites in seven regions. Energy audits are part of the Air Force’s energy program and were undertaken to identify additional energy-related projects and ways to reduce energy consumption. While short-term planning and energy audits help guide agencies’ efforts toward meeting their goals in the near term, they do not address how the agencies will meet the goals through 2015. Meeting long-term energy goals will require major initial capital investment. According to DOE, to meet the energy goals under E.O. 13423, the federal government would have to invest approximately $1.1 billion annually (beginning in fiscal year 2008, based on fiscal year 2007 performance) through 2015 on energy-related projects. In addition, in June 2007, ASE reported that meeting federal energy goals will require an investment of approximately $11 billion from 2009 through 2015, or $1.5 billion annually. Paying for this investment up front with appropriated funds may be difficult for agencies because energy projects compete with other budget priorities. As figure 10 shows, from fiscal years 2000 through 2007, upfront funding ranged from approximately $121 million to $335 million annually—well below the $1.1 billion level of investment needed annually to meet future energy goals, according to DOE’s estimate.
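The arithmetic behind a multiyear funding gap can be sketched as follows. Only the $1.1 billion annual investment need comes from DOE's estimate; the assumed appropriation level is an illustration chosen near the historical funding range, not DOE's method.

```python
# Hypothetical sketch of the cumulative funding-gap arithmetic for the
# fiscal year 2008-2015 goal period. Only the $1.1 billion annual investment
# need reflects DOE's estimate; the assumed funding level is illustrative.

needed_per_year = 1.1e9    # DOE's estimated annual investment need
assumed_funding = 0.44e9   # illustrative, near the recent funding range
years = 8                  # fiscal years 2008 through 2015

cumulative_gap = (needed_per_year - assumed_funding) * years
print(f"Cumulative gap: ${cumulative_gap / 1e9:.2f} billion")
```

At that assumed funding level, the shortfall compounds to roughly $5.3 billion over the eight-year period, which shows how an annual gap of a few hundred million dollars becomes a multibillion-dollar cumulative gap.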
Furthermore, according to draft DOE data for fiscal year 2007, federal agencies will face an estimated $5.3 billion gap in appropriated funding for energy investment from fiscal year 2008 through 2015. Officials from all six agencies we reviewed cited budget constraints as a challenge to meeting future energy goals. For example, only 4 of the 10 military installations we visited have received upfront funding from DOD’s Energy Conservation Investment Program since 2003. Furthermore, several DOD installation officials told us that they no longer request funding for energy improvements because they do not believe upfront funding will be made available. In our previous work we similarly noted that agency officials had stopped requesting such funding. We also noted that paying for energy efficiency improvements with upfront funding is generally the most cost-effective means of acquiring them. Because the total amount of upfront funding is limited, federal officials increasingly rely on alternative financing mechanisms—such as contracts with private companies that initially pay for energy improvements and then receive compensation from the agencies over time from the monetary savings they realize from these projects—to meet energy goals. Seven of the 11 civilian sites and 9 of the 10 military installations we visited have used, are currently using, or are planning to use alternative financing to implement energy projects. Furthermore, in an August 2007 memo, the White House Council on Environmental Quality directed agency heads to enter into energy savings performance contracts (ESPC) and utility energy savings contracts (UESC) for at least 10 percent of annual energy costs to accomplish energy-related goals. It further directed them to report on progress toward finding and developing alternatively financed projects. Figure 11 shows the total amount of funding agencies received from upfront funding and alternative financing for UESCs and for ESPCs. 
As discussed earlier, most agencies met their fiscal year 2007 goals. However, for 2008 onward, if funding stays at the current level, there is an apparent gap between the amount received and the amount estimated to meet energy goals. According to agency officials, alternative financing mechanisms offer benefits but also present challenges. In terms of benefits, these mechanisms can be used to complete energy projects and meet federal energy reduction goals when upfront funding is not available. For example, DOD officials stated that alternative financing mechanisms are necessary for DOD to meet future energy goals and, in March 2008 testimony before the Subcommittee on Readiness, House Committee on Armed Services, the Deputy Under Secretary of Defense for Installations and Environment stated that ESPCs typically account for more than half of all site energy savings. Furthermore, according to DOD, the agency fell short of meeting past energy efficiency goals owing to a lapse in ESPC authority from October 2003 to October 2004. In addition, DOE officials noted that alternative financing mechanisms provide large energy savings per dollar spent and estimated that ESPC project savings generally exceed guaranteed energy savings by about 10 percent. In 2005, we reported that agencies cited other benefits from alternatively financed projects, such as improved reliability of the newer equipment over the aging equipment it replaced, environmental improvements, and additional energy and financial savings once the contracts have been paid for. Agency officials also noted several challenges associated with such projects. For example, VA officials noted that development, execution, and ongoing administration of alternative financing contracts add overhead costs that increase the total cost of the contract. 
Furthermore, according to DOD officials, overseeing these contracts requires a level of expertise not always available at individual installations, and such contracts often take a long time to implement. In addition, officials at a number of civilian sites commented that developing alternatively financed projects involves a steep learning curve and that the process for developing a contract can be time consuming. Finally, officials at a few agencies noted that in using these alternative financing mechanisms, it is difficult to measure and verify energy savings and to manage contracts with lengthy payback periods. Our June 2005 report also showed that agencies entering into these alternative finance contracts could not always verify whether energy savings were greater than project costs, and that such contracts may yield lower dollar savings than if timely, full, upfront appropriations had been used. In addition, in our December 2004 report, DOD officials commented that the costs of using such contracts were 25 percent to 35 percent above what costs would have been in using upfront funds for certain energy projects. Some agencies are undertaking initiatives to overcome the challenges associated with alternative financing. VA has created a central contracting center for energy projects, including alternatively financed projects. VA officials believe the center will offer a number of benefits, including the development of alternative financing expertise, increased accountability, greater agencywide awareness of these financing mechanisms, and standardization of the alternative financing process across VA. The Air Force, Army, and the Department of the Navy have already centralized some functions in the process. The Air Force is working to further centralize these activities in order to decrease the number of staff needed to implement these contracts, and to review and approve all parts of the process in one location. 
Furthermore, DOE’s Federal Energy Management Program provides technical and design assistance to support the implementation of energy projects, including project facilitators who can guide site officials through the process of developing, awarding, and verifying savings from alternatively financed projects. Collecting and reporting reliable energy data is critical for agencies to assess their progress toward their goals and identify opportunities for improvement. According to DOE officials responsible for overseeing the collection and reporting of energy information for the federal government, there are no federal energy measurement or data collection standards, and each agency gathers information differently, using its financial systems data and estimating data when necessary through other means. For example, NASA and USPS officials reported that their agencies use utility payment information to measure and report energy use. Moreover, DOE officials stated that each site manager may use different means to measure and collect energy consumption, conservation, and cost data, including handwritten ledger sheets, software, cost averaging, and estimation techniques. Measuring energy use at federal buildings is difficult if individual buildings do not have meters. Sometimes an entire site is metered by the local utility for usage and billing purposes, but not all of the buildings on the site are metered individually. Accordingly, energy managers cannot always reliably determine the usage in a specific building or group of buildings. Without meters, energy teams may be unable to pinpoint buildings or areas that need to be improved or identify which energy projects have effectively achieved energy savings. In some instances, agencies’ federal energy data have not been reliable. 
DOE officials responsible for annually reporting to Congress on agencies’ progress toward energy goals acknowledge as much but stated that past year data are updated to correct inaccuracies discovered by the agencies. In April 2008, the USPS Office of Inspector General reported that USPS may be inaccurately reporting energy consumption data to DOE, and therefore cannot accurately determine its progress toward meeting the energy goals. Among other things, the Inspector General reported that USPS did not have a clear process for reporting data on sites’ square footage and was calculating energy consumption by dividing billed cost by an estimated or average cost per kilowatt-hour, which can differ significantly from actual consumption. In 2006, a NASA energy management review reported that one of its sites had in some cases entered incomplete and erroneous data into the database the agency uses to track its progress toward energy goals. A 2005 report from the VA Office of the Inspector General stated that the agency’s energy data were not reliable because staff inaccurately reported sites’ energy consumption and square footage. According to VA officials, VA implemented all of the recommendations in the report, including those addressing data reliability and, in September 2007, the VA Office of the Inspector General closed the report. Air Force officials stated that a thorough data review revealed data entry errors at approximately 5 percent of installations. Agencies use a variety of mechanisms to verify energy data. For example, according to the DOE official who compiles agency data for the annual report to Congress, agency data reports are checked for any obvious problems by comparing the agency’s energy information with its data from previous years to identify outliers. He also communicates with energy coordinators and compares unit price information with a site’s recorded energy costs to determine whether the reported costs appear reasonable. 
Beyond these checks, DOE relies on agencies’ headquarters officials and the energy coordinators at sites to enter energy information for the sites and verify its accuracy. Many officials reported using quality control mechanisms to verify that current data match up with past records. These mechanisms include automatic database alerts, which notify officials of data that are outside specific ranges and thus could be errors. Under EPAct 2005, agencies are required to install advanced electrical meters by 2012, whenever practical, to help ensure more reliable information. Advanced meters are capable of providing real-time data that feed directly into an agency’s metering database, verifying savings from energy projects, and helping site officials to identify potential energy savings opportunities. According to the most recent OMB energy management scorecards, all six agencies we met with are meeting the milestones toward metering all appropriate sites by 2012. To advance energy goals, it is important to have dedicated, knowledgeable energy efficiency staff to plan and carry out energy projects. Moreover, according to a June 2007 ASE report, such staff can focus on identifying and implementing efficiency projects. However, some sites we visited did not have a full-time energy manager. Instead, staff members were often assigned part-time responsibility for performing energy-related duties in addition to duties unrelated to energy management, such as managing site maintenance and providing technical support and mechanical design assistance for a site. For example, at one DOE site, six to seven different officials have part-time energy management responsibilities. At other sites, a GSA building manager stated that he spends approximately 15 percent to 20 percent of his time on energy goals, and a NASA energy manager reported devoting approximately one-third of his time. 
Finally, officials at a Navy installation reported that there is no on-site, dedicated energy manager and that the installation needs one if it intends to meet the energy goals. In visiting military installations, we found that full-time energy managers tended to engage in multiple energy reduction activities, while other installations without full-time or experienced energy managers tended not to have robust energy reduction programs. Furthermore, lack of expertise in energy management and high staff turnover may create challenges for negotiating and overseeing alternative financing mechanisms. Energy projects funded through alternative financing often require a high level of expertise in complex areas such as procurement, energy efficiency technology, and federal contracting rules. Many agencies told us that without experienced personnel, they face challenges in undertaking contracts that are necessary to meet energy goals. Officials from multiple agencies commented that high turnover rates exacerbate the difficulties associated with alternative financing. To address these challenges, VA officials stated that they recently hired almost 90 permanent facility-level energy managers who will cover all VA facilities and focus solely on energy issues. DOD officials also reported using resource efficiency managers—contractors that work on-site at federal facilities to meet resource efficiency objectives with the goal of meeting or exceeding their salaries in energy savings. In addition, federal officials are taking part in energy-related training courses and undertaking initiatives to reward and support energy management personnel. Many agencies reported receiving training on ways to improve energy efficiency from a variety of sources, including agency-offered internal training, training provided by DOE’s Federal Energy Management Program, and energy conferences. 
From fiscal years 2002 to 2006, agencies reported spending approximately $12.5 million to train more than 27,000 personnel in energy efficiency, renewable energy, and water conservation. In addition to training, the Federal Energy Management Program also recognizes outstanding accomplishments in energy efficiency and water conservation in the federal sector through an annual awards program. Furthermore, the White House annually honors federal agency energy management teams through the Presidential Awards for Leadership in Energy Management. Since 2000, these awards have recognized such teams for their efforts to promote and improve federal energy management and conservation and demonstrate leadership. The current metric for greenhouse gas emissions—one based on energy intensity—is not a satisfactory proxy for assessing agencies’ progress toward reducing these emissions. There is no consensus on a best measure at present; however, there are alternative measures that may better track agencies’ greenhouse gas emissions than the current measure based on energy intensity. Although the previous metric—one based on emissions—had limitations, it was more clearly linked to emissions and made it easier to assess progress toward reducing those emissions. The closer a metric is to approximating the level of emissions, the better agencies will be able to determine their progress in reducing greenhouse gas emissions. In addition, although the ability of agencies to use renewable energy purchase credits and source energy credits toward the goals may further certain federal energy policy objectives, it also may enable agencies to achieve compliance with the energy goals without actually changing their on-site energy use. 
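The limitation of an intensity-based metric can be illustrated with a short numeric sketch. The figures below are hypothetical, not agency data; the point is only that intensity, because it is normalized by gross square footage, can fall even while absolute energy use, and therefore combustion-related emissions, rises.

```python
# Hypothetical illustration: energy intensity (Btu per gross square foot)
# can decline while absolute energy use -- and hence greenhouse gas
# emissions from fuel combustion -- increases.

def energy_intensity(total_btu, gross_sq_ft):
    """Site energy consumed per gross square foot."""
    return total_btu / gross_sq_ft

# Baseline year (hypothetical figures)
base_btu, base_sqft = 100_000_000, 1_000_000    # intensity: 100 Btu/sq ft
# Later year: floor space grew 15 percent, energy use grew 5 percent
later_btu, later_sqft = 105_000_000, 1_150_000  # intensity: ~91.3 Btu/sq ft

intensity_change = (energy_intensity(later_btu, later_sqft)
                    / energy_intensity(base_btu, base_sqft)) - 1
emissions_change = later_btu / base_btu - 1  # emissions scale with fuel burned

print(f"intensity change: {intensity_change:+.1%}")  # about -8.7%
print(f"emissions change: {emissions_change:+.1%}")  # +5.0%
```

Here intensity falls nearly 9 percent, which would register as progress under an intensity-based goal, even though emissions rise 5 percent; a metric tied directly to emissions would not show progress in this case.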
Although most agencies were able to meet their energy goals for 2007, without a strong plan of action agencies may not be well positioned to continue to achieve energy goals over the long term, especially in light of budget constraints and the $1.1 billion that DOE estimates agencies will need each year to achieve future energy goals. Furthermore, agencies face challenges in obtaining reliable data and retaining dedicated and experienced energy personnel, and they have not adequately planned how to address these challenges in the long term. Without guidance from DOE that clearly outlines the key elements for effective, long-term energy planning identified in this report that could address these challenges, agencies do not have the foundation they need to develop plans that will continually adapt to a changing energy environment. As a result, agencies are likely to find it increasingly difficult to ensure that they will meet energy goals in the future. We recommend that the Secretary of Energy take the following two actions: (1) in conjunction with the Federal Environmental Executive and the Director of the Office of Management and Budget, re-evaluate the current measure for greenhouse gas emissions and establish one that more accurately reflects agencies’ performance in reducing these emissions, to help determine whether agencies are making progress over time; and (2) to help agencies address the challenges they face in meeting energy goals into the future, finalize and issue guidance that instructs agencies in developing long-term energy plans that consider the key elements of effective plans identified in this report. We provided a draft of this report to the CEQ, DOD, DOE, GSA, NASA, OMB, USPS, and VA for their review and comment. In commenting on a draft of this report, NASA and USPS generally agreed with our findings, conclusions, and recommendations and provided written comments included as appendixes II and III, respectively. 
GSA responded by e-mail on September 8, 2008, stating that it concurred with our report. VA neither agreed nor disagreed with our report and provided written comments included as appendix IV. The Council on Environmental Quality, DOD, DOE, and OMB did not provide any comments on our draft. For those agencies that submitted technical and clarifying comments, we incorporated the comments as appropriate. In addition, VA expressed concern that it was not afforded the opportunity for an exit conference. However, we note that we offered the opportunity for such a meeting to the Office of Asset Enterprise Management, the office within VA responsible for energy management and designated by VA at the outset of our engagement as the main point of contact. Furthermore, the Office of Asset Enterprise Management provided written comments on a preliminary draft that we incorporated into the subsequent draft, as appropriate. We are sending copies of this report to interested congressional committees and Members of Congress and the Chairman of CEQ; the Administrators of GSA and NASA; the Director of OMB; the Postmaster General and Chief Executive Officer of USPS; and the Secretaries of Defense, Energy, and VA. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact Mark Gaffigan at (202) 512-3841 or gaffiganm@gao.gov, or Terrell Dorn at (202) 512-2834 or dornt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
To determine the extent to which agencies met energy efficiency, greenhouse gas emission, and renewable energy goals, we analyzed data on agencies’ performance in meeting these goals using draft agency energy data, as of July 2008, for fiscal year 2007, which were reported by the agencies to the Department of Energy (DOE) for use in DOE’s Annual Report to Congress on Federal Government Energy Management and Conservation Programs. We considered agencies to have met the energy efficiency goal for fiscal year 2007 if they reduced energy intensity by at least 6 percent from the 2003 baseline. We also met with officials from DOE to understand how the data are developed. To assess the agencies’ progress in each of these areas in recent years, we reviewed energy efficiency, greenhouse gas emission, and renewable energy goals, as established in current and previous statute and executive orders—the Energy Policy Act of 2005, Executive Order 13123, and Executive Order 13423. We also analyzed data on agencies’ performance in meeting the goals, as reported in DOE’s annual report to Congress for fiscal year 2005. Furthermore, we analyzed draft data from these annual reports for fiscal years 2006 and 2007. In addition, we met with officials from DOE, the Office of the Federal Environmental Executive, and the Office of Management and Budget to gain their perspective on the goals and an understanding of their roles in overseeing the statute and executive orders. In assessing agencies’ performance for 2007 and progress in recent years, we determined these data from DOE’s annual reports to be sufficiently reliable for our purpose, which was to convey what the agencies reported to DOE about the status of meeting the energy goals. 
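The pass/fail criterion described above is simple enough to express directly. The sketch below is purely illustrative: the agency intensity figures are hypothetical, and the 6 percent threshold is the fiscal year 2007 value stated in the text.

```python
# Illustrative check of the FY 2007 energy efficiency goal: an agency meets
# the goal if its energy intensity (Btu per gross square foot) is at least
# 6 percent below its fiscal year 2003 baseline.

GOAL_REDUCTION = 0.06  # FY 2007 threshold relative to the FY 2003 baseline

def met_efficiency_goal(baseline_intensity, fy2007_intensity,
                        threshold=GOAL_REDUCTION):
    """Return True if intensity fell by at least the threshold fraction."""
    reduction = (baseline_intensity - fy2007_intensity) / baseline_intensity
    return reduction >= threshold

# Hypothetical agencies (baseline and FY 2007 intensity in Btu per sq ft)
print(met_efficiency_goal(120.0, 111.0))  # 7.5% reduction -> True
print(met_efficiency_goal(120.0, 115.0))  # ~4.2% reduction -> False
```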
To determine the extent to which the agencies are poised to meet future energy goals, we selected six agencies on the basis of several criteria, including the following: (1) energy consumed: of the agencies reporting energy data to DOE, these six agencies together accounted for nearly 94 percent of the energy consumed in standard buildings in fiscal year 2005; (2) level of investment in energy and utility savings performance contracts; (3) amount of renewable energy purchased and self-generated; and (4) estimated carbon emissions. Because these six agencies accounted for nearly 94 percent of the energy consumed in standard buildings in fiscal year 2005, our findings for these agencies may have significant implications for the federal government as a whole. We visited a minimum of two sites per agency to understand efforts toward meeting energy goals at the local level. To ensure that we had a variety of sites, we selected the sites on the basis of both high and low reductions in energy intensity from 2003 to 2006, geographic location, site size, and agency recommendation, among other criteria. The six agencies and the sites we visited are listed in table 1. We obtained and analyzed documentation and met with headquarters officials and officials responsible for energy management at the sites from the six agencies. In addition, we systematically reviewed these interviews to determine what primary challenges agencies face and the tools they use to meet energy goals. We used general modifiers (i.e., most, many, several, some, and a few) to characterize the extent to which agencies were facing and addressing the challenges we found. We used the following method to assign these modifiers to our statements: “most” and “many” represent four to five agencies, “several” and “some” represent three agencies, and “a few” represents two agencies. 
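The modifier scheme just described amounts to a small lookup, written out here only to make the mapping explicit:

```python
# The general modifiers used in this report, mapped from the number of the
# six reviewed agencies exhibiting a given challenge or practice.

def modifier(agency_count):
    """Return the qualitative modifier for a count of agencies (2 through 5)."""
    if agency_count in (4, 5):
        return "most/many"
    if agency_count == 3:
        return "several/some"
    if agency_count == 2:
        return "a few"
    raise ValueError("modifiers are defined only for counts of 2 through 5")

print(modifier(5))  # most/many
print(modifier(3))  # several/some
print(modifier(2))  # a few
```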
We also systematically reviewed documents and interviews to determine whether agencies’ long-term plans contained key elements as identified by our past work. For our review of agencies’ long-term energy plans, we reviewed planning documents obtained from agency officials that laid out agencies’ efforts to achieve the energy goals beyond 1 year. We also met with officials from the Alliance to Save Energy to get their perspective on challenges facing the federal government. Finally, we participated in DOE’s Webcast training on energy savings performance contracts and attended GovEnergy, an energy training workshop and exposition for federal agencies. We conducted this performance audit from May 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Karla Springer, Assistant Director; Alisha Chugh; Matt Cook; Elizabeth Curda; Kasea Hamar; Carol Henn; Michael Kennedy; Brian Lepore; Marietta Mayfield; Jim Melton; Mehrzad Nadji; Ellery Scott; Jeremy Sebest; Rebecca Shea; Ben Shouse; Carol Herrnstadt Shulman; Barbara Timmerman; and Lisa Vojta made significant contributions to this report. We also would like to pay special tribute to our much-missed friend, colleague, and the analyst-in-charge of this engagement, Marcia Brouns McWreath, who passed away after a long illness. Even when not at full strength, Marcia continued to lead her team throughout the course of the job. While we miss Marcia for her leadership, kindness, selflessness, and sharp wit, we continue to be thankful that we had her with us during her more than 30-year career at GAO.
The federal government is the nation's single largest energy consumer, spending approximately $17 billion in fiscal year 2007. A number of statutes and executive orders have established and revised goals directing agencies to reduce energy consumption and greenhouse gas emissions--such as carbon dioxide, which results from combustion of fossil fuels and natural processes, among other things--and increase renewable energy use. GAO was asked to determine the extent to which (1) federal agencies met energy efficiency, greenhouse gas emission, and renewable energy goals in fiscal year 2007; (2) federal agencies have made progress in each of these areas in the recent past; and (3) six selected agencies are poised to meet energy goals into the future. For this review, GAO, among other things, conducted site visits for six agencies and reviewed the Department of Energy's (DOE) annual reports to Congress on federal energy management. Based on draft DOE data, most of the 22 agencies reporting to DOE for fiscal year 2007 met energy goals for energy efficiency, greenhouse gas emissions, and renewable energy. Specifically, all but one agency met the energy efficiency goal. Three of these agencies would not have met the goal through reductions in energy intensity--the amount of energy consumed per gross square foot--alone; they also used credits for the purchase of renewable energy or source energy to help meet the goal. Because the greenhouse gas emission goal is tied to the energy efficiency goal, the same number of agencies met the greenhouse gas emission goal, while 17 of the 22 agencies met the renewable energy goal. Determining the extent to which agencies have made progress over time toward the goals is problematic due to key changes in the goals--as specified in statute and executive order--and how progress is measured. For example, the energy efficiency goal changed the types of buildings included and the baseline year against which progress was measured. 
The greenhouse gas emissions goal also changed, from a measure of greenhouse gas emissions to a measure of energy intensity; this change makes it problematic to compare performance before and after the change. Moreover, GAO found that a goal based on energy intensity is not a good proxy for emissions because a reduction in energy intensity does not always result in lower greenhouse gas emissions. Although there is no consensus on a best measure at present, alternative measures are in use that may better track agencies' greenhouse gas emissions than the current measure based on energy intensity. Agencies' prospects for meeting energy goals into the future depend on overcoming four key challenges. First, the six agencies GAO reviewed--the departments of Defense (DOD), Energy (DOE), and Veterans Affairs (VA); the General Services Administration (GSA); the National Aeronautics and Space Administration (NASA); and the U.S. Postal Service (USPS)--had long-term plans for achieving energy goals that lacked key elements, such as plans that outline agencies' strategies that are linked to goals and provide a framework for aligning activities, processes, and resources to attain the goals of the plan. Second, investment in energy projects competes with other budget priorities, causing agency officials to increasingly rely on alternative financing mechanisms--contracts with private companies that pay for energy improvements. However, as past GAO work has shown, agencies entering into these contracts could not always verify whether money saved from using less energy was greater than project costs, and these contracts may yield lower savings than if timely, full, upfront appropriations had been used. Third, agencies face challenges in obtaining reliable energy consumption data but are taking steps to collect more reliable data. 
Finally, facilities may lack staff dedicated to energy management and may find it difficult to retain staff with sufficient energy expertise; however, agency officials are participating in training and implementing initiatives for energy management personnel.
The National Park Service Organic Act of 1916 created the Park Service to promote and regulate the use of national parks, monuments, and reservations with the purpose of conserving the scenery, natural and historic objects, and wildlife therein and to leave them “unimpaired” for the enjoyment of future generations. The 1970 National Park System General Authorities Act, as amended in 1978, prohibits the service from allowing any activities that would cause derogation of the values and purposes for which the parks have been established. The combination of these two laws forms the basis of a mandate for Park Service managers to actively manage all park uses in a manner that protects park resources and values. Today, the Park Service comprises 388 units covering around 84 million acres in 49 states, the District of Columbia, American Samoa, Guam, Puerto Rico, Saipan, and the Virgin Islands. Figure 1 shows a map of the Park Service regions. National parks are home to many unique and beautiful landscapes and open spaces that are venues for a variety of special event activities such as cultural programs, festivals, wedding ceremonies, and athletic events, as well as commercial filming and still photography. These special uses generally provide a benefit to an individual, group, or organization rather than the public at large. In order to protect park resources and the public interest, a special uses permit must be obtained from Park Service superintendents for these activities. Special uses permits regulate the amount, kind, time, and place of the proposed activity. The Park Service issues special uses permits for several different types of activities, including the two types we reviewed: (1) special events and (2) commercial filming and still photography. Special events permits are issued for a wide range of activities, including sports, pageants, celebrations, historical re-enactments, exhibitions, parades, fairs, and festivals. 
Commercial filming and still photography permits are issued for such activities as major motion picture filming, commercials, and magazine photo shoots. The Park Service has specific statutory authority to recover costs associated with special uses permits and to retain the funds recovered. The Park Service has guidance in place to collect costs associated with special event permits, including costs for commercial filming and still photography. In addition, for almost five years it has been required by law to collect costs and location fees associated with filming activities. The Park Service developed specific policy guidance for issuing permits and recovering costs for special park uses. This guidance includes detailed permitting criteria for special events and for commercial filming and still photography. Park Service superintendents are required to follow the established policy guidance, including numerous cost recovery requirements, when issuing permits. The cost recovery guidance generally requires the park units to recover costs associated with the permitted activity from the permittee. The Park Service has developed extensive policy guidance that park unit superintendents are to follow when issuing any Park Service special uses permits. 
In this regard, the superintendent at each park unit is responsible for reviewing, approving, and monitoring permitted activities and for assuring that such activities are consistent with the Park Service’s purpose: “to conserve scenery, natural and historic objects, and wildlife, and to provide for the enjoyment of the public while maintaining the natural and cultural resources and values of the national park system unimpaired for future generations.” The policy guidance also gives the superintendent discretion by directing that permits include “the terms and conditions that the superintendent deems necessary to protect park resources or public safety.” Permits establish conditions for the approved activity, such as location, date, time, and estimated number of participants. Special events within park units must meet basic criteria before a permit is issued, and Park Service policy guidance gives superintendents discretion when approving permits. The basic criteria for issuing a permit include that (1) there is a meaningful association between the park area and the event and (2) the event will contribute to visitor understanding of the significance of the park area. However, the determination of what is a “meaningful association” is generally left to the superintendent’s discretion. Some special event activities may be appropriate within certain park unit settings but not appropriate within others. For example, while the permitting of a rock concert in an urban park setting may be appropriate, the permitting of a rock concert at certain historical sites such as battlefields or cemeteries may not be appropriate. Also, in order to protect the park resources and the public’s health and safety, the policy guidance for special events provides strict limitations on certain uses, such as fireworks displays and the sale of food in the parks. 
Existing Park Service policies provide the superintendent with considerable discretion to determine the appropriateness of proposed advertisements. In 2003, the NFL kickoff event caused considerable controversy about the size, scale, scope, and location of advertising allowed during the event. In 2004, Congress passed legislation designed to strengthen and clarify commercial signage restrictions for the National Mall. This new legislation expressly prohibited the expenditure of funds in fiscal year 2004 for special uses permits on the National Mall unless the Park Service prohibited “the erection, placement, or use of structures and signs bearing commercial advertising.” However, discrete recognition of program sponsors was authorized. As a result, the Park Service has drafted additional policy guidance, applicable to all park units, pertaining to the use of signage recognizing program sponsors that will restrict the size, scale, scope, and location of corporate logos and other lettering. In general, the Park Service encourages filming and photography “when it will promote the protection and public enjoyment of park resources,” provided that the activity meets basic criteria, such as the activity will not cause unacceptable impacts to park resources. More specifically, the policy guidance outlines when a permit is and is not required. For example, a permit is required if the permitted activity involves the use of a model, set, or prop—such as a model holding a product for an advertisement photograph. However, no permit is required for visitors using a camera or recording device for their own personal use within normal visitation areas and hours. Some specific exceptions are included in the policy guidance— for example, a permit is never required for press coverage of breaking news. Also, superintendents, at their discretion, may grant the permittee access to a closed area of the park or permit the activity after normal visiting hours. 
Regardless of the specific type of commercial filming or still photography activity, the conditions specified in the permit must be followed. Park Service policy guidance generally requires park units to recover costs associated with managing special park uses, including special event and commercial filming and still photography activities, unless cost recovery is prohibited by law or otherwise exempted. This policy guidance is in line with federal law requiring recovery of costs for filming activities and Office of Management and Budget (OMB) Circular A-25, which established guidelines for federal agencies to assess fees for government services and for the sale or use of government property or resources. The OMB Circular states, “When a service (or privilege) provides special benefits to an identifiable recipient beyond those that accrue to the general public, a charge will be imposed (to recover the full cost to the Federal Government for providing the special benefit, or the market price).” The circular also states that “user charges will be sufficient to recover the full cost to the Federal Government,” and it defines full cost as all direct and indirect costs—including personnel, physical overhead, and depreciation of structures and equipment—associated with providing a good, resource, or service. As authorized by law and under the policy guidance, these recovered costs are retained at the units issuing the permits to defray the costs of administering and monitoring the permits. 
The Park Service’s 2001 Management Policies document, which provides the service’s most current overall policies, states that “all costs incurred by the Service in writing the permit, monitoring, providing protection services, restoring park areas, or otherwise supporting a special park use will be reimbursed by the permittee.” Park Service policy guidance further states that “appropriate fees for cost recovery, as well as performance bond and liability insurance requirements, will be imposed, consistent with applicable statutory authorities and regulations,” and directs that “when appropriate, the Service will also include a fair charge for the use of the land or facility.” Consequently, each permit should stipulate that these costs must be reimbursed by the permittee. Recoverable costs are those costs directly attributable to the use or necessary for the safe completion of the special park use. For example, the policy states that recoverable costs include the time charged by a park ranger to visit the site of the event, such as a festival held on park grounds, to monitor that the terms and conditions of the permit are met. Additionally, the requirement includes recovering costs for equipment and facility use as well as restoration of any damage to park resources as a result of the event. Park Service policy guidance also outlines the conditions under which charges for special uses may be waived. 
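As a rough sketch of the full-cost recovery idea described above (all direct and indirect costs associated with supporting a special park use, per OMB Circular A-25), assuming hypothetical rates, line items, and an overhead percentage that are not from the report:

```python
# Illustrative tally of recoverable costs for a permitted event, following the
# full-cost idea in OMB Circular A-25 (direct plus indirect costs). The rates
# and line items below are hypothetical, not actual Park Service figures.

def recoverable_costs(monitor_hours, hourly_rate, equipment_use, restoration,
                      overhead_rate=0.15):
    """Direct costs plus a proportional indirect (overhead) charge."""
    direct = monitor_hours * hourly_rate + equipment_use + restoration
    return round(direct * (1 + overhead_rate), 2)

# Example: 8 hours of ranger monitoring at $50 per hour, $200 of equipment
# use, and $500 to restore damaged grounds.
print(recoverable_costs(8, 50, 200, 500))  # (400 + 200 + 500) * 1.15 = 1265.0
```

Under the policy guidance, amounts recovered this way are retained by the issuing park unit to defray the costs of administering and monitoring permits.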
According to the policy guidance, exemptions from charges for special uses may be appropriate when the incremental costs of collecting the charges would be an unduly large part of the receipts from the activity; the furnishing of the service without charge is an appropriate courtesy to a foreign government or international organization, or comparable fees are set on a reciprocal basis with a foreign country; the permittee is a state, local, or federal government agency or a tribal government; or the superintendent determines that the use will promote the mission of the Park Service or promote public safety, health, or welfare. Exemptions from charges are also appropriate when a charge is prohibited by legislation or executive order, or the requested use involves exercise of a right pertaining to water, property, minerals, access, Native American religious practices, or the rights guaranteed by the First Amendment to the Constitution, including freedom of assembly, speech, religion, and the press.

Through their special uses permit system, Park Service superintendents also manage requests for public assembly for the exercise of First Amendment rights, including freedom of assembly, speech, religion, and the press. Consistent with the First Amendment, it is the Park Service’s policy to permit groups to assemble peaceably and exercise freedom of speech on park lands. The number of First Amendment permit requests varies greatly by park unit. For example, each year hundreds of permit requests are submitted for First Amendment activity in Washington, D.C., area park units, but there are few requests for this type of permit at remote units such as Yellowstone National Park. For First Amendment permits, as with other special uses permits, the superintendents establish conditions for the assembly, such as site location, date, time, and number of participants. 
However, unlike other special events permits, superintendents are required by Park Service policies to issue these permits without requiring fees, cost recovery, bonding, or insurance.

At five of the six parks we visited, we found that the parks failed to adhere to the Park Service’s policy of recovering from permittees the costs to administer or monitor permits for special events and for commercial filming and still photography activities. This inconsistent application of agency policy included not assessing, or underassessing, fees for reviewing and issuing permit applications, and not charging, or undercharging, for the cost of monitoring permits. As a result, parks did not fully identify and recover costs for permitting special events and for commercial filming and still photography. Consequently, in some parks, a portion of the financial resources spent on reviewing, issuing, and monitoring permits was not recovered from permittees, and therefore was not available to manage the parks’ permit programs. Of the six park units we visited, we found that one park unit did not charge fees for reviewing and approving permit applications. Although five of the six park units charged administrative fees, three of these units did not recover the full costs associated with reviewing and approving permit applications. All six park units had established fees for monitoring the implementation of the permit. However, four of these units did not recover the full costs associated with their monitoring activities. Table 1 shows the park units we visited and whether they charged administrative or monitoring fees and recovered the associated costs. The Park Service does not maintain centralized data on the number of special event and commercial filming and still photography permits issued each year. However, an agency official informed us that for fiscal year 2003, National Capital Parks-Central issued the largest number of these permits—estimated in excess of 1,400—of all park units. 
National Capital Parks-Central charged no administrative fees for permitting special uses. For example, during fiscal year 2003, this park management unit assessed no administrative fees for permits issued for special events, filming, and still photography, even though Park Service policy requires such fees unless they are prohibited by law or otherwise exempted. National Capital Parks-Central officials told us that since the mid-1990s, it has been regional policy that park units within the National Capital Region would not charge any administrative costs associated with processing permits. For example, National Capital Parks-Central issued permits for both the NFL kickoff event and the filming of the major motion picture National Treasure, both of which engaged Park Service personnel in numerous planning meetings, but for which no administrative costs were recovered. After GAO brought this issue to the attention of the Solicitor’s Office at Interior, the Solicitor’s Office modified its guidance and directed the National Capital Region to re-examine its administrative cost recovery practices. As of February 2005, according to Interior’s Solicitor’s Office, steps were being taken to require all park units in the National Capital Region to assess processing or application fees for all permit applications. Administrative fees are based on the actual costs incurred by the park unit involved in overseeing the permit activity and should include all costs to the Park Service associated with processing a permit application from the time the first inquiry is received until the permit is signed and issued. For example, officials at Independence National Historical Park charge a $50 nonrefundable fee for each permit application. In fiscal year 2003, this park unit issued a total of over 300 permits for special events and for commercial filming and still photography. 
According to these park officials, this fee has not been updated for at least 8 years and will be increased to $100 in late 2005 to reflect increased administrative costs. Blue Ridge Parkway charged a $25 nonrefundable fee to cover the costs of initially considering permit applications and an additional $75 to cover additional processing costs for each approved permit. According to a park official, these fees had not been updated in 8 years, but the fees have now been increased as of January 2005 to $50 and $125, respectively, to reflect increased administrative costs. In fiscal year 2003, this park unit issued a total of about 40 permits for special events and for commercial filming and still photography. Officials at these park units agreed that their 2003 charges did not reflect increases in costs, such as for personnel, that had occurred during the past several years. In contrast, according to park officials, Golden Gate National Recreation Area, Jefferson National Expansion Memorial, and Yellowstone National Park charge administrative fees based on current costs. These park units periodically assess and adjust their fees to reflect increasing costs, such as for salary and associated benefits. Delicate natural resources in park units (see fig. 1) require monitoring to ensure resources are protected for the enjoyment of future generations. For example, at Yellowstone National Park, if a film crew consists of five or more persons, a park official assigns staff to monitor the crew’s activities at all times to ensure compliance with permit conditions, protect safety, and keep the activity from interfering with the visitor experience. If the filming activity is at or near one of the park’s thermal pools, a Park Service staff monitor is required as part of the permit conditions to ensure that the film crew does not damage this natural resource or its surroundings by entering a restricted area to obtain a particular photo or angle of view. 
According to Yellowstone’s film permit coordinator, permittees sometimes try to push the boundaries of the permit conditions, without understanding the potential damage or injury that could result. The Yellowstone coordinator stated, however, that because of their close monitoring actions, there has not been any resource damage from permittee actions. At three of the six parks we visited—Blue Ridge Parkway, Yellowstone National Park, and Golden Gate National Recreation Area—hourly monitoring fees had not been updated to reflect current higher costs, according to park officials. As a result, staffs at these units are not collecting fees sufficient to cover their monitoring costs. According to the Blue Ridge Parkway permit coordinator, actual hourly monitoring costs are about $50 per hour; however, the park has charged only $30 per hour since 1997. At Blue Ridge Parkway, not only were the monitoring fees below actual costs, but staff who monitored permitted activities did not submit documentation that would allow the park unit to bill and collect monitoring fees from the permittee for 20 of 28 permitted special events. Blue Ridge Parkway officials plan to increase the monitoring fee to $50 per hour in 2005. At Yellowstone National Park, the $50 hourly monitoring fee has not been updated in about 10 years. The hourly monitoring fee at Golden Gate National Recreation Area ($65 per hour) has not been updated for 4 years. Officials at Blue Ridge, Golden Gate, and Yellowstone explained they had not updated their hourly monitoring fees either because of a high workload at some park units or because updating fees was given a low priority at other park units. However, they said they plan to revise the fee to more accurately reflect actual costs in fiscal year 2005. Officials at National Capital Parks-Central are not collecting fees sufficient to cover their monitoring costs. 
These officials require permittees to bear the cost of Park Service overtime to monitor permitted activity for those permits where a bond is required. However, National Capital Parks-Central officials do not recover their costs for any permit monitoring that occurs during normal business hours and where no bond is required. In contrast, two park units, Independence National Historical Park and Jefferson National Expansion Memorial, charged monitoring fees based on current cost rates. As mentioned earlier, five of the six parks we visited—Blue Ridge Parkway, Golden Gate National Recreation Area, Yellowstone National Park, Independence National Historical Park, and National Capital Parks-Central—did not fully recover applicable administrative or monitoring costs. Some of these parks failed to collect several thousand dollars or more in fiscal year 2003. For example, had National Capital Parks-Central charged a $50 administrative fee like Independence National Historical Park, it would have collected at least $70,000 for the estimated 1,400-plus permits the park issued in fiscal year 2003 for special events and filming and photography. As a result, if these park units had implemented agency policy and the OMB directives to fully recover all costs, additional—and in one case, significant—revenues, such as those at National Capital Parks-Central, could have been available for managing permits programs. Delays in implementing the May 2000 legislation requiring the Secretary of the Interior to establish a fee schedule for commercial filming and still photography have resulted in significant annual forgone revenues for the Park Service. This law requires the agencies to establish a fee for the use of the land—referred to by the Park Service as a location fee—in addition to recovering agency costs. 
If the law requiring the Park Service’s officials to collect location fees for commercial filming and still photography had been implemented, GAO estimates that, for the reported permitted activity in fiscal year 2003, the agency would have collected revenues of about $1.6 million (unadjusted for inflation). According to the Park Service’s Special Uses Program Manager, the commercial filming and still photography permitted activities used by GAO to estimate forgone revenues of about $1.6 million are representative of a typical year’s worth of activities. The Park Service, along with three other federal land management agencies, is currently participating in a working group to develop regulations to implement the legislation and the associated location fee schedule. The Commercial Filming Law, enacted in May 2000, requires the Secretary of the Interior and the Secretary of the Department of Agriculture to issue permits and establish reasonable fees for commercial filming and still photography activities. The law affects Interior’s Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), and Park Service, and Agriculture’s Forest Service (FS). However, the law has not been implemented. Although BLM and FS already had established filming and still photography fee schedules in place prior to this law, the Park Service and FWS are collaborating with FS and BLM to develop a single fee schedule for all four agencies. Subsequent to the law’s enactment, the Department of the Interior’s Office of the Solicitor created a working group, in June 2000, with representatives from each of the four affected agencies to develop implementing regulations and a fee schedule. To ensure that First Amendment issues were adequately addressed, attorneys from the Solicitor’s Office agreed to seek concurrence from the Department of Justice prior to finalizing the regulations. 
In October 2000, the Solicitor’s Office submitted the proposed regulations drafted by the working group to Justice’s Office of Legal Counsel. However, Justice’s suggested revisions were not provided to Interior officials until October 2003. Since that time, representatives from each of the four land management agencies have worked together to finalize the regulations and the associated fee schedule. According to officials at Interior and the Park Service, the draft regulations are currently being circulated among the appropriate reviewing officials in each agency, and the agencies plan to have them published in the Federal Register later this year. In addition to drafting regulations to implement the Commercial Filming Law, the working group considered two different approaches when developing a uniform fee schedule: One approach specifies a uniform minimum fee schedule allowing the land management agencies to assess additional fees based on comparable markets, while the other approach does not allow for fee adjustments based on comparable markets. For example, the Forest Service currently uses the same fee schedule in five of its nine regions. In contrast, BLM’s existing fee schedule for filming and still photography, while similar, varies by state and is set by BLM state offices. Although the working group has developed a standardized fee schedule, one of the group’s challenges has been to reach consensus among the affected agencies on whether the use of a standardized fee schedule would allow individual locations to assess an additional fee for use of their sites. The Commercial Filming Law requires the Park Service to establish a location fee for commercial filming and still photography that provides a fair return for the use of the land to the United States. 
The law specifies that this fee must be based upon the following criteria: (1) the number of days the filming activity or similar project takes place on federal land under the Secretary’s jurisdiction, (2) the size of the film crew present on federal land under the Secretary’s jurisdiction, and (3) the amount and type of equipment present. Furthermore, the law allows that “other factors” may be included in determining an appropriate fee. The Forest Service has a fee schedule, developed prior to this law and implemented under other legislative authority, that uses similar criteria. For example, the Forest Service’s commercial filming fee schedule ranges from a minimum of $150 per day for crews of 1 to 10 people to $600 per day for crews of over 60 people. These fees are then multiplied by the number of days the crews are on the site during all phases of filming. For example, applying this schedule to just one of the 320-plus filming permits issued by National Capital Parks-Central in fiscal year 2003 (65 people for 2 days) would have resulted in a $1,200 return to the park. Once these fees are collected, they remain with the Park Service units and are available until expended. Using the fee schedule that the Forest Service has in effect, we estimate that the Park Service would have collected location fee revenues of about $1.6 million in fiscal year 2003. The Park Service has drafted a proposed standardized location fee schedule that would charge higher fees than the Forest Service for larger parties, but it has not yet been finalized. Using the Park Service’s draft fee schedule, we estimate forgone revenues of about $2 million in 2003 (see app. II). The Park Service is required by law to collect costs and location fees associated with permits for commercial filming and still photography and authorized to collect costs for other permitted activities. 
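The fee computation described here, a daily rate keyed to crew size multiplied by the number of days on site, can be sketched as follows. Only the $150 (crews of 1 to 10) and $600 (crews of over 60) daily rates come from the Forest Service schedule described in the text; the intermediate tiers and the function itself are illustrative assumptions.

```python
# Sketch of a crew-size-tiered location fee, modeled on the Forest Service
# schedule described in the text. Only the $150 (crews of 1-10) and $600
# (crews of over 60) daily rates are from the report; the intermediate
# tiers below are hypothetical placeholders.
FEE_TIERS = [
    (10, 150),            # crews of 1-10 people: $150 per day (from the report)
    (30, 300),            # hypothetical intermediate tier
    (60, 450),            # hypothetical intermediate tier
    (float("inf"), 600),  # crews of over 60 people: $600 per day (from the report)
]

def location_fee(crew_size, days):
    """Return the daily fee for the crew-size tier, multiplied by days on site."""
    for max_size, daily_fee in FEE_TIERS:
        if crew_size <= max_size:
            return daily_fee * days

# The report's example: a 65-person crew on site for 2 days.
print(location_fee(65, 2))  # 600 * 2 = 1200
```

Applied to the National Capital Parks-Central example in the text (65 people for 2 days), the top tier yields the $1,200 return the report cites.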
Because costs recovered from permitting activities are used by park units for managing their permit program and other park programs, failing to recover such costs decreases the financial resources park units have for processing permits and monitoring permitted activities. Unless steps are taken to ensure that units fully identify and collect administrative and management (including monitoring) costs associated with special event permits and with commercial filming and still photography permits, the Park Service will continue to deprive itself of funds important for managing and carrying out agency policy and delivering agency services. This is particularly evident in the National Capital Region, where only recently has consideration been given to charging administrative fees to recover costs. Because our review was limited to six park units, the extent to which other park units are not consistently applying existing cost recovery guidance is unclear. Conducting a systemwide review would help identify park units that are not fully recovering costs for special events and filming and still photography, and the measures necessary to ensure that all park units identify and collect all appropriate permitting fees. Significant revenues that would be available to the Park Service to help defray the costs of administering its commercial filming and still photography permit program are forgone because of delays in implementing regulations consistent with the Commercial Filming Law. By law, the Park Service is now required to collect location fees for commercial filming and still photography activities. Expediting implementation of the law will help ensure that the Park Service does not experience more forgone revenues. 
To ensure that the Park Service fully identifies and collects administrative and monitoring costs associated with special event and with commercial filming and still photography permits, as well as location fees for filming activities, we recommend that the Secretary of the Interior direct the Park Service Director to take the following four actions: (1) ensure that the park units we visited consistently apply existing cost recovery guidance and maintain updated cost recovery fee schedules; (2) ascertain the extent to which other park units are not consistently applying existing cost recovery guidance, and take appropriate actions to ensure that the guidance is consistently applied and costs are identified and recovered; (3) expedite the implementation of the law that requires the Park Service to collect location fees and costs for commercial filming and still photography, when appropriate; and (4) follow through to ensure that the National Capital Region assesses administrative fees to recover the costs of processing permits for special events and for commercial filming and still photography.

We provided the Department of the Interior with a draft of this report for review and comment. The department provided written comments that are included in appendix IV. The department did not comment on our recommendations; however, it suggested language to clarify the application of Park Service general policy guidance to the National Capital Region. Specifically, it suggested that we include language to clarify that the regulations governing special events within the National Capital Region are different from those contained in the Park Service’s general regulations, particularly as they applied to the NFL kickoff event. We agree that the regulations governing such special events in the National Capital Region are different from the general regulations and have included clarifying language in the report. The department provided other technical clarifications that we have incorporated, as appropriate. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Key contributors to this report are listed in appendix V. We identified and analyzed applicable laws, regulations, policies, and procedures to determine Park Service policy and requirements applicable to the review and approval of special uses permits, including those for special events and commercial filming and still photography. This included an analysis of servicewide guidance as well as guidance applicable to the specific units we visited, such as units in the National Capital Region. We discussed the policy guidance with the Office of the Solicitor, Department of the Interior, and with officials from Park Service headquarters and each of the six park units visited to gain an understanding of how the guidance should be interpreted and applied. To evaluate whether policy guidance was consistently applied, we reviewed files and examined permitting practices at the nonprobability sample of six park units visited and interviewed park unit officials about their procedures in reviewing, approving, and monitoring permitted activities. We also reviewed these units’ procedures to identify and recover costs associated with permit activities. We first searched for existing data sources describing the number of special event and commercial filming and still photography activities on park land. However, the Park Service does not maintain national or regional data about these activities. 
We also contacted sources outside of the Park Service—including the Sierra Club and The Motion Picture Association of America—to ascertain whether these sources had information on the number of special event and commercial filming and still photography permits issued by each of the park units, but these groups did not have such data, either. In the absence of this data, we used an expert referral technique to identify park units to visit. We asked officials from each of the Park Service’s seven regional offices to identify, using their knowledge of regional operations, the three park units within their respective regions with the greatest number of (1) special event and (2) commercial filming and still photography permits. In each case, the officials produced a list in which the same unit had both the most special events and the most commercial filming and still photography permits for fiscal year 2003. We selected the top park unit from each of the regional offices. Park Service regional officials identified the following seven park units issuing the greatest number of permits for both special events and for commercial filming and still photography in fiscal year 2003: Alaska Region—Denali National Park, Midwest Region—Jefferson National Expansion Memorial, Intermountain Region—Yellowstone National Park, Pacific West Region—Golden Gate National Recreation Area, Northeast Region—Independence National Historical Park, National Capital Region—National Capital Parks-Central, and Southeast Region—Blue Ridge Parkway. Because of the relatively low number of reported permits issued in the Park Service’s Alaska Region, we limited our site visits to parks in the six Park Service regions within the continental United States. 
To determine the extent to which the Park Service implemented the Commercial Filming Law, requiring it to collect location fees for commercial filming and still photography activities, we analyzed the legislation and interviewed Park Service and Department of the Interior headquarters officials. We also contacted officials at the Department of Justice and obtained their concurrence regarding the delays and changes made to the draft regulations. In addition, we analyzed documents pertinent to Justice’s review of Interior’s proposed regulations for collecting location fees to verify reported delays in Justice’s review of the regulations. We asked the Special Uses Program Manager at Park Service headquarters to assist GAO in administering a data collection instrument (DCI) sent to each of the Park Service’s 388 park units to obtain information on the amount of commercial filming and still photography activity that would have been subject to location fees in fiscal year 2003, if the legislation had been implemented. The DCI was sent to all park units to obtain information on permits issued for filming and still photography activities occurring in fiscal year 2003. We asked the park units to provide (1) a permit number for each permit issued, (2) the date the permit activity started, (3) the number of days for which the permit was authorized, (4) the number of people using the permit, and (5) if the park unit would have charged a location fee based on current Park Service policy guidance. We requested that the information provided by the park units in the DCI be sent to GAO with a copy to the Special Uses Program Manager. We coordinated with the Special Uses Program Manager to ensure we received all of the responses and printed out hard copies of the filming and still photography activities provided by the park units. 
Because some of the smaller parks have a management office that issues permits for multiple park units, some respondents provided aggregated information. These DCI responses were grouped into 27 combined park unit responses representing 95 individual park units. Of the 388 park units operating in 2003, we removed 17 because they either (1) did not own or manage property in their designation or (2) were not located in the United States or the District of Columbia, leaving us with 371 park units. Of the 371, we received responses from 355, giving us a response rate of 96 percent. Of these 355 park units, 95 were covered by the grouped responses, and the remaining 260 responded individually. We reviewed all of the filming and photography permits at four of the six sites visited. We reviewed these permit files to determine whether they contained specific required administrative information, such as evidence of the recovery of incurred costs. However, we did not review the permit files for evidence of all administrative requirements outlined in policy guidance because it was outside of the scope of this assignment. We reviewed 7 permit files at Jefferson National Expansion Memorial, 77 permit files at Yellowstone National Park, 76 permit files at Independence National Historical Park, and 15 permit files at Blue Ridge Parkway. The key administrative information regarding cost recovery was present in all of the 175 files we reviewed. Our file review at both of the remaining sites included an additional level of review. For these two sites, we reviewed the files for key administrative information as we did for the other four sites previously described. In addition, we also compared the information provided by the park unit on the DCI with the information contained in the permit file. Of 152 total permits at Golden Gate National Recreation Area, we reviewed 10 filming and photography permits with the highest costs recovered. 
These permits were selected using the park’s 318 account summary, which lists all fees charged and collected for filming and photography permits for fiscal year 2003. All key administrative cost recovery information was present, and all DCI information matched in these 10 permit files. We could not identify the top 10 highest-cost filming and photography permits for National Capital Parks-Central based on the park’s 318 account summary for fiscal year 2003, because the account combines costs recovered for filming and photography permits with other special uses permit costs. As a result, we requested files for filming permits that included 25 or more people, as indicated on the returned DCI from National Capital Parks-Central. This resulted in a review of 29 of 678 permit files (4 percent), which, according to National Capital Parks-Central staff, would generally be comparable with the permits with the highest costs recovered in fiscal year 2003. All key administrative cost recovery information was present, and all DCI information matched in these 29 permit files. The information we gathered was provided by staff at National Park Service units. The Park Service staff located the data by pulling paper files and transferring the information into the DCI. This information is not centralized, and it had never been gathered on a national level prior to our data collection for fiscal year 2003. The Special Uses Program Manager is responsible for ensuring that all Park Service staff adhere to the policy and guidance regarding issues associated with permit procedures, drafting policy and guidance associated with permitting procedures, and the coordination of the training and curriculum for Park Service staff on permitting policies and procedures. In her opinion, the information provided by the park units was accurate, complete, and reflective of the amount of permitted activity in a typical year. 
Based on our comparison of DCI data with hard copy files and our discussion with Park Service officials regarding the data, we determined that the data were reliable enough for the purposes of this report. Using the data we obtained from these park units, we estimated the forgone location fee revenues for fiscal year 2003 by applying the established fee schedule of the Forest Service. Both the Bureau of Land Management (BLM) and the Forest Service have established location fee schedules in place; however, the BLM fee schedule varies by state. Therefore, we used the Forest Service’s established fee schedule for our calculations because it is standardized within five of its nine regions and based on criteria similar to those included in the legislation authorizing the Park Service to charge fees. We used these data to estimate forgone revenue. Details of these calculations are provided in appendix II. We conducted our work from May 2004 through May 2005 in accordance with generally accepted government auditing standards. To estimate the revenues the Park Service could have collected in location fees in fiscal year 2003, if the requirement to collect such fees had been implemented, we asked the Park Service’s Special Uses Program Manager to assist GAO in administering a data collection instrument (DCI). The Park Service’s Special Uses Program Manager sent the DCI to each of the Park Service’s 388 park units to obtain information on the amount of filming and still photography activity that would have been subject to location fees in fiscal year 2003. In the DCI, we asked for information on activities specifically related to the number of filming and still photography permits issued, the number of days each permit was in effect, the number of people using the permit, and whether the unit would have charged a location fee based on current Park Service policy guidance. 
Park Service policy allows the superintendents to waive fees under certain conditions, such as if the permittee is a state, local, or federal government agency or a tribal government, or if the superintendent determines that the use will promote the mission of the Park Service or promote public safety, health, or welfare. In our calculations to estimate forgone revenues, we only used the permit activity reported by the park units where a location fee would not have been waived. We received responses from 355 of 371 park units in our sample, giving a 96 percent response rate. (Seventeen units were removed from the universe. See app. I for details of our methodology.) To estimate forgone revenues, we used the information collected from respondents, along with the Forest Service’s existing fee schedule used in five of its nine regions for commercial filming and still photography activities (see tables 2 and 3). We also used the Park Service’s draft fee schedule to provide an alternative estimate of forgone revenues even though its fee schedule is not yet final (see tables 4 and 5). Both schedules charge daily fees based on the number of people participating in the activity; the Forest Service’s fees are lower for larger parties. To develop the forgone revenue estimates for activities in fiscal year 2003, we multiplied the number of people using permits each day by the corresponding Forest Service and Park Service fees when the park unit would have charged fees for the permitted activities. For example, as shown in table 6, we estimated forgone revenues for fiscal year 2003 of $1,135,250 and $464,450 for commercial filming and still photography activity, respectively, using the Forest Service fee schedule, for a total of $1,599,700. By comparison, we estimated forgone revenues of $1,292,850 and $750,950 for commercial filming and still photography activity, respectively, using the Park Service proposed fee schedule, for a total of $2,043,800. 
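The estimation arithmetic described above—multiplying each permit’s daily headcount against the applicable tiered daily fee, and skipping permits for which the park unit would have waived the fee—can be sketched as follows. The fee tiers shown here are illustrative placeholders only, not the actual Forest Service or Park Service schedules presented in tables 2 through 5.

```python
# Hedged sketch of the forgone-revenue estimate described above.
# The tiers below are HYPOTHETICAL, not the schedules in tables 2-5.
# Each tier is a (crew-size upper bound, daily fee) pair, ordered by size.
HYPOTHETICAL_FEE_TIERS = [(10, 150), (30, 250), (60, 500), (float("inf"), 600)]

def daily_fee(people):
    """Return the daily location fee for a permit with a given crew size."""
    for max_people, fee in HYPOTHETICAL_FEE_TIERS:
        if people <= max_people:
            return fee

def forgone_revenue(permits):
    """Sum daily fees over each permit's active days, skipping permits
    for which the park unit would have waived the location fee."""
    total = 0
    for p in permits:
        if p["fee_waived"]:
            continue
        total += p["days"] * daily_fee(p["people"])
    return total

# Hypothetical permit records in the shape of the DCI responses
sample = [
    {"days": 3, "people": 25, "fee_waived": False},  # 3 days x $250 = $750
    {"days": 1, "people": 5, "fee_waived": True},    # waived -> excluded
    {"days": 2, "people": 8, "fee_waived": False},   # 2 days x $150 = $300
]
print(forgone_revenue(sample))  # 1050
```

Running the same permit records against a second tier table (for example, one modeled on the Park Service’s draft schedule) would yield the alternative estimate in the same way.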
However, this schedule has not been approved for use, and it is uncertain whether the amounts in the schedule would have been applicable in fiscal year 2003. In September 2003, the Park Service’s National Capital Parks-Central staff approved a permit for the National Football League (NFL) to hold its annual kickoff event on the National Mall (Mall) in Washington, D.C. The 4-day event was promoted as a welcome home for American military troops. The Department of the Interior’s Take Pride in America initiative was listed as a partner in the event. The event was attended by thousands of people—estimates ranged from 100,000 to 500,000—who participated in football-related activities and attended performances from a variety of entertainers. Public reaction to the event ranged from “joy to anger,” and many questions were raised about the event and the Park Service’s permitting process. Specifically, concerns were raised about the appropriateness of permitting this event on the Mall, the extent of commercial signage, limitations on public access, and whether costs to repair damages to Mall resources and property were recovered from the NFL. Park Service guidance states that special events should “contribute to visitor understanding of the significance of the park area.” Consequently, critics questioned whether the Mall was an appropriate venue for an NFL kickoff event. The Mall is a two-mile greenway that stretches from the U.S. Capitol on the east side to the Lincoln Memorial on the west. The Mall is the setting for world-renowned national museums, memorials, and significant federal buildings. However, for many years the Mall has also been host to diverse events, including fund-raisers, sports tournaments, and festivals as well as hundreds of First Amendment activities. 
Park Service policy for special events states that it “will not permit the public staging of special events that are conducted primarily for the material or financial benefit of organizers or participants; or are commercial in nature; or that demand in-park advertising or publicity; or for which a separate public admission fee is to be charged.” Critics of the NFL kickoff event asserted this was a commercial event that was conducted primarily for the financial benefit of the NFL and the event’s commercial sponsors. As discussed in this report, Park Service superintendents have a great deal of discretion in applying the agency guidance for approving permits. According to Park Service officials, the NFL kickoff event was intended to honor members of America’s armed forces and to promote volunteerism on public lands. According to a September 3, 2003, statement by the Secretary of the Interior, the NFL kickoff event was “a wonderful opportunity to showcase public service by volunteers” who put “their love of country to work to improve our national parks, wildlife refuges, public lands, cultural and historic sites, playgrounds and other recreation areas.” In addition to setting up a Take Pride in America booth at the event to recruit volunteers for this program, public service announcements about Take Pride in America, narrated by Washington Redskins players, were broadcast during the NFL team’s season opener. Finally, Park Service officials stated the product-related signs were allowed as a form of sponsor recognition for those companies underwriting the cost of the concert and other activities that were all free to the public. On May 7, 2003, a permit application for a season-opening event on the Mall was submitted to the Park Service’s National Capital Parks-Central office by an NFL representative. In the application, the event was described as a celebration of American treasures, heroes, places, and pastimes. 
Following receipt of the application, numerous discussions and planning meetings took place between Park Service and NFL representatives. According to Park Service officials involved in these meetings, the key issues addressed involved public safety and protection of park resources. Park Service permits for these types of events contain general conditions such as the requirement to procure liability insurance that lists the agency granting the permit as an insured party. In addition to meeting the general conditions, the NFL was also required to acquire certain permits from the District of Columbia through the city’s Emergency Management Agency, which coordinates with the Metropolitan Police, the Fire Department, and other District agencies to ensure that the NFL provided adequate emergency medical services such as first aid stations and ambulances during the event. In late August 2003, National Capital Parks-Central formally approved the agreed-upon terms and conditions for the event by issuing the event permit. National Capital Parks-Central officials continued to meet with NFL event planners to reach agreement on last-minute details of the event, and a revised permit was issued on September 3, 2003. Due to the location of the event—the Mall—the NFL was required to closely coordinate all security plans through the United States Park Police (Park Police), which provided public safety and security for the event and related activities. A condition of the permit stated that the NFL was responsible for obtaining the necessary permissions and permits from the Metropolitan Police Department and from other agencies and departments with jurisdiction over the public lands not under the jurisdiction of the Park Service. In addition, the Park Police used the assistance of other law enforcement officers from federal, state, and local agencies to provide sufficient staff and personnel to handle the event. 
As required by the permit, the costs for providing law enforcement officers, including Park Police, were reimbursed by the NFL. During and following the event, criticism was directed at the Park Service over the lineup of entertainers, which included Aerosmith, Britney Spears, and Aretha Franklin, as well as the content of some of the performances. For example, some people did not consider specific aspects of Britney Spears’ show to be appropriate family entertainment for an 8:00 p.m. broadcast. While Park Service policies state “the theme of the special event must be consistent with the mission of the park and appropriate to the park in which it is to be held,” National Capital Parks-Central officials stated they do not make “content-based decisions on whether to permit” requested events. The NFL kickoff event was advertised as a welcome home celebration for American soldiers—a tribute to the military personnel serving in Iraq and Afghanistan—and an opportunity for people to gather and watch popular entertainers for no charge. But to some observers, such as the President of the National Coalition to Save Our Mall, it was a “tasteless extravaganza of electronic advertising.” Following the event, criticisms directed at the Park Service for allowing commercial signage on the Mall grew. Critics claimed there were contradictions between the Park Service’s policies and the activities that occurred during the event. Some critics of the event claimed that most, if not all, of the commercial signs should not have been displayed on the Mall. One author, a former national park ranger who is the director of a Washington, D.C.-based advocacy organization and former president of the Conservation and Preservation Charities of America, concurred with outraged critics over “the dimensions of the commercial displays that had no legitimate place on the National Mall in the first place”—his description of the giant product banners on the grounds between the U.S. 
Capitol and the Lincoln Memorial. According to the National Capital Parks-Central Superintendent, the nationally televised event was a new experience for park staff and resulted in many “lessons learned.” Concerning the “excessive commercial signage” described by some critics, the current Superintendent, who was the Acting Superintendent at the Park in September 2003, took responsibility for these issues and said she had misunderstood the amount, type, and size of signage the NFL planned to use. She noted that, in the event permit, the Park Service had not quantified the number of sponsor recognition signs allowed because it had not foreseen the need to do so. Consequently, there were also far more banners and signs posted than the Park Service had anticipated. Following the event, Congress passed legislation putting new restrictions on permits issued for the Mall. Public Law No. 108-108 prohibited the use of appropriated funds in fiscal year 2004 for special event permits on the Mall, unless the permit “expressly prohibits the erection, placement, or use of structures and signs bearing commercial advertising.” The law still allowed for recognizing the sponsors of special events, providing “the size and form of the recognition shall be consistent with the special nature and sanctity of the Mall and any lettering or design identifying the sponsor shall be no larger than one-third the size of the lettering or design identifying the special event.” As a result of this legislation, the Park Service has drafted policy guidance to restrict “the size, scale, scope and location of corporate logos and script.” In addition, National Capital Parks-Central officials now require permit applicants to provide detailed lists of planned signage along with a scaled replica of each sign to the Park Service for approval at least 30 days in advance of an event. 
Park Service regulations for the National Capital Region state that the decision to issue a special event permit must be based on a consideration of a number of factors, including whether the park area requested is reasonably suited in terms of size, accessibility, and nature of the event. The NFL kickoff event, although permitted, raised a number of access issues. For example, while the permit for the event stipulated that “all sidewalks, walkways, and roadways must remain unobstructed to allow for the reasonable use of these areas by pedestrians, vehicles, and other park visitors,” some groups complained that large portions of the Mall were inaccessible for days leading up to the NFL event. Another condition of the permit stated “no vehicle shall obstruct or interfere with the Tourmobile service that utilizes Jefferson and Madison Drives, from 3rd to 14th Streets.” However, the National Tour Association Web page advised tour operators and motorcoach drivers bound for Washington, D.C., to be aware of several street closures and the closure of access to the Mall at noon on September 4 in association with the NFL kickoff event. The Smithsonian Institution museums remained open during the festival, but access was not available from the regular Mall-side doors on the afternoon of the NFL kickoff event. According to Park Service officials, these streets were closed during the event for security reasons consistent with security plans for other large-scale public gatherings on the Mall. The Mall entrance to the Smithsonian Metro stop was also closed at noon on the day of the NFL event by the Washington Metropolitan Area Transit Authority for security reasons. In addition to addressing signage limitations, Public Law 108-108 also stated that “the Secretary shall ensure, to the maximum extent practicable, that public use of, and access to the Mall is not restricted.” There was significant turf and walkway damage to the Mall as a result of the NFL kickoff event. 
Prior to the event, National Capital Parks-Central officials required the permittee to provide irrevocable letters of credit totaling $250,000 to cover event-related liabilities, such as monitoring costs and potential resource damages as a result of the event. The actual cost of the event far exceeded the Park Service’s estimated costs. According to a November 3, 2003, letter to the permittee, the increase in damage recovery costs occurred in part due to “the increase in the number and types of heavy equipment that were utilized during the setup and break down of the event staging and other facilities.” The Superintendent noted that several days of heavy rain also contributed to the higher-than-expected amount of damage to the turf and walkways. However, a condition in the NFL kickoff event permit—which is a standard condition in special event permits—specifies that the permittee is liable for damage to the resource as a result of the permitted activity. Consequently, after the NFL kickoff event, the turf and walkway damage was assessed and the permittee was notified of the damage along with the Park Service’s estimate of repair costs. The NFL ultimately reimbursed the Park Service over $430,000 to cover both event monitoring costs and repairs to resource damage (primarily to turf and sod). The NFL reimbursed the Park Police almost $700,000 to cover the cost of security personnel for the NFL event. Prior to the event, the NFL posted a letter of credit for the Park Police in the amount of $1,150,000. The actual expenses charged for Park Police support of operations relating to the NFL kickoff permit totaled $698,625. The inclement weather was cited by Park Police officials as a factor in their reduced costs, because fewer participants showed up at the event and fewer people stayed late. This resulted in fewer required security personnel than originally anticipated and fewer actual hours of monitoring. 
Reimbursement was not sought from the NFL for the time both Park Service and Park Police officials spent in planning meetings for the 2003 NFL kickoff event. The practice of the National Capital Region—to limit charges for administration of permits to cost recovery for overtime expenses—was based on a mid-1990s unwritten legal opinion from the Solicitor’s Office at the Department of the Interior. National Capital Parks-Central officials told us they viewed time spent in event-planning meetings and in processing the permit paperwork as a “budgeted” or sunk cost. In February 2005, Interior’s Office of the Solicitor revised its legal conclusion and recommendation on this matter and advised both Park Service and Park Police officials in the National Capital Region to re-examine this practice in order to come into better compliance with cost recovery policy guidance.

The following are GAO’s comments on the Department of the Interior’s letter dated April 25, 2005.

1. We added the Special Park Uses Manager’s comment about the appropriateness of charging cost recovery when the monitoring was conducted as part of routine operations, in footnote 8 on page 10. However, based on our review of permit documentation and discussions with officials at Blue Ridge Parkway, this circumstance did not exist at Blue Ridge Parkway. Thus, no change to the example on page 3 is needed.

2. We have included the Reference Manual 53 Web site address in footnote 10 on page 13, so that readers can more easily seek out Park Service policy guidance.

3. We agree that further definition of the term “administrative fee” is warranted. As a result, we added clarifying text and a footnote to page 13 to more explicitly describe permit processing costs included in administrative fees. (See footnote 10.)

4. While it is true that FWS and the Park Service were barred from collecting a location fee for filming and photography prior to the passage of the Commercial Filming Law, this was a regulatory prohibition instituted by the agency itself. The Commercial Filming Law effectively repealed that prohibition.

5. Park Service regulations are cited in footnote 4 on page 8; consequently, including a lengthy excerpt from the regulations in the text is unnecessary.

6. See GAO comment 5.

7. We have removed the reference to the general Park Service regulations and modified the text on page 34 to describe the specific regulations associated with the National Capital Region’s permitting of the NFL event and public access restrictions.

8. See GAO comment 1.

In addition to those named above, John Delicath, Doreen Feldman, Timothy Guinane, Julian Klazkin, Roy Judy, Diane Lund, Judy Pagano, Paul Staley, and Mary Welch made key contributions to this report. Darren Goode, Robert Martin, Miguel Lujan, Glenn Slocum, and John Warner made significant contributions related to cost accounting issues during this review. Kevin Bailey, Denton Herring, and Matthew Reinhart made important graphic or data input contributions to the report.
The National Park Service routinely issues permits for special park uses, such as special events or commercial filming and still photography. However, the National Football League's use of the National Mall to launch its 2003 season raised questions about whether permitting such events was consistent with existing policies and law and whether all applicable fees for permitting special park uses were being collected. GAO (1) identified applicable policy guidance for issuing special uses permits for special events and for commercial filming and still photography, (2) assessed the extent to which this guidance was applied during fiscal year 2003, and (3) determined the extent to which the Park Service implemented the requirement to collect location fees for commercial filming and still photography. The Park Service has developed policy guidance for issuing permits for special events and for commercial filming and still photography activities. This policy guidance includes general criteria about the terms and conditions as to when and where specific types of activities can take place and requires park units to recover applicable costs associated with administering and monitoring special park uses. Recovery of costs associated with filming activities is required by law. Recoverable costs include, for example, the time charged by a park ranger to visit the site of the event, such as a festival held on park grounds, to monitor that the terms and conditions of the permit are met. During fiscal year 2003, park units did not consistently apply Park Service guidance for permitting special events and for commercial filming and still photography, and often did not identify and recover costs associated with permitting such activities, thereby decreasing financial resources available to the parks. 
Of the six park units we visited, one did not charge fees for processing applications; one only recovered monitoring costs associated with some of its permits; and three others had not updated, for several years, hourly charges to reflect current higher costs for personnel time for administering and monitoring permitted activities. For example, National Capital Parks-Central officials charged no administrative fees for the estimated 1,400-plus permits issued for special events and for filming and still photography in fiscal year 2003. Officials said that park units had not updated fees because of regional policy and a high workload or because updating the fees was given low priority. The Park Service has not implemented a law enacted almost 5 years ago to collect location fees for commercial filming and still photography, resulting in significant annual forgone revenues. The agency has not implemented the law because of delays in reviewing the proposed regulations at the Department of Justice and a lack of agreement among the Interior agencies about the fee schedule and how it is to be applied. We estimated the Park Service would have collected about $1.6 million in location fee revenues in fiscal year 2003, if the requirement to collect such fees had been implemented.
A business is generally allowed to insure an employee’s life when the business has an insurable interest in the employee. Insurable interest is defined by state law and, once established at the time of purchase, continues for the life of the insured. Thus, a business generally may maintain life insurance on employees even after their employment has ended. Business-owned life insurance can refer to corporate-owned life insurance (held by all types of corporations or only nonbank corporations), bank-owned life insurance, trust-owned life insurance (held by business-established trusts), or all three. Business-owned life insurance is permanent life insurance, which has an insurance component and a savings component. The premium for a newly issued permanent life insurance policy pays for the insurance component, but the premium initially exceeds the cost of providing life insurance protection for the insured person. The excess amount is added to the policy’s cash value, which earns interest or other investment income—called inside buildup. The inside buildup is accrued income because the policyholder does not receive cash payment as the policy earns income. The Internal Revenue Code allows for the deferral of income tax on the accumulated inside buildup on life insurance policies and some other investments that appreciate in value, such as stocks, some bonds, and real estate. In addition, the Internal Revenue Code provides for income tax-free death benefit payments on life insurance, so that, unlike other investments, the accrued income is not taxed if the policy is held until the insured party’s death. However, if a policy owner surrenders a policy before the death of the insured, the owner may incur a tax liability to the extent that the policy’s cash surrender value exceeds its cost base and may also incur a tax penalty. The cost base is equal to the total premiums paid less dividends and withdrawals received from the policy. 
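The cost base and surrender computation described above can be expressed as a small sketch. The dollar figures are hypothetical, and the additional rules the text mentions (such as the possible tax penalty) are not modeled:

```python
# Minimal sketch of the surrender-gain computation described above.
# Figures are hypothetical; penalty rules and other tax details are omitted.

def cost_base(total_premiums, dividends, withdrawals):
    # Cost base = total premiums paid, less dividends and withdrawals received
    return total_premiums - dividends - withdrawals

def taxable_gain_on_surrender(cash_surrender_value, base):
    # Taxable only to the extent the surrender value exceeds the cost base
    return max(0, cash_surrender_value - base)

base = cost_base(total_premiums=100_000, dividends=5_000, withdrawals=10_000)
print(base)                                      # 85000
print(taxable_gain_on_surrender(120_000, base))  # 35000
print(taxable_gain_on_surrender(80_000, base))   # 0 (no gain, no liability)
```

A policy held until the insured’s death avoids this computation entirely, since the death benefit is paid income tax-free.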
Also, if a business owns life insurance policies, the annual earnings and death benefit proceeds are among the factors that could make the business subject to the alternative minimum tax. To qualify as life insurance for tax purposes, a contract must qualify as a life insurance contract under applicable state law and meet one of two tests defined in Internal Revenue Code section 7702 to ensure that the contract is not overly investment oriented. In addition, while policy owners may access the cash value of their policies by borrowing against them, policy owners’ ability to deduct the interest on such loans in connection with policies covering employees, officers, and individuals financially interested in the business was limited to loans up to $50,000 per policy by the Tax Reform Act of 1986. The Health Insurance Portability and Accountability Act of 1996 eliminated the interest deductibility for these individuals, except for policies on a limited number of key persons. Before the limitations adopted in 1986 and 1996, some businesses purchased “leveraged business-owned life insurance,” in which they leveraged their life insurance ownership by borrowing against the policies to pay a substantial portion of the insurance premiums and in doing so incurred a tax-deductible interest expense while realizing tax-free investment returns. State and federal legislatures considered numerous proposals in 2003 and early 2004 that would change the conditions under which businesses may purchase business-owned life insurance, the consent requirements for such purchases, or the tax treatment of the insurance. For example, California considered and passed a law to prohibit businesses from purchasing life insurance policies on employees that are not exempt from the state’s overtime compensation requirements. 
Texas considered, but did not adopt, a proposal to prohibit business-owned life insurance except in certain cases, such as when an employee is eligible to participate in an employee benefit plan and consents to being insured. Also, several members of Congress introduced legislation in 2003 that would have required employee consent or limited the tax-favored treatment of business-owned life insurance on policies taken out on employees that were not key persons, although none had been enacted by the end of the first session of the 108th Congress. The legislation would have affected the tax-favored treatment of such policies in various ways, such as taxing policy earnings and income from death benefits except on key person policies, taxing the death benefit payments on policies where the employee died more than 1 year after leaving employment, and limiting allowed deductions for a business’s general interest expenses based on its business-owned life insurance holdings. In addition, pension legislation that the Senate Finance Committee passed in February 2004 included provisions that would generally limit the tax-favored treatment of business-owned life insurance, except for policies on those individuals the legislation defined as key persons; require employees’ written consent for a business to hold insurance on their lives; and require businesses to report policy information to IRS. In pursuing their regulatory missions, federal and state regulators have collected limited information on the prevalence and use of business-owned life insurance. Federal bank regulators have collected more data than other regulators on the prevalence of business-owned life insurance; however, the data are limited because the regulators did not require all banks and thrifts to report it. 
While SEC has not specifically required reporting on business-owned life insurance, we found that some life insurance companies had reported information on policy sales in their Forms 10-K and in a life insurance industry survey. Federal revenue estimators have estimated the annual forgone tax revenue attributable to earnings on the insurance, although IRS has not required businesses to report on the prevalence of business-owned life insurance. Information at the state level is limited, however, because state insurance regulators have not collected information on the prevalence of the policies through their financial reporting forms. Some state laws permit businesses to purchase business-owned life insurance for business continuation purposes or in connection with employee benefit plans, but businesses generally are not obligated to use the death benefit proceeds for a particular purpose. Although federal and state regulators generally have not collected data on the uses of business-owned life insurance, we found some examples of how businesses said they intended to use such policies. In monitoring the safety and soundness of individual institutions, federal bank regulators have collected more financial information than other federal and state regulators on business-owned life insurance policies. For supervisory purposes, federal bank regulators have required that regulated institutions disclose in quarterly financial reports earnings from and the cash surrender value of business-owned life insurance if the amounts exceed a certain threshold. As discussed below, the regulators have used the amounts reported to determine the need for further review of institutions’ risk exposure. Business-owned life insurance is an asset reported at cash surrender value—that is, the sum of accumulated premium payments and inside buildup, less accumulated insurance costs, fees, and charges that the policyholder would be required to pay for surrendering the policy. 
It does not take into account income tax liabilities that might result from the surrender. The Federal Reserve, FDIC, and OCC require the institutions they regulate to disclose the cash surrender value of policies worth more than $25,000 in aggregate and exceeding 25 percent of “other assets,” which include such items as repossessed personal property and prepaid expenses. Through the end of 2003, OTS required the thrifts it supervises to report the cash surrender value of policies if the value was one of the three largest components of “other assets;” in 2004, OTS began requiring all the thrifts it supervises to report the cash surrender value of their policies. We found that about one-third of banks and thrifts—3,209 of 9,439, including many of the largest institutions—had disclosed the cash surrender value of their business-owned life insurance holdings as of December 31, 2002. The remaining two-thirds either did not hold business-owned life insurance or held such insurance but did not meet the reporting threshold. The total cash surrender value of reporting institutions’ policies was $56.3 billion. A total of 259 banks and thrifts with assets of $1 billion or more owned 88 percent ($49.4 billion) of the total reported cash surrender value (fig. 1). These 259 institutions included 23 banks and thrifts that were among the top 50 largest institutions and that owned 66 percent ($36.9 billion) of the total reported cash surrender value. Because not all institutions that owned policies met the reporting threshold, these data indicate the minimum number of institutions that held business-owned life insurance and the aggregate cash surrender value of their policies; with this data we could not estimate the prevalence of business-owned life insurance or its value among institutions that did not report on their holdings. 
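The disclosure threshold paraphrased above for the Federal Reserve, FDIC, and OCC—report cash surrender value if it exceeds $25,000 in aggregate and also exceeds 25 percent of “other assets”—amounts to a simple two-part test. The sketch below illustrates that summary only; it is not the regulators’ official reporting logic:

```python
# Illustrative two-part test for the call-report disclosure threshold
# as summarized in the text (Federal Reserve, FDIC, and OCC rule).

def must_disclose_csv(cash_surrender_value, other_assets):
    # Disclose only if BOTH conditions hold:
    #   (1) aggregate cash surrender value exceeds $25,000, and
    #   (2) it exceeds 25 percent of "other assets"
    return cash_surrender_value > 25_000 and cash_surrender_value > 0.25 * other_assets

print(must_disclose_csv(30_000, 100_000))  # True  (over $25k and 30% of other assets)
print(must_disclose_csv(30_000, 200_000))  # False (only 15% of other assets)
print(must_disclose_csv(20_000, 10_000))   # False (below the $25k floor)
```

This two-part structure is why the reported totals understate the true prevalence: an institution can hold policies worth, say, $20,000 and never appear in the data.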
The federal bank regulators’ thresholds for reporting business-owned life insurance earnings differed from those for reporting cash surrender value, so the institutions that reported earnings were not necessarily the same ones that reported cash surrender value. We found that nearly one-fifth of banks and thrifts reported their 2002 annual earnings on the cash surrender value of business-owned life insurance. As of December 31, 2002, some 1,563 institutions reported $2.2 billion in such earnings. SEC officials told us that the agency has not specifically required businesses to report on their purchases or sales of business-owned life insurance because such data generally are not material to public companies. According to SEC officials, agency regulations do not specifically require public companies to disclose the value of their business-owned life insurance in the financial statements submitted to the agency. Similarly, SEC does not specifically require public companies that sell business-owned life insurance to report on those sales. Rather, in administering federal securities laws, SEC requires public companies to prepare their financial statements in accordance with generally accepted accounting principles, which would require them to disclose information about business-owned life insurance policies that is material—that is, according to SEC, information that an investor would consider important in deciding whether to buy or sell a security or in making a voting decision related to a security that the investor owns. According to SEC officials, however, following generally accepted accounting principles would rarely require holdings of and earnings from business-owned life insurance to be shown as separate line items because they are unlikely to be financially material to a company. 
Although SEC does not explicitly require insurance companies to report information on the business-owned life insurance policies they have sold, some insurance companies have disclosed such information on their Forms 10-K. By reporting their revenue from business-owned life insurance premiums, life insurance companies show how significant sales of such policies are compared with total sales; they also provide an indication of the level of demand for business-owned life insurance. We reviewed the Forms 10-K of 32 life insurance companies that were among the 50 largest such companies ranked by assets. We found that nine insurers reported receiving, in aggregate, over $3 billion in total business-owned life insurance premiums in 2002 from new and, in some cases, previous sales. The amount of business-owned life insurance premiums received in 2002 ranged from 11 to 53 percent of each company’s 2002 total life insurance premiums for the four companies that reported this information. In addition, three insurance companies reported the accumulated cash surrender value of business-owned life insurance policies they had previously sold as totaling about $28 billion as of December 31, 2002. Separate from reporting to SEC, some insurance companies have also reported business-owned life insurance sales in response to industry surveys. CAST Management Consultants, Inc., conducts research on business-owned life insurance and has reported on premiums paid on new policies. A life insurance industry association and life insurance companies cited CAST’s surveys as the only currently available information on aggregate business-owned life insurance premiums. CAST estimated that in 2001, premiums from new sales of business-owned life insurance totaled $9.3 billion: $5.2 billion in bank-owned life insurance premiums and $4.1 billion in corporate-owned (excluding bank-owned) life insurance premiums. 
CAST also estimated that in 2002, premiums from new sales of corporate-owned (excluding bank-owned) life insurance totaled $3.2 billion. CAST did not estimate bank-owned life insurance premiums for 2002. CAST’s estimates were based on responses to a 2003 survey concerning corporate-owned life insurance premiums and a 2002 survey concerning bank-owned life insurance, increased by CAST adjustments. Each survey received responses from 20 life insurance companies, although not all of the same companies responded to both surveys. In addition, a representative of the A.M. Best insurer rating company said that the company collects information on business-owned life insurance, but does not currently report the data. A.M. Best reported aggregate premiums from business-owned life insurance for 1998 (the last year for which it reported data) as more than $10 billion for 20 large insurers. Because these surveys did not use statistical samples of insurers, the resulting estimates made from the limited number of respondents do not represent statistically valid estimates of all business-owned life insurance sales and, therefore, our interpretation of the resulting data is limited. The statistics from these surveys are meant to indicate only that some large insurance companies have had active sales in recent years and that the premiums in the aggregate are significant. IRS officials told us that the agency has not generally required businesses to report on the value of, earnings on, or death benefit income from business-owned life insurance policies. The officials noted that these amounts are not typically included in taxable income and that, therefore, the information is generally not needed. Businesses that are subject to the alternative minimum tax include income from death benefits and earnings from insurance when calculating the tax, but they are not required to list the insurance-related values on the alternative minimum tax form. 
Also, businesses that are required to complete Schedule M-1, Reconciliation of Income (Loss) per Books with Income per Return, as part of their Form 1120, U.S. Corporation Income Tax Return, would report earnings on business-owned life insurance as part of the income recorded on their books but not on the tax return. However, according to IRS officials, these earnings might not be identified as earnings from business-owned life insurance, as they are often lumped together with other adjustments. Federal revenue estimators have estimated that the current tax treatment of earnings on the cash value of business-owned life insurance results in over a billion dollars in forgone tax revenues annually. In its “Estimates of Federal Tax Expenditures for Fiscal Years 2004-2008,” prepared for congressional use in analyzing the federal budget, the Joint Committee on Taxation estimated that the forgone tax revenues resulting from the tax treatment of investment income on life insurance for corporations would total $7.3 billion for 2004 through 2008. Similarly, OMB, in its fiscal year 2005 budget “Analytical Perspectives,” reported Treasury’s estimate of forgone tax revenues resulting from the tax treatment of life insurance as $13 billion for 2004 through 2008. These estimates assumed policies would be held until the insureds’ deaths, making the current tax-deferred earnings tax-free. The estimates did not reflect the forgone tax revenues on the additional income from death benefit payments in excess of the premiums paid and the accumulated tax-deferred earnings. Officials involved in preparing these estimates said that, lacking comprehensive data on the earnings on business-owned life insurance, they developed their estimates using available data on life insurance companies’ investment income and assumptions about business-owned life insurance’s share of the total life insurance market. 
State insurance regulators, concerned with state requirements, rates, and solvency issues, have collected extensive financial information from insurers through NAIC’s standardized financial reporting forms, but not at the level of detail that would describe the prevalence of business-owned life insurance policies. State insurance regulators use insurers’ financial statements to monitor individual companies’ solvency. According to the four state regulators we contacted and NAIC, information on business-owned life insurance is not required or necessary for regulating solvency. Insurers’ financial statements list the number of policies and premiums collected during the reporting period, but the amounts are broken out only by individual and group policies, not by whether businesses or individuals owned the policies. Under state laws that define insurable interest, businesses may purchase life insurance for various purposes, including for business continuation—that is, to ensure that a business can continue to operate when a key employee or owner dies. Historically, insurable interest reflected a family or business’s dependency on an individual and the risk of financial loss in the event of that individual’s death. Accordingly, a traditional use of business-owned life insurance is as key-person insurance, which is intended to ensure recovery of losses—such as a loss of earnings or added hiring costs—in the event of the death of key employees. In addition, businesses may use business-owned life insurance as part of “buy-sell arrangements” that allow the surviving owners to use the death benefits to purchase a deceased owner’s share of the business from the estate or heirs. In the 1980s and 1990s, several states expanded their definitions of employers’ insurable interest to permit purchases of broad-based business-owned life insurance in connection with employee compensation and benefit programs. 
Several of these states limit the aggregate amount of insurance coverage on nonmanagement employees to an amount commensurate with the business’s employee benefit plan liabilities or require that insured employees be eligible to receive employee benefits. Information we obtained from officials of large banks and from our analysis of a sample of public companies’ Forms 10-K indicates that firms have related their purchases of broad-based business-owned life insurance to various types of employee benefit costs, including health care for current or retired employees, life and disability insurance for current or retired employees, workers’ compensation, qualified retirement plans—including defined benefit and defined contribution plans, such as 401(k) plans—and nonqualified retirement plans, such as supplemental executive retirement plans. Consistent with this expanded use of business-owned life insurance, NAIC has observed that many products sold by life insurers have evolved to become primarily investment products. Also, consulting firms that specialize in business-owned life insurance transactions, life insurance brokers, and industry experts have emphasized the potential use of broad-based business-owned life insurance as a profitable long-term investment strategy to finance employee benefit costs and not merely as protection against financial losses that a business would incur in the event of the death of key persons. According to bank regulators and life insurance industry representatives, when purchasing life insurance, businesses generally relate the amount of coverage they purchase on a group of employees to the value of their projected employee benefit costs. 
For example, a business might insure the lives of a group of employees such that the present value of expected cash flows to be received from the policies over time, net of premiums, would cover some portion or all of the present value of the business’s employee benefit expenses over the same period of time. When calculating the expected future cash flows from the insurance, businesses would not generally assume policies will be surrendered if employees leave or retire because surrendering the policies would result in taxation and possibly surrender charges; rather, businesses would assume that they will hold the policies until the insured employees die. Because businesses may hold business-owned life insurance policies for many years before receiving death benefit payments, businesses do not necessarily receive the cash flows from business-owned life insurance at the same time that they must pay their employee benefit expenses. According to insurance industry representatives, when businesses use the insurance in connection with health care benefits for retired employees, the death benefit proceeds are well timed for reimbursing the benefit costs, because retirees tend to incur their largest medical expenses in the last months of their lives. However, we found examples of businesses that said they used the insurance in connection with current employee benefit costs, such as active employee health care. In such cases, the timing of the death benefit payments would not necessarily correspond to the timing of employee benefit expenses because businesses must pay those expenses years before receiving death benefits on most insured employees. Regardless of a business’s reported purpose for purchasing business-owned life insurance, the business generally does not have an obligation to restrict its use of the life insurance proceeds to these purposes. 
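The sizing approach described above, matching the present value of expected net insurance cash flows to the present value of projected benefit costs, can be sketched with a simple discounting calculation. All cash-flow figures and the 5 percent discount rate below are invented for illustration only:

```python
def present_value(cash_flows, rate):
    """Discount annual cash flows (received at the end of years 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.05
# Hypothetical expected cash flows from the policies, net of premiums, and
# the business's projected employee benefit costs ($ thousands per year).
net_insurance_flows = [0, 50, 120, 200, 300]
benefit_costs = [100, 110, 120, 130, 140]

pv_insurance = present_value(net_insurance_flows, rate)
pv_benefits = present_value(benefit_costs, rate)

# The share of projected benefit costs the insurance is expected to cover;
# a prepurchase analysis would compare these two present values.
coverage_ratio = pv_insurance / pv_benefits
print(f"PV of insurance {pv_insurance:,.0f}, PV of benefits {pv_benefits:,.0f}, "
      f"coverage {coverage_ratio:.0%}")
```

Note that the ratio is highly sensitive to the discount rate and time horizon chosen, which is why the regulatory guidelines discussed later require “reasonable” assumptions for these parameters.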
Although the expected income from broad-based business-owned life insurance policies over time might be commensurate with a business’s expected employee benefit costs at the time of the insurance purchase, businesses are generally not required to use the proceeds from the policies to pay for employee benefits. Unless the policies were placed in a trust that restricted their use to employee benefit payments, the life insurance policies would be part of the unrestricted general assets of the business and, as such, could be used to pay any obligations of the business. Of the federal and state regulators we contacted, only OTS has required the institutions it regulates to provide information that distinguishes among the uses of business-owned life insurance. Through the end of 2003, OTS required the thrifts it supervises to report the value of their key-person policies and the value of business-owned life insurance policies purchased for other purposes as separate items, if the amounts met the reporting threshold. Of the $3.3 billion cash surrender value that 249 OTS-supervised thrifts reported owning as of December 31, 2002, about $400 million was for key-person insurance and $2.9 billion was for other business-owned life insurance. However, these amounts may not be representative of the proportions of key-person and other business-owned life insurance that these thrifts held. OTS’s disclosure threshold applied separately to each category, so that OTS-supervised thrifts could have been required to report on only one type of policy rather than the total value of their business-owned life insurance holdings. Beginning in 2004, OTS eliminated its reporting threshold so that all the thrifts it supervises are required to report the value of both their key-person and other business-owned life insurance policies. 
The new requirement will allow OTS to determine the cash surrender value of all key-person and other business-owned life insurance held by the institutions it supervises. Although SEC did not specifically require them to do so, we found that some businesses included information on how they intended to use business-owned life insurance in the Forms 10-K they filed with SEC. We reviewed the Forms 10-K of 100 randomly selected Fortune 1000 public companies. Of these, 11 provided information on the intended use of their business-owned life insurance policies. All 11 businesses reported using these policies to provide deferred compensation or benefits for executives; 1 also reported using them to provide postretirement health care. For example, 1 of the 11 businesses reported having a supplemental executive retirement plan financed by life insurance that had a cash surrender value of about $66 million as of December 31, 2002. The amount of insurance coverage was designed to cover the full cost of the plan, which at that time was estimated to have a present value of about $69 million. Another 1 of the 11 businesses reported that it had purchased policies with a cash surrender value of about $161 million as of February 28, 2003, with the intention of using the policies’ proceeds as a future financing source for postretirement medical benefits, deferred compensation, and supplemental retirement plan obligations aggregating $241.3 million. However, the business noted that the life insurance assets did not represent a committed financing source and that the business could redesignate them for another purpose at any time. Some large businesses have also provided survey responses suggesting that some business-owned life insurance is used to finance executive benefit plans. 
Clark Consulting has conducted annual executive benefits surveys of Fortune 1000 corporations and reported on respondents’ use of business-owned life insurance to informally fund nonqualified deferred compensation and supplemental executive retirement plans. Businesses informally fund such plans by planning to have assets available to pay for them, although the assets would not generally be protected in bankruptcy. From its 2003 survey, which had a 22 percent response rate, Clark Consulting reported that 93 percent of the respondents offered nonqualified deferred compensation plans, 69 percent of those with nonqualified deferred compensation plans informally funded them, and 55 percent of those that informally funded the plans used business-owned life insurance to do so. Similarly, 71 percent of the respondents offered supplemental executive retirement plans, 53 percent of those respondents informally funded the plans, and 61 percent of those that informally funded the plans used business-owned life insurance to do so. Because the survey did not use a statistical sample of businesses and may be subject to other sources of error such as nonresponse bias, respondents’ answers cannot be projected to all Fortune 1000 companies or to all businesses in the United States. The statistics reported here are meant to indicate only that some large businesses are using life insurance in a variety of ways. Banks and thrifts are required to follow federal regulatory guidelines in purchasing business-owned life insurance. Officials from federal bank regulators that had examined some institutions’ purchases told us that these purchases had not raised major supervisory concerns. SEC’s general disclosure requirements apply to business-owned life insurance; the agency has not had specific investor-protection concerns about such policies. The Internal Revenue Code includes statutory requirements, and IRS has issued regulatory requirements related to the tax treatment of the insurance. 
IRS officials told us that the agency was studying potential concerns. States had differing laws concerning insurable interest and consent requirements for business-owned life insurance. The insurance regulators of the four states we contacted described limited oversight of business-owned life insurance sales, and the four state regulators and NAIC generally did not have concerns about the policies. Federal bank regulators have issued guidelines for purchases of business-owned life insurance that they have used in overseeing banks and thrifts’ holdings of such policies. The regulators’ oversight, consistent with their missions, includes assessing the safety and soundness of supervised institutions, and regulatory officials said that the agencies generally have not had major supervisory concerns about banks and thrifts’ business-owned life insurance holdings. They said that while business-owned life insurance carries some risk, policies that were purchased in accordance with their guidelines are generally not a major threat to an institution’s safety and soundness. The regulators cited other types of activities—such as commercial real estate, specialized, and subprime lending—as generally raising more supervisory concerns than business-owned life insurance because of increased risk or volatility. OCC and OTS guidelines describe the permissible uses of business-owned life insurance. According to Federal Reserve and FDIC officials, their agencies generally follow OCC’s guidelines. The OCC and OTS guidelines state that banks and thrifts may purchase life insurance only for reasons incidental to banking, including insuring key persons and borrowers and purchasing insurance in connection with employee compensation and benefit plans. 
The guidelines require that, before purchasing policies, a bank or thrift’s management conduct a prepurchase analysis that, among other things, determines the need for insurance and ensures that the amount of insurance purchased is not excessive in relation to the estimated obligation or risk. For example, the guidelines state that when purchasing life insurance on a group of employees, the institution may compare the aggregate obligation to the group (such as employee benefit costs) with the aggregate amount of insurance purchased. The guidelines require that the prepurchase analysis determine the amount of insurance needed using “reasonable” financial and actuarial assumptions, such as those for the time period or the discount rate used to calculate the present value of expected employee benefit costs. However, the guidelines do not specify parameters for the assumptions, such as the discount rate or time period, to be used in the prepurchase analysis—parameters that affect the amount of insurance that can be purchased. OCC officials stated that specifying such parameters would have little or no effect because banks tend to purchase less insurance than they could justify based upon their expected employee benefit expenses, regardless of the assumptions used in prepurchase analyses. In addition to the requirements for determining the need for insurance, the guidelines state that banks and thrifts using business-owned life insurance for executive compensation should ensure that total compensation is not excessive—that is, unreasonable or disproportionate to the services performed, taking into account factors such as the financial condition of the institution and compensation practices at comparable institutions. The OCC and OTS guidelines also require the bank or thrift’s prepurchase analysis to consider the risks associated with business-owned life insurance and to maintain effective senior management and board oversight of the purchases. 
In addition, the guidelines state that a bank or thrift should consider the size of its purchase of business-owned life insurance relative to the institution’s capital and diversify risks associated with the policies. The OCC and OTS guidelines require banks and thrifts to document their decisions and continue to monitor, on an ongoing basis, the financial condition of the insurance companies that carry their policies. For example, the guidelines state that institutions should review an insurance company’s ratings and conduct further independent financial analysis, with the depth and frequency of such analysis determined by the relative size and complexity of the transaction. OCC officials explained that the agency’s guidelines do not require institutions to continue to compare their projected employee benefit costs with the projected cash flows from the insurance after purchasing the policies. However, purchases of additional insurance would require a prepurchase analysis, so that institutions would be required to update their comparisons at such times. Officials at three large banks said that their banks had not compared the projected employee benefit costs and projected insurance cash flows after purchasing the insurance. Officials at a fourth large bank said their bank had updated the comparison annually in conjunction with additional insurance purchases in recent years. Federal bank regulators told us that their risk-based examination programs are designed to target aspects of banks and thrifts’ purchases of business-owned life insurance that would raise supervisory concerns about institutions’ safety and soundness. They specifically identified the credit and liquidity risks associated with business-owned life insurance as concerns that could warrant attention during an examination. Credit risk arises from the potential failure of an insurance carrier that might then be unable to pay death benefits or return the cash surrender value of policies upon request. 
OCC and Federal Reserve officials said they were less concerned about credit risk when it was diversified—for example, when institutions held policies with several highly rated insurers. Liquidity risk arises from the long-term nature of life insurance and the cost to the bank or thrift of surrendering policies. OCC officials emphasized that other risks associated with business-owned life insurance could also raise supervisory concerns, particularly among institutions with relatively large holdings. Specifically, OCC officials said that the potential risk to institutions’ reputations could be of concern as a result of negative perceptions of their holding the policies. For example, the officials noted that under California’s new law, businesses must disclose to insured employees the existence and face amount of insurance policies purchased on their lives by the end of March 2004, which could negatively affect the businesses’ reputations if employees were unaware that the policies existed. The officials also cited potential concerns about transaction risk, which arises from an institution not fully understanding or properly implementing a transaction. For example, if an institution did not comply with applicable insurable interest laws in purchasing a policy, it may not be able to collect the death benefits on the policy. Finally, OCC and Federal Reserve officials cited potential concerns about tax risk—the risk that Congress could change the tax treatment of business-owned life insurance. If any such changes were applied to previously purchased policies, banks might not receive the returns on the policies that they had expected, which could, in turn, raise supervisory concerns with respect to certain institutions. 
The federal bank regulators explained that they determined whether to include business-owned life insurance in the scope of an examination based not only on their preliminary assessment of the level of risk associated with business-owned life insurance but also on the size of an institution’s holdings relative to capital. The regulators’ examination procedures, in general, direct examiners to identify concentrations of credit—instances where the institution’s exposure to a creditor or, in some cases, a group of creditors (such as an insurance company or companies from which the institution has purchased policies) exceeds 25 percent of the regulator’s measure of the institution’s capital. All of the regulators said that, if the cash surrender value of a bank or thrift’s policies exceeded this threshold, they would consider whether further supervisory review of these holdings was warranted. Such a review would help to ensure that the institution was not unduly exposed to credit or liquidity risk and that it was complying with the guidelines on business-owned life insurance. OCC officials also said that the difficulty of quantifying the reputation, transaction, and tax risks associated with the policies underscored the importance of examiners considering whether institutions had overly concentrated holdings of business-owned life insurance. As of December 31, 2002, 467 banks and thrifts reported business-owned life insurance holdings in excess of 25 percent of their tier 1 capital. We asked the bank regulators to explain their oversight of 58 institutions with the largest concentrations, all in excess of 40 percent of tier 1 capital. Bank regulatory officials said that their agencies were monitoring these institutions’ levels of holdings through reviews of quarterly financial reports and had conducted reviews of the holdings as part of their examinations at many of the institutions. 
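The concentration screen described above reduces to a single ratio of cash surrender value to tier 1 capital. A minimal sketch follows; the dollar amounts and function names are hypothetical, not drawn from examination manuals:

```python
def concentration(cash_surrender_value: float, tier1_capital: float) -> float:
    """Ratio of business-owned life insurance cash surrender value
    to the institution's tier 1 capital."""
    return cash_surrender_value / tier1_capital

def warrants_review(cash_surrender_value: float, tier1_capital: float,
                    threshold: float = 0.25) -> bool:
    """Flag holdings that exceed the 25 percent concentration-of-credit screen."""
    return concentration(cash_surrender_value, tier1_capital) > threshold

# An institution holding $120 million of CSV against $400 million of tier 1
# capital has a 30 percent concentration and would be considered for review.
print(warrants_review(120, 400))          # True
print(f"{concentration(120, 400):.0%}")   # 30%
```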
Officials from each regulator told us their agencies had concluded that major supervisory concerns did not exist about the amount of insurance the institutions owned, although the Federal Reserve and OCC had cited the need for some institutions to improve their oversight or internal controls related to the policies. Specifically, Federal Reserve officials said that the agency had reviewed business-owned life insurance holdings as part of its examinations of the nine Federal Reserve-supervised banks that we identified (table 1). Federal Reserve officials said that the agency’s examinations did not raise concerns about the nine banks’ total holdings of business-owned life insurance. However, the officials said that the Federal Reserve had made recommendations to four of the banks, including that they conduct more diligent prepurchase analyses, communicate more information to board members, enhance internal controls, and conduct quarterly reviews of insurance carriers’ financial condition. Based on a review of examination summary reports, FDIC officials said that FDIC had criticized the level of business-owned life insurance at only 1 of the 32 FDIC-supervised institutions we identified; the officials said that the summaries might only note the results of a review of business-owned life insurance if examiners identified problems, so it was unclear how many of the other institutions’ holdings had been reviewed. OCC officials told us that OCC did not have safety and soundness concerns about the amount of holdings at any of the 15 OCC-supervised banks we identified. The officials distinguished between community banks (4 of the 15 we identified) and large banks (11 of the 15 we identified), noting that OCC’s primary supervisory concern has been with the effectiveness of community banks’ ongoing oversight of their business-owned life insurance. 
They said that OCC had reviewed the holdings of at least three of the community banks we identified and had cited the need for one of these banks to improve ongoing risk management of the policies. In contrast, the OCC officials said that large banks generally have sophisticated risk management systems and manage their insurance investments well. Although the officials did not report how many of the large banks’ business-owned life insurance holdings had been reviewed during examinations, they said that these banks sometimes approach OCC examiners before making new insurance purchases and that, in this respect, OCC monitors some banks’ business-owned life insurance programs on an ongoing basis. OTS officials told us that OTS had examined both of the thrifts we identified and did not have supervisory concerns about their current holdings or policy oversight. SEC officials said that the agency’s regulations for public companies do not specifically address business-owned life insurance; rather, SEC has relied on its broadly applicable disclosure requirements to identify any investor protection concerns. As discussed, SEC, whose mission is to protect investors and maintain the integrity of the securities markets, requires public companies to disclose material financial and other information so that investors can make informed decisions. SEC officials said that business-owned life insurance is unlikely to be a material item. However, they added that the agency would have an oversight concern if it became aware of a public company’s failure to disclose material purchases of or earnings from business-owned life insurance or if problems developed in accounting for these policies. For example, a senior SEC official said that SEC might become aware of a failure to disclose material information if it was examining a poorly performing business and found that its management had not disclosed that the business was using business-owned life insurance to sustain itself. 
SEC officials said that, to date, no such problems have arisen, and the agency has not had investor-protection concerns about public companies holding business-owned life insurance. IRS, whose mission includes administering the tax law, had some requirements related to the tax treatment of business-owned life insurance. The Internal Revenue Code defines life insurance for tax purposes, establishes its tax treatment, and limits the deductibility of interest on loans taken against policies. In addition, in September 2003, IRS and Treasury issued final regulations on the tax treatment of split-dollar life insurance policies—policies in which the employer and employee generally share costs and benefits as part of an executive compensation arrangement. Because none of IRS’s prior rulings regarding the taxation of split-dollar arrangements had directly addressed the types of arrangements that have been widely used in recent years, IRS and Treasury issued interim guidance in 2001 and 2002 that culminated in the final regulations. Under the final regulations, corporations cannot provide tax-free compensation to executives using split-dollar policies, and a business’s premium payments are treated as loans to an executive who owns the policy. If the employer owns the policy, the regulations treat the executive’s interest in the policy’s cash value and current life insurance protection as taxable economic benefits to the executive. IRS officials said that the agency was studying some possible remaining issues related to business-owned life insurance that is held by highly leveraged financial institutions such as banks and thrifts. Various sources have reported that the limitation on the deductibility of policy loan interest adopted in 1996 curtailed new sales of leveraged business-owned life insurance policies. 
However, IRS officials expressed concern that this limitation had not eliminated the tax arbitrage opportunities available through business-owned life insurance and that, for this reason, highly leveraged financial institutions such as banks and thrifts might be borrowing to indirectly finance their policies. Borrowing to indirectly finance policies can occur when businesses pay the premiums on life insurance policies by increasing debt that is not directly linked to the policies and then deducting the interest they pay on that debt from their taxable income. Although the Internal Revenue Code limits the amount of deductible interest that is linked directly to business-owned life insurance, establishing such a link is difficult because businesses may incur debt for many purposes. Borrowing to indirectly finance policies presents a tax advantage to businesses because they receive tax-deferred inside buildup from life insurance policies indirectly financed with debt on which the interest expense is tax-deductible. In addition, IRS officials said that the agency is concerned that banks may be using separate account policies to maintain excessive control over investments, which is inconsistent with the Internal Revenue Code treatment of life insurance. Internal Revenue Code provisions were intended to ensure that the primary motivation in purchasing life insurance would be the traditional economic protection provided by such policies, while discouraging the use of tax-preferred life insurance as primarily an investment vehicle. In separate account life insurance, an asset account is maintained independently from the insurer’s general account. Compared with a general account policy, which offers either a guaranteed rate of return or a rate that varies at the insurer’s discretion, a separate account policy permits the policy owner latitude in the choice of investments, particularly equities. 
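The arbitrage IRS officials describe can be quantified with a stylized one-year example: interest on general-purpose debt is deducted at the business's tax rate, while earnings inside the policy accrue untaxed. The 6 percent rates and 35 percent tax rate below are assumptions chosen only for illustration, and the sketch ignores policy fees and surrender charges:

```python
def after_tax_interest_cost(principal, borrow_rate, tax_rate):
    """Interest on debt not directly linked to the policy, net of the deduction."""
    return principal * borrow_rate * (1 - tax_rate)

def inside_buildup(premium, crediting_rate):
    """One year of tax-deferred earnings on the policy's cash value."""
    return premium * crediting_rate

# Borrow $1 million at 6 percent and pay the same amount in premiums on a
# policy crediting 6 percent: even with identical rates, the interest
# deduction creates a positive spread.
cost = after_tax_interest_cost(1_000_000, 0.06, 0.35)   # about $39,000
gain = inside_buildup(1_000_000, 0.06)                  # about $60,000, untaxed
advantage = gain - cost
print(f"Year-one advantage: ${advantage:,.0f}")
```

The advantage exists precisely because the debt cannot be traced to the policy; if the interest were directly linked to the insurance, the Internal Revenue Code's deductibility limits would apply.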
Businesses may also purchase private placement policies, or separate account policies that allow policyholders to negotiate key terms of the policies—such as who will act as investment adviser—with the insurance company. These policies also offer investment alternatives that traditional separate account policies do not, including privately traded investments in start-up businesses and private venture capital funds. Based on IRS revenue rulings, the agency decides on a case-by-case basis whether the purchaser of a policy has excessive control over separate account assets. These revenue rulings have identified factors to consider, such as whether the purchaser directs the account to make a particular investment, sells or purchases assets in the account, or communicates with the investment adviser about the selection or quality of specific investments, and whether the account’s investment strategies are broad enough to prevent the purchaser from making particular investment decisions by investing in a subaccount. IRS officials said that the agency was studying its concerns about indirectly financing policies through borrowing and about using separate account policies at five banks that IRS had identified through routine examinations. The officials said that IRS had not taken action against any of these banks. Although NAIC has developed model legislative guidelines for business-owned life insurance, the states are not required to follow them. NAIC initially developed model guidelines for business-owned life insurance in 1992 and revised them in 2002 (fig. 2). The 1992 guidelines suggested that states consider including in their laws provisions that recognize employers’ insurable interest in employees, including nonmanagement employees who could expect to receive benefits.
The 2002 revision added a recommendation for states to consider requiring employee consent to be insured and prohibiting employers from retaliating against employees who refused to grant their consent. However, states have passed a variety of laws regulating insurable interest and consent requirements for business-owned life insurance (see app. II). While some states have followed NAIC’s guidelines, state consent requirements still differ. Since NAIC revised its guidelines in 2002, several states have passed legislation requiring employers to obtain employees’ written consent before taking insurance on their lives (others already had such requirements). Also, while some states have consent provisions that specifically address business-owned life insurance, in some states consent provisions apply to life insurance policies in general. Compendiums of state laws prepared by NAIC and the American Council of Life Insurers and our review of selected state statutes indicated that, as of December 31, 2003, 35 states had laws requiring written consent (either for life insurance in general or specifically for business-owned life insurance), and another 4 states had consent requirements that were satisfied if an employee did not object to a notice of the employer’s intent to purchase a policy. However, at least 18 of these states exempted group life insurance policies from consent requirements. Additionally, 1 state required employers to notify employees when purchasing business-owned life insurance, but did not require employee consent. We spoke with insurance department officials from California, Illinois, New York, and Texas. The insurable interest and consent provisions of the four states differed, but all allowed some purchases of business-owned life insurance and three required some form of consent; two required the amount of coverage to be related to employee benefit costs (table 2).
The insurance department officials told us that they conduct limited oversight to test compliance with their states’ insurable interest and consent laws. They said that their primary method of addressing this issue was through policy form reviews, or assessments of the proposed forms that insurers would provide to policyholders when selling policies in their states. For example, New York insurance department officials said that department officials review policy forms for compliance with the state’s requirements and that, for policies on non-key employees, the form must describe insured employees’ right to discontinue the coverage on their lives and must note the statutory limitations on the coverage amounts. Also, a submittal letter that insurers must provide to the department along with the policy form must explain how the insurer will verify that New York’s insurable interest requirements are satisfied and, for non-key employees, whether the employer or the insurer will prepare the required employee consent notices. In Illinois, insurance department officials said that they review policy forms to ensure that the forms include the state’s statutory requirements related to business-owned life insurance, but the forms need not detail procedures for obtaining consent or determining appropriate amounts of coverage. NAIC staff said that state insurance regulators generally have the authority to review policies currently in force for compliance with any state requirements. But the officials from the four states we contacted said that their departments had not routinely verified that employees covered by the policies had consented to being insured or, where applicable, tested whether coverage amounts were appropriate. An official from California’s insurance department said that the department did not routinely review business-owned life insurance sales, but added that the department had recently received complaints about at least one insurer and multiple employers. 
The official noted that the department was investigating these complaints, including reviewing documentation for the policies in question, and that any policies that were found to violate state provisions could be voided. Officials in Illinois, New York, and Texas said that a pattern of consumer complaints about business-owned life insurance would cause their departments to investigate the insurance sales during market conduct examinations of insurers or refer the matter to their legal division for an enforcement action. However, the officials said that generally they had not received complaints about business-owned life insurance. In addition, NAIC staff told us that the organization maintains a national database of consumer complaints made to state insurance regulators and that business-owned life insurance had not been a significant source of complaints. As a result, NAIC had not developed a separate category for tracking such complaints. However, relying on complaints may not be an effective means of identifying violations of state law related to business-owned life insurance, because employees who are unaware of their state’s notification and consent requirements, and whose employers have not provided the required notification or obtained the required consent, would not know that they have a basis for complaining to their state insurance regulators. More comprehensive data on the prevalence and use of business-owned life insurance could be useful to Congress in assessing the potential effects of legislative proposals that address the tax-favored treatment of this insurance. Data would be most useful if reported separately for business continuation and broad-based policies because legislative proposals that would further limit the tax-favored treatment of business-owned life insurance generally have treated the policies differently—they have applied primarily to broad-based policies.
Data on business continuation versus broad-based insurance would be useful in understanding the proportion of the total business-owned life insurance market that might be affected by future legislative proposals. Useful data that are not available include the amount of tax-free income received from the death benefit payments on business continuation and broad-based policies—data that could help Congress better understand the potential effect of changes to the tax treatment of these policies on tax revenues. Other data on the prevalence and use of business-owned life insurance, further broken down or identified by business continuation and broad-based policies, might also be helpful to Congress in evaluating the potential effects of legislative proposals on businesses, their employees, and insurance companies. The cash surrender value of business-owned life insurance policies could help assess whether the value of assets invested in such policies is consistent with the behavior that Congress wishes to encourage through tax preferences. The annual premiums paid on new policies could be used to determine the demand for business-owned life insurance and the potential effect of proposed legislative changes on the market for business-owned life insurance. The number of businesses that hold business-owned life insurance policies could provide information on how many businesses might be affected by proposed legislative changes. Additional information on the size, type, and geographic location of businesses holding the insurance could be used to characterize the businesses that might be affected. Finally, although obtaining an unduplicated count of the total number of people covered by business-owned life insurance might be impractical, data on the number of each business’s employees insured under such policies could be used, for example, to determine the average number or percentage of employees covered by businesses that reported owning policies.
Although more costly to obtain, data collected over multiple periods could help identify trends that might provide additional insights into the effects of legislative proposals. Businesses that hold business-owned life insurance or insurance companies that sold the policies could provide the data for Congress’s use, but both types of entities would incur administrative costs in extracting the required information from their records and summarizing it. We did not discuss these costs with businesses; however, we expect that they would maintain financial records and insurance policy statements from which the required data could be extracted. Also, some businesses already aggregate this information for use in completing their income taxes or Forms 10-K filed with IRS and SEC, respectively, suggesting that some businesses would not have difficulty providing the data. Nonetheless, businesses might differ in their willingness to voluntarily provide the data, depending at least in part on the cost and their perception of the benefits of doing so. While we did not independently determine the costs that insurers would incur in collecting the data, officials from several insurance companies told us that extensive effort would be required to identify policies as business-owned life insurance, as opposed to policies in which a business is the owner but not the beneficiary, and extract the data that we identified as being useful for decision making. These officials also told us that it would be difficult for them to distinguish between business continuation and broad-based policies. Consistent with these concerns, three life insurance industry trade associations recently supported proposed legislation that would require businesses that hold business-owned life insurance to report some information on their policies to IRS.
While businesses might be able to provide data on the policies they own more easily than insurance companies could provide information on the policies they have sold, requiring insurance companies to report would substantially limit the number of reporting entities. About 1,200 companies sell life insurance, according to NAIC, while many more businesses purchase it. The organization collecting, analyzing, and reporting the data would also incur costs. SEC, Treasury, and NAIC are candidates for this role, because each already collects financial information from businesses that purchase business-owned life insurance, insurers, or both. One of the agencies or NAIC could collect the data by modifying existing reporting instruments, such as the SEC Form 10-K, applicable IRS tax forms, or insurance company annual reporting forms. Alternatively, the agencies or NAIC could collect the data through a survey. We did not determine the resources that would be required for the agencies or NAIC to modify their existing reporting instruments or conduct a survey. Beyond the costs, other factors could be considered in selecting one of these or another organization to lead the effort. Either SEC or Treasury might be able to combine data on business-owned life insurance with other data that businesses already report to them, such as business size, type, or location. SEC currently collects information only from publicly traded companies, whereas Treasury, through IRS, requires all businesses, including life insurance companies, to file tax returns. While taxpayer information is confidential and would not be publicly available except in the aggregate, Congress would likely need only aggregate information. Collecting business-owned life insurance data through NAIC, a membership organization of chief state insurance regulators, assumes the data would be collected from insurance companies and would involve the organization’s voluntary cooperation. 
The use of life insurance, which receives tax-favored treatment, has expanded from its traditional coverage of a family’s principal wage earners and a business’s key employees to broad-based coverage of a business’s other employees. Although recent legislative proposals have sought to limit the tax-favored treatment of business-owned life insurance, comprehensive data on the prevalence and use of such insurance have not been available for use in assessing the impact of these proposed changes. Should Congress conclude that such data would facilitate its ongoing deliberations on the appropriate tax treatment of business-owned life insurance, decisions would be required on what data are needed, who should provide the data (insurance buyers or sellers), who should collect the data (SEC, Treasury, NAIC, or another organization), how to collect the data (additional reporting or a survey), what it would cost to collect the data, and whether the benefits of collecting additional data warrant the cost of doing so. Important data for understanding the tax and other implications of changes in the tax-favored treatment of business-owned life insurance would be the amount of tax-free income received from death benefit payments, reported separately for business continuation and broad-based policies. Additional data of value could include the cash surrender value of policies, the annual dollar amount of premiums paid on new policies, the number of businesses that hold business-owned life insurance, the characteristics of businesses that own the policies, and the number of employees insured under such policies. 
If Congress decides that it needs more comprehensive data on the prevalence and use of business-owned life insurance, such as the tax-free income from death benefit payments and/or other select data reported separately for business continuation and broad-based policies, Congress could, among other alternatives, obtain the data by (1) assigning responsibility to SEC or Treasury either to require purchasers of business-owned life insurance or insurers to report the data in their financial statements or federal tax returns, respectively, or to conduct a survey of the purchasers or insurers; or (2) encouraging NAIC either to require insurers to report the data in the annual reports they file with NAIC or to conduct a survey of insurers. We received written comments on a draft of this report from Treasury, IRS, SEC, and NAIC that are reprinted in appendixes III–VI, respectively. Treasury commented that the report is well-researched and informative, but together with SEC expressed reservations about the matter for congressional consideration. Both agencies were reluctant to have a potential role in collecting data on business-owned life insurance, stating that having such a role would not be necessary to fulfill their regulatory missions. NAIC did not express such reservations, but said that it would like to evaluate the need for and utility of the data and favored using a survey as an initial step in the data gathering process. In addition, we received technical comments from Treasury, the federal bank regulators, SEC, and NAIC that we incorporated into the report where appropriate. In addressing their concern about collecting the data described in our matter for congressional consideration, SEC and Treasury commented that because they do not need the data to fulfill their regulatory missions, they do not believe it would be appropriate for them to collect the data.
Specifically, SEC expressed concern about collecting data for purposes other than protecting investors. Similarly, Treasury expressed concern about collecting information not directly needed to calculate tax liabilities or enhance IRS’s ability to audit tax returns. However, as discussed in our report, Treasury provides OMB the estimate of forgone tax revenues resulting from the tax treatment of life insurance, and OMB reports this estimate in its budget documents. As we also report, this estimate is not complete because it does not reflect the forgone tax revenues on the additional income from death benefit payments in excess of the premiums paid and the accumulated tax-deferred earnings. Accordingly, Treasury might find that gathering additional data would allow the agency to provide OMB with a more complete and accurate estimate. NAIC reiterated that collecting the data described in our matter for congressional consideration would go beyond what is needed to support states’ regulation of insurers’ solvency. But NAIC did not explicitly express reservations about being charged with collecting the data should Congress request that it do so. We recognized in the report that none of the potential candidates that we identified for collecting additional data on business-owned life insurance needs the data to fulfill its missions and that the data would be used primarily for making tax policy decisions rather than for providing regulatory oversight. As discussed in the report, if Congress decides that it needs more comprehensive data on business-owned life insurance, among its alternatives would be to turn to SEC, Treasury, NAIC, or another entity to collect the data. 
Addressing the issue of how to collect the data, Treasury commented that it would be costly to design and distribute a survey, that response rates might be low without a penalty for noncompliance, and that Treasury would not be the best candidate to conduct a survey because it is not a “statistical gathering agency.” Regarding the latter, Treasury said that a survey of insurance products could be better performed by other organizations or agencies. As discussed in the report, we agree that collecting the required data would involve an investment of resources, whether it is done through a survey or via existing reporting mechanisms. We also agree that obtaining an adequate survey response rate presents a challenge. However, according to professional literature, congressional action making the survey mandatory should significantly improve the response rate. Also, according to this literature, government surveys that have employed response improvement methods continue to achieve acceptable response rates. Additionally, the surveyed entities may be more likely to respond if they believed that doing so would be in their interest. For example, they might conclude that congressional action would be more favorable to them if it was based on more complete data. Further, although Treasury is not a statistical gathering agency, it has chosen to conduct surveys to provide required information to Congress, as well as for other purposes, such as to study the growth of investment in foreign securities. Treasury has also contracted out surveys, as have other federal agencies. NAIC also commented that collecting the data could entail significant costs. NAIC said that it would like to evaluate the need for and utility of collecting the data and suggested that an initial study sampling the data described in our matter for congressional consideration might be a cost-effective way to assess the need for broader data collection. 
We agree that such a strategy could be one way of approaching the data collection effort. We are sending copies of this report to the Chairmen of the Senate Committee on Finance, House Committee on Financial Services, Joint Committee on Taxation, and other interested congressional committees. We will send copies to the Chairman of the Board of Governors of the Federal Reserve, Secretary of the Treasury, Chairman of FDIC, Commissioner of Internal Revenue, Comptroller of the Currency, Director of OMB, Director of OTS, Chairman of SEC, Executive Vice President of NAIC, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any further questions, please call me at (202) 512-8678, dagostinod@gao.gov, or Cecile Trop at (312) 220-7600, tropc@gao.gov. Additional GAO contacts and staff acknowledgments are listed in appendix VII. To obtain information on the prevalence and use of business-owned life insurance, we analyzed the quarterly financial reports—the Call Report and Thrift Financial Report—that banks and thrifts filed with their respective regulators. We obtained the data from the Federal Deposit Insurance Corporation (FDIC), which compiles Call Report and Thrift Financial Report data collected by the Board of Governors of the Federal Reserve System (the Federal Reserve), FDIC, the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS). We also obtained additional Thrift Financial Report data from OTS. Because regulators only began collecting the information in a consistent format at the beginning of calendar year 2001, our analysis covered the eight quarters ending March 31, 2001, through December 31, 2002. 
Although we did not independently verify the accuracy of the data, we assessed its reliability by discussing the data system with bank regulatory officials and examining the data for missing or unreasonable values. We concluded that the data were reliable for purposes of this report. To obtain further information on the prevalence and use of business-owned life insurance, we reviewed the most recent annual Form 10-K financial reports that publicly traded companies had filed with the Securities and Exchange Commission (SEC) between January 2002 and September 2003. To identify information about insurance companies’ sales of such policies, we reviewed the Forms 10-K of 32 life insurance companies that were among the 50 largest such companies ranked by assets. We also searched for references on how businesses used such policies in the Forms 10-K that a random sample of 100 Fortune 1000 public companies filed with SEC. Although the examples that we identified were not necessarily representative of all businesses that own these policies, they illustrated some uses of business-owned life insurance. We also reviewed industry literature, including studies by life insurance industry consultants and brokers, and interviewed experts to identify other surveys related to business-owned life insurance sales and use. We reviewed surveys conducted by CAST Management Consultants, Inc. that estimated 2001 and 2002 business-owned life insurance premiums and by Clark Consulting that reported on businesses’ use of business-owned life insurance in 2003. We did not fully assess the quality of these surveys, but we determined that despite some limitations, the surveys illustrated the amount of some recent sales and some businesses’ uses of business-owned life insurance. 
We also obtained information concerning the prevalence and use of business-owned life insurance from officials of relevant federal regulatory agencies: four federal bank regulators (the Federal Reserve, FDIC, OCC, and OTS), SEC, and the Internal Revenue Service (IRS). In addition, we reviewed the Joint Committee on Taxation’s and the Office of Management and Budget’s reported estimates of forgone tax revenues attributable to business-owned life insurance and obtained information from officials of both entities about the development of these estimates. We also obtained information about the prevalence and use of business-owned life insurance from the National Association of Insurance Commissioners (NAIC), two life insurance trade associations (the American Council of Life Insurers and the Association for Advanced Life Underwriting), and four large banks that held business-owned life insurance. In an effort to better describe the prevalence and use of business-owned life insurance, we also considered the possibility of conducting a survey of life insurance companies, but did not do so. Although representatives of six life insurance companies cooperated in a survey pretest, and American Council of Life Insurers representatives said that they would encourage their members to participate in the survey itself, the results of the pretest led us to conclude that we would not be able to obtain sufficiently reliable data to warrant conducting the survey. The insurance company representatives told us that their companies do not have a business need to maintain the comprehensive data on business-owned life insurance that we needed for the survey. We did not verify the accuracy of these statements. Still, for the reasons the insurance company representatives cited, we were uncertain whether we would receive an acceptable response rate to a survey. 
Also, insurance companies’ requests for anonymity would have precluded us from determining the percentage of total life insurance sales the survey respondents represented. To describe federal and state regulatory requirements for and oversight of business-owned life insurance, we met with officials of federal and state agencies that have regulatory authority related to business-owned life insurance to discuss their requirements and oversight activities and reviewed agency documentation and applicable federal and state laws. Specifically, we discussed requirements and oversight with officials at each of the four federal bank regulators and reviewed their guidelines, regulations, and reporting forms and instructions. We also discussed these topics with officials at SEC and IRS and reviewed their regulations, reporting requirements, and applicable sections of the Internal Revenue Code. We also interviewed NAIC staff to gain a perspective on state approaches to regulating business-owned life insurance and to discuss the organization’s model guidelines for state laws concerning such policies. To understand more about how states oversee compliance with their statutes, we obtained information from officials of insurance departments in California, Illinois, New York, and Texas. We selected these states because they represented different geographical regions of the United States and had differing insurable interest and consent provisions for business-owned life insurance; we did not select states whose provisions did not specifically address business-owned life insurance because we concluded that their insurance departments would not have specific oversight activities or requirements for business-owned life insurance. We did not conduct a comprehensive evaluation of the quality of federal and state regulators’ oversight activities. For example, we did not review records from federal examinations of banks and thrifts or state examinations of insurers. 
In addition, although we did not review every state statute that could affect business-owned life insurance, we reviewed each state’s statutes that related specifically to business-owned life insurance. Further, to better understand differences in state laws concerning insurable interest and consent requirements for business-owned life insurance, we analyzed compendiums of state statutes from NAIC and the American Council of Life Insurers. To address the potential usefulness of and costs associated with obtaining more comprehensive data on business-owned life insurance, we reviewed state and federal legislative proposals for changing the tax treatment of business-owned life insurance or addressing other public policy issues related to this insurance and determined the types of data that could be useful in considering these kinds of proposals. We also assessed the extent to which the information we obtained on the prevalence and use of business-owned life insurance provided a comprehensive basis for decision making. In addition, we reviewed IRS, SEC, and NAIC’s reporting forms and instructions to understand what role these organizations or Treasury might play if Congress wanted them to collect, analyze, and report more comprehensive information on business-owned life insurance. We also discussed with representatives of six insurance companies the challenges insurers might face in providing data, including what data they would be able to readily provide and what data would be difficult to provide. Finally, we discussed challenges that might be faced in collecting the data with Treasury (including IRS), SEC, and NAIC representatives. We conducted our work between February 2003 and December 2003, primarily in Washington, D.C., in accordance with generally accepted government auditing standards. 
Table 3 provides information on state insurable interest and notification and consent provisions applicable to purchases of business-owned life insurance by employers or employer-sponsored trusts. The Department of the Treasury (Treasury) commented on our matter for congressional consideration, which suggested that, among other alternatives, Congress could assign responsibility to Treasury for collecting data on business-owned life insurance. We discuss these comments in the Agency Comments and Our Evaluation section of this report. We also modified the report based on the technical comments that Treasury provided, as appropriate. In addition, discussed below is GAO’s detailed response to another comment from Treasury’s April 23, 2004, letter. 1. Treasury commented that the report leaves the impression that interest not linked directly to business-owned life insurance is almost always deductible, noting that the Internal Revenue Code generally allocates a portion of the interest expense of a business to its unborrowed policy cash values. However, Treasury also noted that this provision does not apply to contracts that cover the life of a person who is an officer, director, or employee of the business. Our report addresses business-owned life insurance as permanent insurance that an employer purchases on the lives of employees, with the business as the beneficiary. Because the provision cited by Treasury does not apply to this type of insurance, we do not discuss it in the report. The Securities and Exchange Commission (SEC) commented on our matter for congressional consideration, which suggested that, among other alternatives, Congress could assign responsibility to SEC for collecting data on business-owned life insurance. We discuss these comments in the Agency Comments and Our Evaluation section of this report. We also modified the report based on the technical comments that SEC provided, as appropriate.
In addition, discussed below is GAO’s detailed response to another comment from SEC’s April 13, 2004, letter. 1. SEC commented that the efficacy of an information collection program through SEC filings would be constrained by the federal securities law’s limitations on those companies that are required to file information with SEC. Our report recognizes on page 36 that SEC only collects data from public companies. As Congress evaluates the need for additional data, it might determine that data from a subset of all companies would provide adequate information.

Ms. Davi M. D’Agostino
Director, Financial Markets and Community Investment
441 G Street, NW

Dear Ms. D’Agostino:

Thank you for the opportunity to provide you with feedback on the GAO’s draft report entitled, Business-Owned Life Insurance: More Data Could Be Useful in Making Decisions About Its Tax Treatment. The NAIC appreciates the opportunity to respond to your findings in this report. We also appreciate the opportunity to have assisted during the development of the report by meeting with the GAO to provide background information. The NAIC has been interested in this issue for some time and has also reviewed the subject. During 2002, the NAIC appointed a working group, chaired by Commissioner Jim Poolman of North Dakota, to review the NAIC’s earlier guidance and to recommend an appropriate state response. The NAIC’s COLI Working Group reviewed the model guidelines mentioned in your report and recommended an additional provision that requires written affirmative consent prior to purchase of coverage on an employee. The changes to the model guidelines also acknowledged that coverage might continue after the person was no longer employed by that company and prohibited the employer from retaliating in any way against an employee who chose not to give that consent.
The working group also considered requiring notification of the employee's spouse and including a provision that would require employers to make notification to all existing employees where coverage was already in place. These provisions were not ultimately included because of the negative effect they would have on employers in relation to the benefit that would be provided. As mentioned in the report, the majority of states do define insurable interest in such a way as to allow employers to purchase life insurance coverage on some, if not all, employees. Many states also require getting consent prior to that purchase. Employers often purchase these policies to fund employee benefits, and some state laws require that the policies be segregated into a trust for that purpose. Page 5 of your report may mislead readers by specifying that four insurance regulators have issued guidelines applicable to business-owned life insurance. Readers may assume that the report is referring to the four states interviewed, or may be confused by the wording to believe that only four states have such requirements. A reading of the chart contained in Appendix II will clarify that, in fact, most states have guidelines. The report correctly states that insurance regulators have collected little data on business-owned life insurance because concerns about abuse or misuse of the policies have not previously been forthcoming. The GAO suggests that the NAIC might be a source of information about the extent of business-owned life insurance by adding the reporting of that information to the annual statement that insurers are required to file with the NAIC. As you are aware, the NAIC collects a large amount of data from the insurance industry, a large portion of which is included in the annual and quarterly financial filings submitted to the states to support solvency regulation.
The current statutory filing requirements do not require the level of detail for business-owned life insurance, or any other specialty line of business, contemplated by the GAO in this report. We believe that the collection of more detailed information on business-owned life insurance that the GAO is suggesting goes beyond that needed to support solvency regulation. We agree there may be a significant cost associated with the collection of this data, from the standpoint of both the reporting entities and the collecting organization; however, the NAIC would like to better understand the type of information to be collected to further evaluate the need for and utility of this information. Perhaps instead of an ongoing data collection effort, a study sampling this data may be more cost effective in assessing the larger need for ongoing data collection. Thank you again for the opportunity to respond to the findings in your report. We hope these comments are helpful and that you will contact us if we can be of further assistance.

The National Association of Insurance Commissioners (NAIC) commented on our matter for congressional consideration, which suggested that, among other alternatives, Congress could encourage NAIC to collect data on business-owned life insurance. We discuss these comments in the Agency Comments and Our Evaluation section of this report. We also modified the report based on the technical comments that NAIC provided, as appropriate. In addition, discussed below is GAO's detailed response to another comment from NAIC's April 19, 2004, letter.

1. NAIC commented that a statement on page 5 of our draft report may lead readers to believe that only four states have issued guidelines applicable to business-owned life insurance. We modified that statement to clarify that we are referring only to the four states we contacted.
Although many other states have issued guidelines and requirements that are applicable to business-owned life insurance, we only discussed regulatory oversight of such policies with officials from the four states that we contacted. In addition to those individuals named above, Joseph Applebaum, Emily Chalmers, Rachel DeMarcus, Daniel Meyer, Marc Molino, Carl Ramirez, and Julianne Stephens made key contributions to this report.
Business-owned life insurance is permanent insurance held by employers on the lives of their employees, and the employer is the beneficiary of these policies. Its attractive features, common to all permanent life insurance, generally include both tax-deferred accumulation of earnings on the policies' cash value and tax-free receipt of the death benefit. Legislators have expressed concerns about the ability of employers to receive tax-favored treatment from insuring their employees' lives. GAO was asked to discuss (1) the prevalence and use of business-owned life insurance, (2) federal and state regulation and oversight of these policies, and (3) the potential usefulness of and costs associated with obtaining more comprehensive data on business-owned life insurance. Limited data are available on the prevalence and use of business-owned life insurance. Federal bank regulators have financial reporting requirements, but not all institutions holding policies meet reporting thresholds. The Securities and Exchange Commission (SEC), the Internal Revenue Service (IRS), and state insurance regulators told GAO that they generally have not collected comprehensive policy data because they have not had a need for such data in fulfilling their regulatory missions. GAO found, however, that some insurers have disclosed information about policy sales. Also, the Joint Committee on Taxation and the Office of Management and Budget have reported estimates of forgone tax revenues from these policies at $7.3 billion to $13 billion for the period 2004-2008, excluding forgone tax revenues on additional income from death benefit payments. Regulators said that they do not generally collect data on the intended use of policies, but that businesses can, for example, use business continuation policies to insure against the loss of a key employee or broad-based policies to fund employee benefits.
The federal bank regulators told GAO that they have reviewed the holdings of institutions with significant amounts of business-owned life insurance against their guidelines and concluded that no major supervisory concerns exist. SEC officials said that the agency has relied on its broadly applicable requirement that public companies disclose information material to investors in their financial statements, which would include any material information related to business-owned life insurance; SEC did not have investor protection concerns about public firms' ownership of the insurance. IRS had some requirements related to the tax treatment of the insurance and is reviewing compliance with these requirements. State laws governing the insurance differed; the four states' regulators that GAO contacted described limited oversight of the policies, and these regulators and the National Association of Insurance Commissioners (NAIC) generally reported no problems with the policies. More comprehensive data could be useful to Congress in assessing the potential effects of legislative proposals that address the tax-favored treatment of business-owned life insurance. Costs would be incurred in obtaining the data. Such data would be most useful if reported separately for business continuation and broad-based policies because legislative proposals have generally treated these policies differently. Data on the amount of tax-free income that businesses received from death benefits could help explain the potential effect of changes to the tax treatment of policies on tax revenues. Businesses holding the policies or insurance companies that sold them could provide this and other data. SEC, Department of the Treasury (Treasury), and NAIC already collect financial information from businesses and insurers and could be required or asked to collect the data. 
Should Congress decide that the data would be useful, decisions would be required on, among other things, whether the benefits of collecting the data outweigh the costs of doing so.
The Army's modular restructuring initiative began in 2004 as part of the overall transformation of the Army and was informed by earlier Army studies, such as the Stryker Brigade Combat Team effort. The foundation of the modular force is the modular brigade combat team. A primary goal of the restructuring was to increase the number of available brigade combat teams to meet operational requirements while maintaining combat effectiveness that is equal to or better than that of previous division brigades. Modular combat brigades have one of three standard designs—heavy brigade, infantry brigade, or Stryker brigade. In addition, combat support and combat service support formations have a common design that can be tailored to meet varied demands of the combatant commanders. Unlike the Army's legacy units, the standardized modular unit designs are being implemented in the National Guard and Army Reserve with the same organizational structure, equipment, and personnel requirements as active duty units. The Army plans to reconfigure its total force—to include active and reserve components—into the modular design. With the assistance of the Army, GAO identified the types of personnel and equipment that will enable the brigade-based modular force to be as capable as its predecessor, the division-based force. These key equipment enablers are classified by category, such as tactical radios. Within each category we identified the different equipment items that provide that capability; for example, in the tactical radio category, there are 317 different types of equipment (see table 1). We also classified key personnel enablers by category, such as psychological operations, and within each category we examined specific types of officer and enlisted skills. For example, within the psychological operations category we identified psychological operations officers and enlisted psychological operations specialists as key personnel enablers of the modular force (see table 2).
As part of the redesign of the modular force, the Army is developing unit blueprints that identify design requirements for equipment and personnel. The design requirement, also known as the Objective Table of Organization and Equipment or objective requirement, represents the Army’s goal of a fully modernized level of equipment and staffing for each type of modular unit and is unconstrained by resources. Because the Army’s design requirements represent a future objective that is continually updated and refined over time, the Army establishes an interim requirement, known as the Modified Table of Organization and Equipment, and authorizes equipment and personnel levels across the Army based on its current inventory of equipment and personnel, planned procurement timelines, and anticipated funding. The Army expects to use this modified list of equipment and personnel for the foreseeable future to guide the conversion of existing divisions to modular brigades. In sum, the design requirement is the level that the Army would like each unit to have in the long-term, whereas the authorized level is what the Army can afford in the interim. The Army also considered DOD’s strategic plan as it restructured to a brigade-based force. For example, the Army’s Brigade Combat Team designs were intended to be effective across the full spectrum of conflict, including global war, major theater war, smaller scale contingencies, insurgency/counter-insurgencies, and stability and support operations. Full spectrum of conflict includes a span of threats ranging from low intensity conflict, where the major threats are from ambush and skirmishes carried out by insurgents, to high-intensity conflict, where an enemy operates large numbers of armored vehicles and advanced weapons. DOD’s most recent strategic plan, the 2006 Quadrennial Defense Review, now refers to Army combat power in terms of brigade combat teams rather than number of divisions, consistent with the Army’s new structure. 
In addition, the Army will create a number of different types of modular support units, and multifunctional and functional support brigades, which will provide, for example, intelligence, logistics, communications, and other types of important support capability to brigade combat teams. The Army has traditionally evaluated units' designs and capabilities, including those of support units, across a number of domains or areas: doctrine, organization, training, materiel, leadership, personnel, and facilities (DOTMLPF). Doctrine describes how DOD fights, trains, and sustains its forces and is generally the starting point for assessing capabilities. Organization refers to the design of units—how many and what types of personnel and materiel (equipment) a unit needs to provide a specific capability. Training, materiel, leadership, personnel, and facilities are also important components in building and sustaining capabilities. By looking across the domains, the Army can evaluate how proposed changes in one area can affect other areas and the units' overall capability. For example, the Army may evaluate the effect of adding more or different types of materiel or equipment on the capability of a unit to determine whether such changes would require changes in a unit's doctrine, organization, or training requirements. TRADOC is responsible for developing designs of modular units and evaluating whether modular combat and support units will be capable of successfully conducting operations across the full spectrum of conflict. Other organizations within the Army have responsibilities for personnel, equipment, and facilities that are also critical to building and maintaining the modular force. The Secretary of Defense announced an initiative in January 2007—referred to as the Grow the Force initiative—to expand the size of the Army by about 74,200 military personnel to meet increasing strategic demands and to help reduce stress on the force.
This planned expansion includes building six additional active modular brigade combat teams and additional modular support units, which will require a substantial increase in funding for personnel, equipment, and infrastructure. In January 2007, the Army estimated this expansion may require about $70.2 billion in increased funding initially and a significant amount in annual funding to sustain the expanded Army. The Army is making progress establishing modular units, but does not have a transparent results-oriented plan with clear milestones to guide efforts to fully equip and staff the modular force. Although the Army has extended the timeline from 2011 to 2019 for fully equipping the modular force, it has not identified the total cost needed to achieve its revised equipping goal. Our prior work has shown that successful transformation initiatives have a plan that links overall results with funding needs. While the Army projects that it will make progress toward its authorized equipment and staffing goals, it is likely to face some significant shortfalls by 2012 of the modern equipment that is required for the modular force to operate as originally designed. Further, the Army's equipment and personnel plans depend on some assumptions related to rehabilitating equipment used in operations in Iraq and Afghanistan and related to recruitment and retention that may be uncertain, given the current pace of operations. According to a key 2004 Army Task Force Modularity study, the success of modular design rests in part on the availability of key enablers that are required for modular brigade combat teams to function as planned. Unless the Army provides a detailed equipment and staffing plan that links funding with results, congressional decision makers will not have the information needed to track the Army's progress toward equipping and staffing its forces. The Army is making progress establishing modular units.
In accordance with Army strategy, including its expansion plans, the Army plans to have converted 256 of 303 (84 percent) modular combat and support units through the end of fiscal year 2008. Figure 1 shows the status of the conversions for active, reserve, and National Guard combat and support brigades. As we reported in December 2007, however, modular units are being established with shortfalls of some equipment and personnel. To meet operational needs, the Army has allocated available equipment and personnel to deployed and next-to-deploy units. As a result, although the Army is converting units to modular unit designs, nondeployed units do not have all the equipment or personnel needed for the new combat and support brigades. Using a combination of regular and supplemental appropriations, the Army has spent billions of dollars procuring and repairing equipment in recent years. However, equipping deployed and deploying forces has been the priority, and the amount of equipment left for non-deployed forces has declined. In February 2008, the Chief of Staff of the Army testified before the Senate Armed Services Committee that the Army’s readiness is being consumed as fast as it can be built. The Army has announced a plan to restore balance to the force by 2011, but it has not detailed how it will achieve its goals of sustaining the force, preparing for missions, resetting equipment, and transforming for the future. The Army has extended its estimate for when it can fully equip the modular force from 2011 to 2019, but it still has not identified the total cost or established interim milestones toward reaching its revised equipping goal. Our prior work has shown that successful transformation initiatives have a clear plan with interim milestones that links overall results with funding needs. 
In our December 2007 report, we recommended that the Army develop a comprehensive strategy and funding plan as well as measures of progress for equipping and staffing the modular force. We also recommended that the Secretary of the Army report this information to Congress to assist in its oversight of Army plans. Even though the Army agreed with our recommendations, it has not yet developed the comprehensive strategy or measures of progress needed to enable congressional oversight. The Army's current investment plan is depicted in its 5-year defense plan, known as the future years defense program. However, this plan does not provide details about the Army's equipping and staffing plans to reach goals that stretch until 2019. When developing its personnel or equipment plans, the Army must consider a number of factors. First, the Army gives priority to meeting the needs of deployed forces, and these requirements depend on dynamic operational conditions. For example, the surge of forces into Iraq in 2007 required the Army to equip and staff additional units quickly. Second, the Army must consider the wear and tear of ongoing operations on its equipment and make assumptions about how much equipment currently in use can be repaired. Third, the Army must determine how much equipment to buy to replace worn-out equipment and modernize the force. Finally, the Army has to decide how to distribute equipment and personnel across its remaining units within acceptable levels of risk. Army officials told us that they use internal tracking systems to plan procurements of equipment and assess projected levels against requirements; however, visibility outside the Army over the progress in equipping and staffing the force is limited. The Army has not provided congressional decision makers with this detailed information.
The John Warner National Defense Authorization Act for Fiscal Year 2007 (hereafter Public Law 109-364) requires the Secretary of the Army to include in a report submitted annually with the President's budget, among other things, an assessment of the progress made during that fiscal year toward meeting the overall requirements of the funding priorities for equipment related to the modularity initiative as well as the requirements for repair and recapitalization of equipment used in the Global War on Terrorism, and reconstitution of equipment in prepositioned stocks. In its fiscal year 2008 report, the Army submitted a list of requested fiscal year 2009 funding amounts for selected equipment. However, the Army did not provide comprehensive information that is necessary to determine the progress it is making in equipping modular forces. Specifically, the Army's report did not include: (1) planned annual investments in acquisition and reset for equipment beyond fiscal year 2009 and quantities that it expects to procure or repair, (2) annual target levels for equipment and personnel, (3) key assumptions underlying the Army's plans, or (4) an assessment of interim progress toward meeting overall Army requirements and the impacts of shortfalls. While Public Law 109-364 does not expressly delineate the level of detail the Army should submit in the progress assessment included in its annual report, unless DOD provides information that links requirements, funding requests, and planned procurements, Congress may not have the best information on which to base funding decisions. The Army's equipping and staffing projections indicate that the Army will have enough equipment and personnel to meet aggregate equipping and staffing requirements by 2012. However, our analysis of the Army's projections showed some potential shortfalls of modern equipment, and its projections are based partly on the continued use of some older equipment.
For example, the Army projects that it will exceed its authorized level of medium tactical vehicles by fiscal year 2012, but its projections include continued use of more than 12,500 obsolete two-and-one-half-ton medium trucks that are not deployable overseas. As table 3 shows, our analysis of Army data found that when older equipment is excluded, shortfalls are projected in selected types of modern equipment within the key equipment categories. For example, our analysis showed significant shortages projected for systems that make up the tactical internet, including the Enhanced Position Location Reporting System and the Single Channel Ground and Airborne Radio System. According to the 2004 Task Force Modularity study, the full benefits of networking may not be realized if only some elements of the force have the capability. Appendix I contains a more complete discussion of our analysis and findings. The Army's projections of when it will be able to fully equip and staff the modular force are based on assumptions that will affect the actual equipment and personnel available. Expanding the size of the Army, rehabilitating equipment that has experienced wear and tear from overseas operations, recruiting and retaining personnel, and competition for increasingly scarce resources each present the Army with challenges in planning and implementation, as described below. Expanding the Army: The Army's planned expansion includes building six additional active modular brigade combat teams and additional modular support brigades within its increased end strength of 74,200. Our prior work on recruiting and retention as well as on equipping modular units has identified some potential difficulties that could arise in implementing an increase in the size of the Army at a time when the services are supporting ongoing operations in Iraq and Afghanistan. For example, our prior work has identified shortages in mid-level officers for a larger force.
Repair and restore deployed equipment: Equipment is currently experiencing significant wear and tear in overseas operations, reducing the equipment’s expected service life. It is uncertain whether it is economically feasible to repair and restore equipment that has been deployed overseas, also known as equipment reset, to preserve its service life. An Army procurement official confirmed that the Army’s equipment projections rest on some uncertain assumptions related to the ability to reset the force. Recruiting and retention of personnel: While the services have generally met their recruiting and retention goals, several factors suggest that challenges for recruitment and retention are likely to continue. For instance, the Chairman of the Joint Chiefs of Staff testified in February 2008 before the Senate Armed Services Committee that recruiters have difficulty meeting their accession goals because of a decline in the willingness of persons in a position of influence to encourage potential recruits to enlist during a time of war. Another factor that DOD has reported contributing to the Army’s recruiting challenges is that more than half of today’s youth between the ages of 16 and 21 are not qualified to serve in the military because they fail to meet the military’s entry standards. Further, the Army has experienced decreased retention among officers early in their careers and shortages within certain specialty areas such as military intelligence (see app. I for a detailed analysis of the Army’s projections for specific personnel that are critical to the modular force). Availability of personnel: A growing number of Army personnel are unavailable for assignment because they are in training or are students, are transiting between positions, or are in a “holding facility” due to medical, disciplinary, or pre-separation reasons. Historically, about 13 percent of the Army’s end-strength has been unavailable. 
However, the number of service members who are unavailable now is likely to be greater because the number of personnel unavailable due to war wounds has increased over the past several years. Availability of Funding: The Army's ability to execute its equipment and personnel plans rests on several assumptions related to future costs and available funding. DOD has relied on a combination of regular appropriations and supplemental funding to finance the transition to modularity. How long supplemental funding will be available for this purpose is unclear. We have previously reported that DOD tends to understate future costs in its equipment plans by employing overly optimistic planning assumptions in its budget formulations. A growing governmentwide fiscal imbalance could limit growth in defense funding and force choices among competing defense priorities, and rising costs for acquisition programs could require DOD to reassess the types and quantities of equipment it procures in the future. A senior Army official in the Office of the Deputy Chief of Staff for Programs stated that significant increases in the costs to procure equipment required for current operations, such as armored vehicles, represent another factor that may lead the Army to procure less equipment than expected. Moreover, personnel costs are rising dramatically, and as the costs for military pay and benefits grow, questions arise about whether DOD has the right pay and compensation strategies to cost-effectively sustain the total force in the future. While Congress has provided substantial funding in response to DOD requests, our analysis has shown that the Army has not adequately demonstrated to Congress how it intends to invest future funding to procure the modern equipment and provide the staff with critical skills that will enable modular units to operate most effectively, or when it can expect all modular units to have the equipment and personnel they are authorized.
Decision makers may not be fully informed of the Army's equipment status because the Army has not developed a comprehensive equipment and personnel plan that details the equipment the Army has in its inventories as compared with the equipment required for units to operate effectively in their modular designs, and that sets milestones against which to measure the Army's progress in equipping and staffing the modular force with key enablers. The Army uses a variety of approaches in testing unit designs and capabilities, but these efforts have not yielded a comprehensive evaluation of modular forces. Testing the modular force is intended to determine whether modular units are capable of performing potential missions across the full spectrum of conflict—and therefore needs to be as realistic as possible. Gaps in the Army's testing of the modular support forces and lack of a focal point for ensuring thorough testing of these forces could result in less capable support forces than planned. First, the Army has not fully assessed the effectiveness of its support units because it has not completed the doctrine that would define how modular support units will train, be sustained, and support the fight. Without this underpinning doctrine, the Army does not have a basic framework upon which to develop measures to assess the effectiveness of support units. Second, the Army has been testing the capability of modular forces primarily at unconstrained design levels, not at the authorized levels of personnel and equipment that the Army actually plans to provide. However, our analysis found significant shortfalls in the Army's projected equipment and personnel when measured against design levels; as a result, this approach may not realistically test the capabilities of units that will generally be given less equipment and fewer personnel than called for in the design level.
The Army has thus far focused its testing and evaluation efforts on supporting ongoing counterinsurgency operations. However, without testing that is realistic and includes support forces across a full spectrum of potential conflict, the Army faces risks associated with equipment and personnel shortfalls should another type of conflict occur. The urgent need for modular combat units has caused the Army to place its priority on assessing these critical units, but it has not completed doctrine that would define how support units—which also have important roles—will operate. Further, unlike its approach for assessing combat units, TRADOC has not identified an organization responsible for performing integrated assessments of its modular support forces. In managing its transformation to the modular design, the Army has assessed combat units across seven domains or areas—doctrine, organization, training, materiel, leadership, personnel, and facilities (DOTMLPF). These areas are interrelated—for example, adding more or different types of materiel or equipment can change the capability of a unit, a change that would need to be reflected in the unit's organization or doctrine. TRADOC has made some changes in how its modular units operate based on lessons learned in current operations. The Army has stated that its transformation efforts will be based on the underlying doctrine that defines how the Army trains, sustains, and fights. Doctrine represents an approved guidebook that details how units are expected to operate and how they will be organized, trained, and equipped to perform their missions. Army officials stated that without doctrine it is difficult to assess a unit because doctrine provides the standards by which a unit is evaluated. Even though many support units have been converted to modular designs, the Army has not yet completed the doctrine that is basic to developing strategies to train and equip units.
For example, doctrine for logistics units had not been completed, and the Army did not have a firm estimate for when it would be completed. Similarly, doctrine for all military intelligence and signal units was incomplete, and military intelligence officials were uncertain when this might be finalized. In 2005, the Army Science Board cited the lack of completed doctrine for modular support units as one issue that might limit the effectiveness of the modular force. Army officials explained that the Army cannot be sure that unit training is appropriate if doctrine is incomplete, because doctrine provides the standards by which the Army assesses unit training. Without approved doctrine, the Army cannot be assured that its efforts to assess and train modular units are adequate. Once doctrine is in place, the Army can evaluate support units across the other DOTMLPF domains. In contrast to its approach for combat units, however, the Army has not identified an organization responsible for ensuring that integrated assessments of its support units are performed across the DOTMLPF domains that affect the units' capabilities. For combat brigades, the Army has designated experienced officials within TRADOC's infantry and armor centers, called capabilities managers, who act as focal points for evaluating combat unit designs and coordinating comprehensive assessments of these units across the DOTMLPF domains. These assessments determine how best to mitigate potential risks through changes to doctrine and unit design, resolve training and equipping issues, and incorporate lessons learned. By assigning responsibility and authority for assessing forces to the capability managers, the Army has created a focal point for evaluating unit capabilities that clarifies lines of accountability and helps ensure that the designs of combat units are fully tested across the DOTMLPF domains. 
For example, the Stryker Brigade Combat Team capability manager monitors the status of doctrine for Stryker units and lessons learned from current operations and updates doctrine and unit design as needed. Similarly, TRADOC established a capability manager for the Infantry Brigade Combat Team formation who, among other things, monitors the development of assessments across the DOTMLPF domains to ensure that these areas are integrated and that the infantry unit design supports operational requirements. For example, the commander of one infantry brigade combat team stated that the infantry capability manager could help resolve concerns regarding training and equipment issues before deploying units to support the global war on terror. Without a responsible focal point to ensure that assessments across the DOTMLPF domains are conducted in an integrated fashion, the Army runs the risk that support units will not have the capabilities needed to support the modular force. TRADOC conducts computer simulations to test and evaluate the capability of the modular force based on designed equipment and personnel levels but does not perform these tests based on either authorized or available equipment or personnel levels. According to the Army, TRADOC assessed the modular force in 2004 based on the resources, equipment, and personnel specified in the modular unit design, not the authorized levels that would reflect the equipment and personnel that the units will actually have. During this assessment process, TRADOC identified some risks related to the modular transformation process and identified enablers, such as those we discussed earlier in this report, that would be needed to mitigate these risks. 
For example, when TRADOC used computer modeling tools to assess the combat capabilities of modular combat units, it determined that there was a risk associated with having two combat-focused, or maneuver, battalions in a modular combat brigade, as opposed to the three maneuver battalions that made up a combat brigade in the previous divisional structure. Based on this analysis, the Army made adjustments in the design of the units, such as adding battlespace awareness equipment, including unmanned aerial vehicles, and increasing the number of intelligence personnel, before accepting the modular designs. However, the Army's design represents an ideal future objective that is unconstrained by resources. Measured against the design level, the Army is projecting significant shortfalls in a number of different equipment and personnel areas. Since the Army accepted the modularity concept based on the design level, these shortfalls could also affect the capabilities modular units can deliver to combatant commanders. As table 4 shows, our analysis of selected key enabler equipment projections against design requirements found that the Army projects it will have less than half of the design requirement for some key equipment, such as battle command equipment, fire-finder radars, tactical and high frequency radios, and medium-wheeled vehicles. (For details of this analysis, see table 7 in app. I.) According to the Army, such enablers are critical to the modular force. During the development of the new modular brigade combat team designs, the Chief of Staff of the Army directed the Army to develop designs that would be "as capable as" the legacy designs the Army wanted to replace. In 2004, working under TRADOC, the Army Task Force Modularity assessed several brigade combat team design alternatives and concluded that selected key enablers largely determined the performance of each of the alternatives. 
As a result, the Army made some changes to modular unit blueprints and assumed that modern equipment—including advanced battle command systems, unmanned aerial vehicles, and top-of-the-line intelligence, surveillance, and reconnaissance equipment that provides a brigade commander with enhanced situational awareness—would be available for these units. These changes were meant to mitigate the risks associated with smaller but more numerous brigades; the Army created four modular brigade combat teams out of three former divisional brigades and reduced from three to two the number of battalions within a combat brigade. The Army approved an initial brigade combat team design, which senior Army leaders assessed as "good enough" for the Army's modular restructuring. Since the initial 2004 assessment of the modular brigades, the Army has used a case-by-case review process to analyze specific shortfalls and identify any needed risk mitigation strategies. These assessments have been focused on supporting ongoing counterinsurgency operations. However, because these assessments focus on a few specific shortfalls and do not examine how all the equipment and staffing work together in the modular force across the full spectrum of conflict, it is unclear whether the currently authorized personnel and equipment achieve the capability that was originally envisioned. Restructuring and modernizing the Army amid ongoing operations presents a complex and growing challenge. To date, the Army has received billions of dollars in regular and supplemental appropriations that have helped to prepare deploying units, but these investments have not yet translated into improved readiness for non-deployed units. As operations have continued, the target date for rebuilding the Army has slipped considerably and is now more than a decade away. 
We previously recommended that the Army establish management controls to assess progress in achieving its goal of fully equipping the modular force and report this information to Congress, and the Army agreed. However, in its 2008 report to Congress, the information the Army provided focused primarily on the 2009 budget year and did not include the detailed, year-by-year information that would represent the comprehensive management controls that are needed to demonstrate progress in equipping and staffing the modular force. Without detailed planning for results that includes interim targets for equipping and staffing the modular force and clearly links investments with goals for equipping and staffing modular units, DOD and Congress will not have the information needed to fully assess the Army's progress or determine the impact of any shortfalls. Moreover, without the information the Army needs to show progress toward its goals, the Army could face difficulties competing for increasingly scarce resources in the future and risks additional slippage in its timeline for rebuilding the Army. The Army's transition to the modular design has provided flexibility in supporting ongoing operations, but the effectiveness of the design across the full range of potential conflicts and with potential shortfalls in key equipment and personnel is still unknown. Understandably, the Army has focused its evaluation efforts on combat brigades supporting ongoing operations, although these are primarily counterinsurgency operations and do not represent the full spectrum of potential conflicts. However, although the integration of support forces with combat brigades is a key factor in the success of the modular design, the underpinning doctrine for modular support forces has yet to be completed. 
And, unlike its approach for combat forces, the Army has not yet identified an organization or focal point to be responsible for conducting integrated assessments of support forces across the DOTMLPF domains. By conducting an assessment of the total force against the full spectrum of requirements and identifying capability gaps in combat and support units, the Army can identify options that balance short-term needs with long-term risks. Lacking an analysis of the capabilities of the modular force at authorized levels—which represent what the Army actually plans to have—the Army will not be in a position to realistically assess whether the capabilities that it is fielding can perform mission requirements. To improve the Army's focus on the relationship between investments and results and the completeness of the information that the Army provides Congress, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following action: Develop and report to Congress a results-oriented plan that provides detailed information on the Army's progress in providing the modular force with key equipment and personnel enablers. The plan should show actual status and planned milestones through 2019 for each type of key equipment and personnel, including goals for on-hand equipment and personnel levels at the end of each fiscal year; projected on-hand equipment and personnel levels at the end of each fiscal year, including planned annual investments and quantities of equipment expected to be procured or repaired, as well as key assumptions underlying the Army's plans; and an assessment of interim progress toward meeting overall Army requirements and the risks associated with any shortfalls. To enhance the Army's efforts to comprehensively assess modular designs, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions: Develop a plan, including timelines, for completing doctrine for modular support forces. 
Establish an organizational focal point to ensure that integrated assessments of modular support units' designs are performed across the DOTMLPF domains. Assess the capabilities of the modular force based on the amount and type of authorized equipment and personnel to identify capability shortfalls between authorized and design levels, and take steps to revise authorized levels where appropriate. In commenting on these recommendations, DOD either disagreed or offered responses that we considered not to be fully responsive to the intent of our recommendations. We are therefore elevating the following matters for congressional consideration. Congress should consider amending section 323 of Public Law 109-364 to require the Army to include in its statutorily required report on modularity a results-oriented plan that provides (1) goals for on-hand equipment and personnel levels at the end of each fiscal year; (2) projected on-hand equipment and personnel levels at the end of each year, including planned annual investments and quantities of equipment expected to be procured or repaired, as well as key assumptions underlying the Army's plans; and (3) an assessment of interim progress toward meeting overall Army requirements and the risks associated with any shortfalls. To ensure that Congress is kept informed about the progress in implementing modular designs across the Army's operating forces and about the capabilities of the modular force and associated risks from personnel and equipment shortfalls, it should consider revising section 323 of Public Law 109-364 to require the Army to report on the status of its transition to modularity, including assessments of (1) the status of development of doctrine for how support forces will train, be sustained, and fight, and (2) the capabilities of modular units with expected personnel and equipment and the risks associated with any shortfalls against required resources. 
In written comments on a draft of this report, DOD disagreed with one recommendation, agreed with two recommendations, and partially agreed with one recommendation. DOD disagreed with our recommendation to report detailed information on the Army's progress in equipping and staffing the modular force. The department agreed with our recommendations to develop a plan for completing doctrine for modular support forces and to establish a focal point for assessing modular support units' designs. However, the department stated that its current processes adequately address these issues. The department partially agreed with our recommendation to assess the capabilities of the modular force. However, DOD stated that the Army assesses the capabilities of the force in many ways, that its current assessments are adequate, and that additional actions are not necessary. As discussed below, we continue to believe that the actions we recommended are important to improve the Army's ability to identify gaps in personnel and equipment and target investments to improve capabilities more efficiently, as well as to manage the transition of support forces to modular designs and operations. Therefore, we have raised these actions as matters for congressional consideration. DOD stated that our first recommendation to develop and report to Congress a results-oriented plan that provides detailed information on the Army's progress in providing the modular force with key equipment and personnel enablers is not needed because the department's budget, yearly acquisition reporting, and congressionally required reporting provide information on the status and plans for equipping and manning the force. In addition, DOD stated that yearly goals and projections for on-hand equipment and personnel are highly variable, given fluctuations attributed to unit position in the Army Force Generation cycle, equipment repair and reset plans, and planned modernization acquisitions. 
Although we agree that the Army provides Congress with information through its planning, budgeting, and acquisition systems, these systems do not constitute a coherent plan that provides sufficient information on the agency's progress in equipping and staffing the modular force. Without the benefit of a clear plan and milestones against which to assess progress, the Army cannot assure Congress that it is on a path to restore readiness or indicate when it will have the equipment and personnel it needs. The Army has relied heavily on supplemental funding to support its transition to modularity, and the Army has placed its priority for equipping and staffing on deploying forces. However, in light of pressures on the federal budget, the Army needs to make clear how it will use the funding it requests, when it expects to be able to fully resource its forces in accordance with its force generation cycle, and the extent to which improvements are being achieved in the interim. Therefore, we have elevated this to a matter for congressional consideration, suggesting that Congress consider directing the Army to include in its annual report on modularity detailed information on equipment and personnel levels, progress toward equipment and staffing goals, and risks associated with any shortfalls. DOD agreed with our recommendation that the Secretary of the Army develop a plan, including timelines, for completing doctrine for modular support forces but stated that its current assessments are adequate. However, DOD's response did not address two specific issues we raised: (1) the doctrinal manuals for support forces are not complete and (2) no plan with milestones for completing the manuals has been developed. In its comments, DOD stated that it had published Field Manual 3-0, Operations, and that this manual included doctrine for modular support forces. 
We agree that Field Manual 3-0 serves as broad-based direction for all Army doctrine; however, it does not include specific modular support force doctrine that defines how modular support units will train, be sustained, and fight. As the report discusses, the Army's Training and Doctrine Command has published, in separate field manuals, doctrine for each type of modular combat unit that details how these units will train, be sustained, and fight. Our report highlights the need for support-unit-specific doctrine to provide the standards by which support unit training can be evaluated. Until the Army develops a plan to complete such doctrine that includes a timeline and designates appropriate authority and responsibility, it is not clear that priority will be placed on this effort. We believe that the actions the department has taken do not meet the intent of our recommendation to improve the assessment of support forces and that our recommendation has merit. Therefore, we have elevated this to a matter for congressional consideration, suggesting that Congress consider requiring DOD to report on the Army's progress in developing specific doctrine for modular forces, including support forces, in its annual report on Army modularity. DOD agreed with our recommendation that the Army establish an organizational focal point to ensure that integrated assessments of modular support units' designs are performed across the doctrine, organization, training, materiel, leadership, personnel, and facilities domains. 
However, in its written comments, the Army indicated that the Deputy Chief of Staff is the focal point for organization, integration, decision making, and execution across the spectrum of activities encompassing requirements definition, force development, force integration, force structure, combat development, training development, resourcing, and privatization, and that these activities include serving as the focal point for integrated assessments of unit designs across the doctrine, organization, training, materiel, leadership, personnel, and facilities domains. Our recommendation, however, was not directed toward the responsibilities or authorities of senior Army leadership. Rather, our recommendation focuses more narrowly on the need to address the current lack of integrated assessments of modular support units. Our recommendation was intended to encourage as a best practice the Army's current strategy of appointing a focal point for ensuring integrated assessments of modular combat units and to highlight how applying this strategy could improve the integration of assessments for support units. We recognize that there are a number of ways that the Army could address the intent of this recommendation to improve integration of assessments for support forces, so we have not elevated this as a matter for congressional consideration at this time. However, we continue to believe that employing the best practice of appointing a focal point for integration would improve the Army's ability to integrate assessments across domains for each type of support unit. DOD partially agreed with our recommendation to assess the capabilities of the modular force based on the amount and type of authorized equipment and personnel in order to identify capability shortfalls between authorized and design levels and to revise authorized levels where appropriate. 
In its comments, DOD stated that the Army assesses the capabilities of the force in many ways and that modular brigades are assessed based on the missions assigned and the ability to accomplish these missions given the personnel, training, and equipment available. Further, DOD stated that the Army is currently assessing its capabilities and no new direction is needed. We agree that the Army performs many types of assessments of force capabilities. However, although the Army provided us documentation of its assessments of modular combat force designs with the level of equipment called for in the unit design, we found no evidence that the Army has assessed the modular forces with the personnel and equipment that these forces can realistically expect to have. As our report discussed, we identified significant shortfalls in the Army's projected equipment and personnel when measured against the design levels. Further, the Army has focused its testing and evaluation efforts thus far on supporting ongoing counterinsurgency operations. We continue to believe that until the Army begins to test units with realistic personnel and equipment levels and across the full spectrum of conflict, the Army faces risks associated with shortfalls of key equipment should future operations in a different kind of conflict require different capabilities. Therefore, we elevated this to a matter for congressional consideration, suggesting that Congress consider requiring an assessment of modular force capabilities and associated risks at expected levels of personnel and equipment and across the full spectrum of conflict. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (404) 679-1816 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. For the 15 key equipment and 9 key personnel enabler categories we identified, each profile presents a general description of the equipment items or the functions of the military personnel. We grouped key enablers into broad equipment and personnel categories that include more specific equipment items and military occupational specialties that are critical to the modular force design. For example, tactical radios are a key equipment enabler category that includes numerous equipment items, such as the Single Channel Ground and Airborne Radio System, that may consist of both older and more modern variants. Signals is a key personnel enabler category that includes two enlisted occupational specialties (nodal network operator/maintainer and satellite communication systems operator/maintainer) and one officer occupational specialty (signal corps officer). Our selection methodology generally required that equipment and personnel be assigned to at least two types of modular units (brigade combat teams, multifunctional support brigades, or functional support brigades) to qualify as a key enabler. We excluded certain types of equipment that are important to brigade combat teams, such as Abrams tanks and Bradley fighting vehicles, because they are present in both the new brigade designs and the previous divisional structure. After we identified a preliminary list of key enablers, we submitted this list to the Headquarters, Department of the Army, for official input and held subsequent discussions with Army officials. 
Based on our discussions, we developed and submitted to the Department of the Army a final list of key equipment and personnel enablers of the modular force that served as the basis for our data request. An Army procurement official identified the specific equipment line items associated with each of the key equipment enablers, and personnel officials verified that we had identified the appropriate skills associated with these enablers. The All-Source Analysis System is the Army's primary intelligence integration program, found at battalion and higher echelons throughout the Army. This system is composed of a laptop and desktop configuration that provides battlefield commanders with enhanced situational awareness and timely intelligence on enemy force deployments, capabilities, and potential courses of action. Our analysis includes the four equipment items that encompass this system, such as the All Source Analysis System AN/TYQ-93. The Office of the Army Deputy Chief of Staff for Programs stated that capabilities from this system will convert into the Distributed Common Ground System – Army, which is expected to be fielded to active Army, Army National Guard, and Army Reserve units by the end of fiscal year 2010. The Analysis and Control Element is a subsystem of the All Source Analysis System that provides commanders above the brigade level with intelligence processing, analysis, and dissemination capability. This category includes eight equipment items, including the Analysis and Control Element (ACE) AN/TYQ-89, which operates at the divisional level. Battle command systems enhance the ability of the commander to gain information and make decisions through the use of technology, such as Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance equipment. 
Our analysis includes 95 specific equipment items within this enabler category, such as the Force XXI Battle Command Brigade-and-Below and the Movement Tracking System. The Force XXI Battle Command Brigade-and-Below forms the principal digital command and control system for the Army at brigade level and below; it also connects platforms from lower-level units through the Tactical Internet. The Movement Tracking System is a satellite-based tracking and communications system that provides situational awareness to combat support and combat service support units. Army officials in the Office of the Deputy Chief of Staff for Programs indicated that to mitigate the overall shortfall of battle command equipment, the Army will retain older variants that are in oversupply until new equipment is delivered. However, shortfalls in this category are greater than the availability of older equipment. The Fire Support Sensor System designates targets to enable ground- and air-delivered precision strike capability. Our analysis includes six equipment items for this enabler category, such as the Armored Knight Fire Support Vehicle, the Bradley Fire Support Vehicle, and the Stryker Fire Support Vehicle. For example, the Knight vehicle provides precision strike capability by locating and designating targets for both ground- and air-delivered laser-guided ordnance and conventional munitions. Army officials in the Office of the Deputy Chief of Staff for Programs indicated that force structure changes are expected to reduce overall requirements for this system, which would eliminate potential equipment shortages. The officials also stated that the Army plans to continue to modernize its fleet of Fire Support Vehicles with upgrades and replacements of non-repairable equipment. Firefinder radar is specialized equipment that uses radar to detect the location of mortars, artillery, and short- and long-range rockets. 
Our analysis includes six equipment items for this enabler category, such as the Firefinder AN/TPQ-36, which locates medium-range rockets. To mitigate overall shortfalls of these radars, the Army will retain a surplus of older radars until its modernization efforts replace existing equipment. The Joint Network Node is the Army's modernization of the tactical communications network. This node provides high-speed, high-capacity tactical network communications and data transport down to battalion level, which supports command and control, intelligence, and logistics communications. Our analysis includes eight equipment items for this enabler category, such as the Battalion Command Post, which provides communications at the battalion level. In June 2007, the Under Secretary of Defense for Acquisition, Technology, and Logistics approved a merger of the Joint Network Node with the Warfighter Information Network-Tactical system.
Long Range Advanced Scout Surveillance
The Long Range Advanced Scout Surveillance System provides long-range target acquisition and far-target location capabilities to armor and infantry scouts, enabling them to conduct reconnaissance and surveillance operations beyond the range of enemy fire. It is a component of the Fire Support Sensor System, which provides target designation capability for fire support teams. Our analysis includes three equipment items for this enabler category, such as the Night Vision Sight Set and the Long Range Scout Surveillance System AN/TAS-8. Army officials in the Office of the Deputy Chief of Staff for Programs stated that the Army plans to mitigate shortages by using substitute items that can provide the same or similar capabilities as the required item until the Army can procure the modernized item. High frequency radios provide commanders with beyond-line-of-sight voice and data capability. 
Our analysis includes 17 equipment items for this enabler category, such as the High Frequency Radio Set AN/PRC-150C manpack that is carried by soldiers. The Army's goal is to procure the Joint Tactical Radio System, which provides a networking capability with multichannel, multiwaveform capabilities to increase speed and reliability of service. Currently, the Army is using older radios that it plans to replace; however, these older systems do not exist in enough numbers to address these shortages. Tactical radios provide the ability and flexibility for command and control of combat forces on the battlefield and maintain contact with the lowest level, the squad leader. Our analysis includes 317 equipment items for this enabler category, such as the Enhanced Position and Location Reporting System. The Single Channel Ground and Airborne Radio System provides commanders with a secure combat net radio with voice and data handling capability in support of command and control operations. The Enhanced Position and Location Reporting System radio provides a tactical Internet and communications capability. The Army's goal is to procure the Joint Tactical Radio System, which provides a networking capability with multichannel, multiwaveform capabilities to increase speed and reliability of service. In the near term, the Army maintains older, less capable radios, such as earlier versions of the Single Channel Ground and Airborne Radio System, to meet its tactical radio requirements.
Tactical Wheeled Vehicles – Light
The family of light tactical wheeled vehicles consists of the High Mobility Multipurpose Wheeled Vehicle, which is a light, mobile, four-wheel drive vehicle. It has six configurations: troop carrier, armament carrier, shelter carrier, ambulance, missile carrier, and scout vehicle. Our analysis includes 43 equipment items for this enabler category, such as the 1-1/4 ton cargo and troop carrier. 
Current operations are placing a heavy burden on these vehicles, and the Army has made numerous design and configuration changes to them, such as improving their armor protection. Ultimately, the Army plans to replace this vehicle with the Joint Light Tactical Vehicle, which will be available in 2015.
Tactical Wheeled Vehicles – Medium
The family of medium tactical wheeled vehicles provides multipurpose transportation, such as resupply and mobility assets, for combat support and combat service support units and includes cargo, tractor, van, wrecker, and dump trucks. Our analysis includes 176 equipment items for this enabler category; some of the older vehicles are 2-1/2 ton cargo vehicles, while newer models are 5 ton trucks. The Army has a medium vehicle modernization strategy that is scheduled to be completed in 2022. Until then, the Army will use older trucks to meet its requirements.
Tactical Wheeled Vehicles – Heavy
The family of heavy tactical wheeled vehicles performs unit resupply for combat, combat support, and combat service support units. Our analysis includes 106 equipment items for this enabler category, such as Heavy Expanded Mobility Tactical Trucks, Palletized Load System trucks, Heavy Equipment Transports, and Line Haul trucks. The Heavy Expanded Mobility Tactical Truck provides all-weather, rapidly deployable transport capabilities for resupply of combat vehicles and weapon systems. The Palletized Load System truck is a prime mover with a load handling system. The Heavy Equipment Transport truck transports equipment such as tanks, fighting and recovery vehicles, and self-propelled howitzers. Line Haul trucks include the line haul tractor, light equipment transporter, and dump trucks. To address the shortfall of these trucks, the Army uses older equipment items that are authorized as substitute items. 
The Trojan Spirit is an intelligence dissemination system that provides high-capacity satellite communications services at the Top Secret and Sensitive Compartmented Information levels to tactical Army forces. Our analysis includes 14 equipment items for this enabler category, such as the Trojan Spirit Lite. Army officials in the Office of the Deputy Chief of Staff for Programs stated that the Army plans to modernize and upgrade Trojan Spirit with current technology to prevent the obsolescence of this program until the system is replaced by the Warfighter Information Network – Tactical in the 2014-2021 timeframe.

Unmanned Aerial Vehicle – Prophet

The Prophet unmanned aerial vehicle provides an all-weather, near-real-time view of an area of responsibility through the use of signals and intelligence sensors. According to the Army, the Prophet provides the brigade combat team commander with the intelligence capability to visually display the battlespace. Our analysis includes eight equipment items for this enabler category, including the Countermeasures Detection System AN/MLQ-40. Army officials in the Office of the Deputy Chief of Staff for Programs stated that the Army's strategy to mitigate equipment shortfalls is to maintain older equipment longer as substitutes until they can be replaced.

Unmanned Aerial Vehicle – Small

The small unmanned aerial vehicle provides reconnaissance, surveillance, and target acquisition capabilities to ground commanders. Our analysis includes 51 equipment items for this enabler category, such as the Extended Range Multi-Purpose Unmanned Aircraft System and the Raven B. The Army has a shortfall for these items at the authorized and design levels, and the conversion to the modular force structure increased the requirement for these vehicles. However, the Army does not have older equipment to make up for these shortages.
Table 5 illustrates, by key equipment enabler category, the on-hand or available equipment at the authorized level for modular force units for the total Army—active and reserve components—in fiscal years 2007 and 2012. For example, the Army projects to have 100 percent of its authorized equipment by 2012 in the Analysis and Control Element category, whereas the Army had 21 percent of authorized levels in fiscal year 2007. In contrast, the Army projects to have 67 percent of authorized levels of small unmanned aerial vehicles in fiscal year 2012, an improvement from fiscal year 2007, when it had 34 percent of its authorized level.

Table 6 illustrates, by key equipment enabler category, the on-hand or available equipment at the design level for modular force units for the total Army—active and reserve components—in fiscal year 2007. These data include an analysis at the aggregate level of all equipment on hand in a category and the specific modern equipment required in the design.

Table 7 illustrates, by key equipment enabler category, the projected available equipment at the design level for modular force units for the total Army—active and reserve components—in fiscal year 2012. These data include an analysis at the aggregate level of all equipment projected to be on hand in a category and the design equipment, which represents the specific equipment items that are required in the design.

We identified nine key personnel enabler categories. Within a category, we selected military occupational specialties that are critical to the modular force design. Ammunition personnel manage and maintain armament, missile, and electronic systems and conventional and nuclear munitions and warheads, and perform the detection, identification, rendering safe, recovery, or destruction of hazardous munitions.
The Explosive Ordnance Disposal Officer is responsible for operations that include the location, rendering safe, removal, disposal, and salvage of unexploded conventional, nuclear, biological, and chemical munitions. Explosive ordnance disposal officers are assigned to modular units such as the headquarters units within a combat support brigade (maneuver enhancement). The Army's goal is to fill this occupational branch at 100 percent or higher. To meet staffing goals, the Army offers several incentives to captains, such as choice of occupational branch, duty station, civilian graduate education, military school, or cash bonuses in exchange for 3 additional years of obligated service. The Army also offers similar options to pre-commissioned cadets in exchange for extending their initial service obligations, and bonuses to recruit active duty Air Force and Navy officers to transfer to the Army.

Explosive Ordnance Disposal Specialist (Enlisted)

Explosive ordnance disposal specialists locate, identify, render safe, and dispose of conventional, biological, chemical, or nuclear ordnance; improvised explosive devices; weapons of mass destruction; and large vehicle bombs. They also conduct intelligence-gathering operations on foreign ordnance. Explosive ordnance disposal specialists are assigned to modular units such as the headquarters units within a combat support brigade (maneuver enhancement). The Army's goal is to fill this occupational specialty at 100 percent or higher. Current operations have increased the need for explosive ordnance disposal specialists, which has led to a shortfall in this occupational specialty. Shortages also stem from the high level of prerequisites personnel must meet to qualify for this specialty, a high attrition rate in training, and low retention of career personnel due to competition from the private sector.
To meet staffing goals, the Army has given this specialty a high recruiting priority and offers its second-highest enlistment bonus to new recruits and retention bonuses to personnel who re-enlist. Personnel from overfilled occupational specialties are also encouraged to convert to this one without extending their service obligations, or they can receive a retention bonus by re-enlisting.

Armor personnel direct, operate, and employ tanks, armored vehicles, and related equipment in support of infantry.

Cavalry Scout (Enlisted)

The cavalry scout leads, serves, or assists as a member of a scout unit in reconnaissance, security, and other combat operations. More specifically, the cavalry scout operates and maintains scout vehicles and weapons and engages enemy armor with anti-armor weapons; serves as a member of observation and listening posts; gathers and reports information on terrain features and enemy strength; and collects data for the classification of routes, tunnels, and bridges. Cavalry scouts are assigned to modular units such as the headquarters units of battlefield surveillance brigades and the special troop battalions and combined arms battalions of heavy brigade combat teams. The Army's goal is to fill this occupational specialty at 100 percent or higher. To meet staffing goals, the Army offers enlistment bonuses to new recruits and retention bonuses to personnel who re-enlist.

Artillery personnel provide fire support to Army units through the employment of field artillery systems. These personnel control, direct, and perform technical firing operations and coordinate the efforts of multiple fire support assets.

Field Artillery Firefinder Radar Operator (Enlisted)

The field artillery Firefinder radar operator is responsible for operating or providing leadership in the operation of field artillery radar systems.
Specific responsibilities include establishing and maintaining radio and wire communications, operating and maintaining Firefinder radars, and constructing fortifications and/or bunkers used during field artillery operations. Field artillery Firefinder radar operators are assigned to modular units such as the fires battalion of a fires brigade. The Army's goal is to fill this occupational specialty at 95 percent or higher. To accommodate growth in staffing needs for field artillery Firefinder radar operators, the Army has significantly increased its recruiting requirements and training capacity. To meet staffing goals, the Army has given this specialty a high recruiting priority and offers its second-highest enlistment bonus for new recruits and retention bonuses for personnel who re-enlist. Personnel from overfilled occupational specialties are also encouraged to convert to this one without extending their service obligations, or they can receive a retention bonus by re-enlisting.

Civil Affairs personnel support the commander's relationship with civil authorities, the local populace, nongovernmental organizations, and international organizations. These personnel must possess critical skills associated with a specific region of the world, foreign language expertise, political-military awareness, and cross-cultural communications. The civil affairs officer prepares economic, cultural, governmental, and special functional studies, assessments, and estimates. These personnel also coordinate with, enhance, develop, establish, or control civil infrastructure in operational areas to support friendly operations. Additionally, they develop cross-cultural communicative and linguistic skills that facilitate interpersonal relationships in a host country environment. Civil affairs officers are assigned to modular units such as the headquarters unit of the combat support brigade (maneuver enhancement) and heavy brigade combat team.
The Army’s goal is to fill this occupational branch at 100 percent or higher. To meet staffing goals, the Army offers several incentives to captains, such as choice of occupational branch, duty station, civilian graduate education, military school, or cash bonus in exchange for 3 additional years of obligated service. The Army also offers similar options to pre-commissioned cadets in exchange for extending their initial service obligations and bonuses to recruit active duty Air Force and Navy officers to transfer to the Army. Civil Affairs Specialist (Enlisted) Civil affairs specialists identify critical requirements needed by local citizens in combat or crisis situations. They also locate civil resources to support military operations, mitigate non-combatant injury or incident, minimize civilian interference with military operations, facilitate humanitarian assistance activities, and establish and maintain communication with civilian aid agencies and organizations. Civil affairs specialists are assigned to modular units such as the headquarters unit of the maneuver enhancement brigade and heavy brigade combat team. The Army’s goal is to fill this occupational specialty at 100 percent or higher. The Army only recruits personnel to fill this occupational specialty from current servicemembers. To meet staffing goals, the Army offers retention bonuses to personnel who re-enlist and critical skills retention bonuses targeted to senior noncommissioned officers with 17 or more years of service who remain on active duty. Mechanical maintenance personnel perform repair functions on Army weapons systems and equipment that support maneuver forces in their preparation for and conduct of operations across the entire operational spectrum. 
Light-Wheel Vehicle Mechanic (Enlisted)

The light-wheel vehicle mechanic supervises and performs field, intermediate, and depot-level maintenance and recovery operations on light and heavy wheeled vehicles, associated trailers, and material handling equipment. Light-wheel vehicle mechanics are assigned to modular units such as the forward support company within a fires brigade and the brigade support battalion within an infantry brigade combat team. The Army's goal is to fill this occupational specialty at 95 percent or higher. To meet staffing goals, the Army has designated this specialty a high recruiting priority and offers enlistment bonuses to new recruits and retention bonuses to personnel who re-enlist. Personnel from overfilled occupational specialties are also encouraged to convert to this one without extending their service obligations, or they can receive a retention bonus by re-enlisting.

Military intelligence personnel provide commanders with all-source intelligence assessments and estimates at the tactical, operational, and strategic levels dealing with enemy capabilities, intentions, vulnerabilities, and the effects of terrain and weather on operations, and predict enemy courses of action. In particular, they collect intelligence; produce threat estimates; ensure proper dissemination of intelligence information; conduct interrogation operations of enemy prisoners of war; interpret imagery; and perform counterintelligence operations.

Intelligence Analyst (Enlisted)

The intelligence analyst supervises, performs, or coordinates the collection, management, analysis, processing, and dissemination of strategic and tactical intelligence. Furthermore, the intelligence analyst processes incoming information, determines its significance and reliability, and performs analyses to determine changes in enemy capabilities, vulnerabilities, and probable courses of action.
Intelligence analysts are assigned to modular units such as the headquarters unit of a heavy brigade combat team and the military intelligence battalion of the battlefield surveillance brigade. The Army's goal is to fill this occupational specialty at 95 percent or higher. The Army expects staffing needs for this occupational specialty to increase due to the conversion to the modular force. To meet staffing goals, the Army has designated this specialty a high recruiting priority and offers enlistment bonuses to new recruits, retention bonuses to junior personnel who re-enlist, and critical skills retention bonuses to senior noncommissioned officers who remain on active duty. Personnel from overfilled occupational specialties are also encouraged to convert to this one without extending their service obligations, or they can receive a retention bonus by re-enlisting.

Human Intelligence Collector (Enlisted)

Human intelligence collectors supervise and conduct interrogations and debriefings in English and foreign languages and prepare and edit tactical interrogation reports and intelligence information reports. Additionally, they translate and use captured enemy documents and open-source foreign language publications in support of promoting peace, the resolution of conflict, and the deterrence of war. Human intelligence collectors are assigned to modular units such as the headquarters units of heavy brigade combat teams and the military intelligence battalion of the battlefield surveillance brigade. The Army's goal is to fill this occupational specialty at 100 percent or higher. The Army expects staffing needs for this occupational specialty to increase because of the conversion to the modular force. However, the Army is challenged to increase training capacity for this occupational specialty because of the need for a one-to-one student-teacher ratio.
To meet staffing goals, the Army has temporarily suspended foreign language requirements for this specialty and offers enlistment bonuses to new recruits, retention bonuses to junior personnel who re-enlist, and critical skills retention bonuses to senior noncommissioned officers with 14 or more years of service who remain on active duty. Personnel from overfilled occupational specialties are also encouraged to convert to this one without extending their service obligations, or they can receive a retention bonus by re-enlisting.

Unmanned Aerial Vehicle Operator (Enlisted)

The unmanned aerial vehicle operator supervises or operates unmanned aerial vehicles, to include mission planning, mission sensor/payload operations, launching, remotely piloting, and recovering the aerial vehicle. Unmanned aerial vehicle operators are assigned to modular units such as the special troops battalions of heavy and infantry brigade combat teams. The Army's goal is to fill this occupational specialty at 95 percent or higher. The Army expects staffing needs for this occupational specialty to increase because of the conversion to the modular force. To meet staffing goals, the Army offers enlistment bonuses to new recruits and retention bonuses to personnel who re-enlist, and is increasing its training capacity to meet increased staffing needs.

Psychological operations personnel plan, conduct, and evaluate operations that convey selected information and indicators to foreign audiences to influence their emotions, motives, and objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals throughout the entire spectrum of conflict. The psychological operations officer commands or serves on the staff of psychological operations units. Specifically, these officers advise United States military and/or civilian agencies on the use, planning, conduct, and evaluation of psychological operations.
Additionally, they inform and train foreign governments and militaries on psychological operations. The Army's goal is to fill this occupational branch at 100 percent or higher. To meet staffing goals, the Army offers several incentives to captains, such as choice of occupational branch, duty station, civilian graduate education, military school, or cash bonus in exchange for 3 additional years of obligated service. The Army also offers similar options to pre-commissioned cadets in exchange for extending their initial service obligations, and bonuses to recruit active duty Air Force and Navy officers to transfer to the Army.

Psychological Operations Specialist (Enlisted)

The psychological operations specialist supervises, coordinates, and participates in the analysis, planning, production, and dissemination of tactical and strategic psychological operations. These personnel assist in the collection and reporting of psychological operations data; assist in analyzing and evaluating current intelligence to support psychological operations; conduct research on intended psychological operations targets; and assist in the delivery of psychological operations products. Psychological operations specialists are assigned to modular units such as the headquarters units within brigade combat teams. The Army's goal is to fill this occupational specialty at 100 percent or higher. To meet staffing goals, the Army has given this specialty a high recruiting priority and offers enlistment bonuses to new recruits and retention bonuses to personnel who re-enlist. Personnel from overfilled occupational specialties are also encouraged to convert to this one without extending their service obligations, or they can receive a retention bonus by re-enlisting.

Signals personnel manage all facets of Army and designated Department of Defense automated, electronic, and communication assets.
More specifically, Signal Corps personnel are involved in the planning, design, engineering, operations, logistics, and evaluation of information systems and networks. The signal officer directs and manages the installation, operation, networking, and maintenance of signal equipment. Furthermore, the general signal officer advises commanders and staffs on signal requirements, capabilities, and operations. Signal officers are assigned to modular units such as the headquarters and support company units within heavy brigade combat teams and the signal company within the battlefield surveillance brigade. The Army's goal is to fill this occupational branch at 100 percent or higher. To meet staffing goals, the Army offers several incentives to captains, such as choice of occupational branch, duty station, civilian graduate education, military school, or cash bonus in exchange for 3 additional years of obligated service. The Army also offers similar options to pre-commissioned cadets in exchange for extending their initial service obligations, and bonuses to recruit active duty Air Force and Navy officers to transfer to the Army.

Nodal Network Systems Operator-Maintainer (Enlisted)

The nodal network systems operator-maintainer supervises, installs, operates, and performs field-level maintenance on Internet protocol-based high-speed electronic nodal systems, such as the Joint Network Node; integrated network control centers; network management facilities; communications security devices; and other equipment associated with network operations. These personnel also perform network management functions in support of maintaining, troubleshooting, and re-engineering nodal assets as needed to support operational requirements. Nodal network systems operator-maintainers are assigned to modular units such as the signal company within a battlefield surveillance brigade and the brigade support battalion within a heavy brigade combat team.
The Army’s goal is to fill this occupational specialty at 95 percent or higher. The Army created this occupational specialty in part because of the conversion of the modular force and is reclassifying personnel from the network switching systems operator-maintainer specialty to this one. To meet staffing goals, the Army offers enlistment bonuses to new recruits and retention bonuses to personnel who re-enlist. Satellite Communication Systems Operator-Maintainer (Enlisted) The satellite communication systems operator-maintainer supervises, installs, operates and maintains multichannel satellite communications ground terminals, systems, networks and associated equipment. Satellite communication systems operator-maintainer are assigned to modular units such as the special troop battalion within an infantry brigade combat team and the signal network support company within a fires brigade. The Army’s goal is to fill this occupational specialty at 90 percent or higher. To meet staffing goals, the Army offers its highest enlistment bonus to new recruits, retention bonuses to personnel who re-enlist, and critical skills retention bonuses for senior enlisted personnel who remain on active duty. Transportation personnel are responsible for the management of all facets of transportation including the planning, operating, coordination, and evaluation of all methods of transportation. The general transportation officer functions as a logistical unit commander or as a staff officer responsible for the functional planning, coordination, procurement and control of the movement of materiel, personnel or personal property on commercial and military transport; and the coordination of all facets of transportation pertaining to water, air, and land transport systems. General transportation officer are assigned to modular units such as the headquarters unit of a sustainment brigade. The Army’s goal is to fill this occupational branch at 100 percent or higher. 
To meet staffing goals, the Army offers several incentives to captains, such as choice of occupational branch, duty station, civilian graduate education, military school, or cash bonus in exchange for 3 additional years of obligated service. The Army also offers similar options to pre-commissioned cadets in exchange for extending their initial service obligations, and bonuses to recruit active duty Air Force and Navy officers to transfer to the Army.

Motor Transport Operator (Enlisted)

The motor transport operator supervises or operates wheeled vehicles to transport personnel and cargo in support of operational activities. Motor transport operators are assigned to modular units such as the headquarters unit of a sustainment brigade and the headquarters unit of a maneuver enhancement brigade. The Army's goal is to fill this occupational specialty at 95 percent or higher. To meet staffing goals, the Army has given this specialty a high recruiting priority and offers its highest enlistment bonus to new recruits, retention bonuses to junior personnel who re-enlist, and critical skills retention bonuses to senior enlisted personnel with 19 to 23 years of service who remain on active duty. Personnel from overfilled occupational specialties are also encouraged to convert to this one without extending their service obligations, or they can receive a retention bonus by re-enlisting.

Table 8 illustrates the percentage of active component Army personnel on hand or projected to be on hand at the authorized level in fiscal years 2007 and 2012 by key enlisted and officer career field enabler category. Table 9 illustrates the percentage of active component Army personnel on hand or projected to be on hand at the authorized level in fiscal year 2007 by key enabler enlisted and officer occupational specialty and rank.
Table 10 illustrates the percentage of active component Army personnel available or projected to be available at the design level in fiscal years 2007 and 2012 by key enlisted and officer career field enabler category.

To assess the Army's plan to guide its efforts to equip and staff the modular force, we obtained and analyzed relevant Army plans and reports to Congress for equipping and staffing the modular force. Because the Army lacks a mechanism to measure progress in equipping and staffing the modular force, we developed, in conjunction with the Army, an analysis of key equipment and personnel enablers of the modular force. Based on our review of key Army modularity studies and reports, we defined key enablers as those pieces of equipment or personnel that are required for the organization to function as planned, providing the modular design with capabilities equal to or greater than those of the previous divisional structure in areas such as a unit's firepower, survivability, and intelligence-surveillance-reconnaissance performance. To develop a preliminary list of key equipment and personnel enablers, we reviewed key Army modularity reports using this definition and received input from Army Training and Doctrine Command (TRADOC), which is responsible for the design and evaluation of modular units, and Army Combined Arms Support Command. In addition, our selection methodology required that equipment and personnel be assigned to at least two types of modular units (brigade combat teams, multifunctional support brigades, or functional support brigades) to qualify as key enablers. We excluded certain types of equipment that are important to brigade combat teams, such as Abrams tanks and Bradley fighting vehicles, because they are present in both the new brigade designs and the previous divisional structure.
After we identified a preliminary list of key enablers, we submitted the list to Headquarters, Department of the Army, for official input and held a follow-up discussion with an Army official about the Army's responses. Based on our analysis and this discussion, we developed a final list of key equipment and personnel enablers of the modular force (see app. I for this list). An Army procurement official identified the specific equipment line items associated with each of the key equipment enablers.

Our analysis of key equipment enablers compares total Army (active, National Guard, and Reserve) equipment design requirements and authorizations for the operating and institutional forces against total Army on-hand quantities in fiscal year 2007 and planned equipment deliveries by fiscal year 2012. However, our analysis excludes planned procurements funded by emergency supplemental requests for fiscal year 2008 because these data had not been entered into the Army equipment databases at the time of our request. Our analysis of key personnel enablers compares active Army personnel design requirements and authorizations for the operating and institutional forces against active Army on-hand personnel strength in fiscal year 2007 and projected personnel strength for fiscal year 2012. This analysis excludes about 13 percent of authorized end strength for the modular force because of military personnel who are in the transient, transfers, holdees, students category, according to Army personnel officials. The Army's fiscal year 2007 to 2012 equipment and personnel plans were the most recent data available to us when we developed this analysis. Data retrieved from Army databases reflect equipment levels as of April 23, 2007, and personnel levels as of April 30, 2007.
We shared the data with Department of the Army officials and provided them an opportunity to identify actions the Army intends to take to address equipment and personnel shortfalls. To assess the reliability of relevant Army equipment and personnel databases, we discussed data quality control procedures with Army officials responsible for managing those databases. Although we did not independently test the data electronically, we determined that the data were sufficiently reliable for the purposes of this report. The Army provided updated data on the status of the Army's equipment as compared to the design requirement as of June 29, 2008. We did not assess the reliability of this 2008 data; however, the 2008 data were generally consistent with the data we analyzed in 2007.

To assess the extent to which the Army has developed a comprehensive plan to test and evaluate the design of the modular force, we analyzed TRADOC's modular force assessment process, including documents related to the doctrine, organization, training, materiel, leadership, personnel, and facilities evaluations, and the use of modular force observation teams and lessons learned from ongoing operations. We also met with officials at TRADOC analysis centers and subject-matter experts at Army proponents and centers, for example, the Signal Center, to understand their efforts to develop and assess the design of the modular force. Further, we visited the Future Force Integration Directorate and the Army Evaluation Task Force at Fort Bliss to examine the Army's approach to assessing the future modular force. In addition, we assessed the Army's plans to respond to recommendations from prior GAO work related to the evaluation of the modular force across the full spectrum of conflict. Finally, we examined documents related to the combatant commanders' evaluation of the modular units assigned to their commands.
To assess the extent to which the Army has developed a comprehensive and integrated plan to fund its transformation and expansion of the modular force, we reviewed DOD's fiscal years 2007 to 2009 base budget requests and fiscal years 2007 and 2008 supplemental Global War on Terror requests and met with Army budget officials. We also assessed the Army's plans to respond to recommendations from prior GAO work related to Army modular force and Grow the Force funding plans.

We visited or contacted the following organizations during our review:

Office of the Under Secretary of Defense (Acquisition Technology and Logistics), Pentagon, Virginia
Office of the Under Secretary of Defense (Comptroller), Pentagon, Virginia
Office of the Under Secretary of Defense (Personnel and Readiness)
Office of the Director (Program Analysis and Evaluation), Pentagon
Office of the Chairman, Joint Chiefs of Staff, Force Structure, Resources, and Assessment Directorate (J-8), Pentagon, Virginia
Office of the Deputy Chief of Staff for Personnel (G-1), Pentagon, Virginia
Office of the Deputy Chief of Staff for Logistics (G-4), Pentagon, Virginia
Office of the Deputy Chief of Staff for Operations and Plans (G-3/5/7)
Office of the Deputy Chief of Staff for Programs (G-8), Pentagon, Virginia
Office of the Deputy Assistant Secretary of the Army for Cost and Economics, Pentagon, Virginia
Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs, Pentagon, Virginia
Office of the Assistant Secretary of the Army, Financial Management and Comptroller, Pentagon, Virginia
Office of the Assistant Chief of Staff for Installation Management
Army Budget Office, Pentagon, Virginia
U.S. Army Force Management Support Agency, Fort Belvoir, Virginia
National Guard Bureau, Arlington, Virginia
U.S. Army Reserve Command, Fort McPherson, Georgia
U.S. Army Forces Command, Fort McPherson, Georgia
U.S. Army Human Resources Command, Alexandria, Virginia
U.S. Army Materiel Command, Fort Belvoir, Virginia
U.S. Army Tank-automotive and Armaments Command, Warren, Michigan
U.S. Army Training and Doctrine Command, Fort Monroe, Virginia
-Army Capabilities Integration Center, Fort Monroe, Virginia
-Future Force Integration Directorate, Fort Bliss, Texas
-Combined Arms Support Command, Fort Lee, Virginia
-Combined Arms Center, Fort Leavenworth, Kansas
-Current Force Integration Directorate, Fort Leavenworth, Kansas
-Center for Army Lessons Learned, Fort Leavenworth, Kansas
-TRADOC Analysis Centers: Fort Leavenworth, Kansas; White Sands Missile Range, New Mexico; Fort Lee, Virginia
-Signals Center, Fort Gordon, Georgia
-Intelligence Center and Office Chief of Military Intelligence, Fort Huachuca, Arizona
-U.S. Army Infantry School, Fort Benning, Georgia
-U.S. Army Signals School, Fort Gordon, Georgia
-U.S. Army Intelligence School, Fort Huachuca, Arizona
-U.S. Army Quartermaster School, Fort Lee, Virginia
Congressional Budget Office, Washington, D.C.
Congressional Research Service, Washington, D.C.

We conducted this performance audit from April 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Gwendolyn Jaffe, Assistant Director; Margaret Morgan, Assistant Director; Kelly Baumgartner; Hillary Benedict; Herbert Bowsher; Kurt Burgeson; Grace Coleman; Stephen Faherty; Barbara Gannon; David Hubbell; Jim Melton; Steve Pruitt; Steven Rabinowitz; Terry Richardson; Kathryn Smith; Karen Thornton; and J. Andrew Walker made major contributions to this report.
Amid ongoing operations in Iraq and Afghanistan, the Army embarked in 2004 on a plan to create a modular, brigade-based force that would be as capable as its divisional predecessor, in part because it would have advanced equipment and specialized personnel. GAO has previously reported that restructuring and rebuilding the Army will require billions of dollars for equipment and take years to complete. For this report, GAO assessed the extent to which the Army has (1) developed a plan to link funding with results and (2) evaluated its modular force designs. GAO analyzed Army equipment and personnel data, key Army reports, planning documents, performance metrics, testing plans, and funding requests. GAO also visited Army Training and Doctrine Command, including selected Army proponents and schools; Army Reserve Command; and the National Guard Bureau. The Army will have established over 80 percent of its modular units by the end of 2008 but does not have a results-oriented plan with clear milestones in place to guide efforts to equip and staff those new units. The Army has been focused on equipping and staffing units to support ongoing operations in Iraq and Afghanistan; however, the equipment and personnel levels of non-deployed units have been declining. The Army now anticipates that modular units will be equipped and staffed in 2019--more than a decade away--but has provided few details about what to expect in the interim. And while the Army projects that it will have enough equipment and personnel in the aggregate, its projections rely on uncertain assumptions related to restoring equipment used in current operations, as well as meeting recruiting and retention goals while simultaneously expanding the Army. Further, GAO's detailed analysis of Army data shows that the Army could face shortfalls of certain modern equipment.
Such items are important because the success of the modular design rests in part on obtaining key enablers needed for modular units to function as planned, such as equipment to provide enhanced awareness of the battlefield. GAO has previously reported that the Army lacks a funding plan that includes interim measures for equipping and staffing the modular force, making it difficult to evaluate progress. Without a plan for equipment and staffing that links funding with results and provides milestones, the Army cannot assure decision makers when modular units will have the required equipment and staff in place to restore readiness. Finally, without this plan the Army risks cost growth and further timeline slippage in its efforts to transform to a more modular and capable force. The Army uses several approaches in testing unit designs and capabilities, but these efforts have not yielded a comprehensive assessment of modular forces. Testing the force is intended to determine whether modular units are capable of performing missions across the full spectrum of conflict. The Army has focused its testing efforts on combat units conducting ongoing counterinsurgency operations. However, gaps in the Army's testing could affect its forces' ability to deliver needed capabilities. First, the Army has not fully assessed the effectiveness of its support units because the doctrine that would define how modular support units will train, be sustained, and support the fight has not been completed. This doctrine provides a benchmark to measure the effectiveness of support units. Further, the Army has not assigned a focal point the responsibility for integrating assessments across activities, such as equipping and training. Second, the Army tested the capability of modular designs primarily unconstrained by resources, not at the level of personnel and equipment that the Army plans to provide units. 
Lacking an analysis of the capabilities of the modular force at levels that it plans to have, the Army will not be in a position to realistically assess whether the capabilities that it is fielding can perform mission requirements.
Following enactment of PAEA in 2006, USPS updated its delivery service standards for market-dominant products, which define the number of days within which USPS must deliver the mail for it to be considered timely. USPS’s delivery service standards are set forth in federal regulations and differ depending on the type of mail, the time of day and location at which USPS receives the mail, and the mail’s final destination. For example, USPS standards for delivery of 2-day single-piece First-Class Mail require the mail to be received by a specified cutoff time on the day it is accepted, which varies depending on geographic location and where the mail is deposited (e.g., in a collection box, at a post office, or at a mail processing facility). This mail must then be delivered on the second regular delivery day (Monday to Saturday) to be considered “on time.” USPS measures delivery performance against its delivery service standards. For a given piece of mail, USPS first measures the transit time—that is, the number of days it takes from the point that the mail is accepted into USPS’s system until its delivery to a home or business. Then USPS compares this transit time against delivery service standards to determine whether the mail was delivered on time. See figure 1 for USPS’s delivery performance of single-piece First-Class Mail from fiscal years 2011 to 2015. The second quarter of fiscal year 2015 experienced a significant decline in on-time delivery performance, which USPS attributes to operational changes enacted in January 2015 coupled with adverse winter weather. However, performance improved in the next quarter. Since 2012, USPS has instituted several initiatives aimed at reducing expenses in its mail delivery and processing operations and networks as part of broader efforts to address its fiscal challenges and move toward financial viability.
These initiatives included changing mail delivery service standards for some types of mail and then consolidating 141 mail processing facilities in 2012 and 2013. As we reported in September 2014, USPS changes to its delivery service standards increased the number of days for some mail to be delivered and still be considered on time. Further, effective January 5, 2015, USPS changed the delivery service standard for single-piece First-Class Mail sent to a nearby destination from 1 to 2 days. Table 2 presents these changes for market-dominant mail, which consists of First-Class Mail (e.g., correspondence, bills, payments, and statements); Standard Mail (mainly advertising); Periodicals (mainly magazines and local newspapers); and Package Services (mainly Media/Library Mail and Bound Printed Matter). To understand how changes in service standards affected expected transit times for First-Class Mail, we asked USPS to estimate the volumes of First-Class Mail subject to 1-day, 2-day, and 3-5-day delivery service standards for fiscal years 2011 through the second quarter of fiscal year 2015—the first quarter after USPS made its most recent changes to delivery service standards. USPS estimated that the percentage of First-Class Mail volume subject to a 1-day standard decreased from 38 percent in fiscal year 2011 to 13 percent in the second quarter of fiscal year 2015 (see fig. 2). When on-time delivery of First-Class Mail is redefined from a 1-day standard to a 2-day standard, USPS can take longer to deliver the mail for it to be considered “on time.” Based on delivery service standards, USPS sets annual performance targets for the percentage of mail that is to be delivered on time, and PRC annually assesses and reports USPS’s performance towards meeting these targets.
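The on-time determination described above reduces to counting transit days and comparing the count against the applicable standard. The sketch below illustrates this in simplified form; the product key, the standards table, and the Monday-Saturday day counting are our own illustrative assumptions, since actual USPS standards also depend on cutoff times and deposit locations.

```python
from datetime import date, timedelta

# Hypothetical standards table: maximum transit days by product key.
# Illustrative only; real USPS standards vary by origin, destination,
# and the cutoff time at which the mail is accepted.
SERVICE_STANDARDS = {"single_piece_first_class_2day": 2}

def delivery_days_between(accepted: date, delivered: date) -> int:
    """Count regular delivery days (Monday-Saturday) from acceptance to delivery."""
    days, current = 0, accepted
    while current < delivered:
        current += timedelta(days=1)
        if current.weekday() != 6:  # Sunday is not a regular delivery day
            days += 1
    return days

def is_on_time(product: str, accepted: date, delivered: date) -> bool:
    """Mail is 'on time' if its transit time is within the service standard."""
    return delivery_days_between(accepted, delivered) <= SERVICE_STANDARDS[product]

# Accepted Monday, delivered Wednesday: 2 delivery days, within a 2-day standard.
print(is_on_time("single_piece_first_class_2day",
                 date(2015, 3, 2), date(2015, 3, 4)))  # True
```

Delivery on the following day (Thursday) would yield 3 transit days and the piece would be counted as late under the same 2-day standard.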
PRC has responsibility for assuring delivery performance data are complete and accurate, and must approve any internal delivery performance measurement systems (i.e., systems administered by USPS as opposed to an external contractor). USPS is subject to legal requirements to create delivery service standards for market-dominant products, measure delivery performance, and report the results. Likewise, PRC is subject to legal requirements to specify how USPS should measure and report delivery performance, as well as requirements for using these data to provide oversight of USPS delivery performance. Key requirements for USPS and PRC are summarized in table 3. USPS uses two primary methods for measuring delivery performance: Tracking Barcoded Mail: Since 2011, USPS has measured and reported delivery performance for most types of market-dominant mail by using the time it is accepted at postal facilities to “start the clock” and scans of barcoded mail pieces by external, third-party reporters who receive the mail to “stop the clock.” This mail transit time is compared against delivery service standards to determine whether mail is delivered within the standard and thus considered on time. Most barcoded mail is tracked through USPS’s Full-Service Intelligent Mail program, which requires participating mailers that send bulk mail (i.e., mail entered in bulk quantities such as bills, advertisements, and magazines) to apply unique Intelligent Mail barcodes to mail pieces and provide USPS with electronic documentation for each mailing. Under this program, USPS uses a census approach that aims to measure all qualifying mail pieces in its mail processing network rather than a sampling approach. USPS commented that a census-type approach enables it to use the information to better manage day-to-day conditions throughout its network and that such visibility would not be available through sampling.
Sending Test Mail Pieces: Since 1990, USPS has measured and reported on-time delivery performance for single-piece First-Class Mail through the External First-Class Mail measurement system (EXFC). Under this sampling system, an external contractor arranges for anonymous droppers to send test mail pieces from street collection boxes and private office-building lobby chutes to external, third-party reporters at residences and businesses. As will be discussed in more detail below, in January 2015, USPS proposed replacing its EXFC measurement system for single-piece First-Class Mail in favor of a system based on tracking barcoded mail. USPS’s measurement of on-time delivery performance has expanded greatly over the past 9 years, but remains incomplete because only about 55 percent of market-dominant mail volume is currently included in measurement. The remaining 45 percent is not included in measurement for two main reasons: (1) lack of trackable barcodes or (2) lack of needed information. USPS told us that it wants to include virtually all market-dominant mail in delivery performance measurement. To assess completeness, we determined measurement coverage—the percentage of mail included in measurement—as well as the various causes for why mail is not included in measurement and their possible effect on measured results. To the extent that mail is not included in measurement, performance data are not complete and may not be representative. There is not a minimum threshold of mail that is to be included in measurement for it to be considered representative. In general, the risk that measurement is not representative is greater if mail not included in measurement may be systematically different than mail included in measurement. In particular, if the unmeasured mail has different characteristics than the measured mail, and those characteristics are associated with the likelihood of on-time delivery, then the risk of a non-representative measurement is greater. 
As of the second quarter of fiscal year 2015, USPS measured on-time delivery performance for about half (55 percent) of market-dominant mail volume—up from only one-sixth of volume (16 percent) in fiscal year 2006 (see fig. 3). This increase in measurement coverage represents noteworthy progress by USPS and the mailing industry, which have devoted management commitment and significant resources to implement and participate in measurement systems for bulk mail that comprises most mail volume. Notably, USPS implemented measurement systems for bulk First-Class Mail, Standard Mail, and Periodicals. The number of pieces of market-dominant bulk mail included in delivery performance measurement has increased greatly in recent years—from 96 million pieces in the first quarter of fiscal year 2010 to 14.9 billion pieces in the second quarter of fiscal year 2015. Thus, the percentage of market-dominant bulk mail included in measurement increased from less than one percent in the first quarter of fiscal year 2010 to 48 percent in the second quarter of fiscal year 2015. Meanwhile, USPS provides performance measurement that covers virtually all single-piece First-Class Mail through mailing test pieces as part of its long-standing EXFC system. However, single-piece First-Class Mail comprises a small and declining percentage of market-dominant mail volume—down from 20 percent in the first quarter of fiscal year 2010 to 14 percent in the second quarter of fiscal year 2015. USPS’s measurement coverage has varied by class of mail (see fig. 4). The percentage of mail included in measurement is greatest for First-Class Mail (both bulk and single-piece mail), followed by Standard Mail, Periodicals, and market-dominant Package Services (mainly Media Mail/Library Mail and Bound Printed Matter) and has improved over time for each class.
Most progress began in fiscal year 2011, when mailer participation increased significantly in Full-Service Intelligent Mail—a program that enables business mailers to track the progress of their barcoded mail through USPS’s mail processing system. Meanwhile, USPS’s EXFC measurement system continues to send test mail pieces to measure delivery performance of single-piece First-Class Mail. USPS’s measurement of on-time delivery performance for market-dominant mail remains incomplete because, as noted above, only about half of bulk mail volume was included in measurement as of the second quarter of fiscal year 2015. USPS tracks bulk mail using barcodes and electronic information about the mailing. The main causes for incomplete measurement of bulk mail can be broadly grouped into two reasons: (1) mailers not applying a unique Intelligent Mail barcode to each mail piece to enable tracking (trackable barcodes) and (2) mail being excluded from measurement for lack of needed information. Some mailers may not apply trackable barcodes due to the type of mail they are entering, such as certain types of locally-entered and delivered mail that are not eligible for barcode-based postage discounts. Mail that is otherwise eligible can be excluded from measurement for several reasons:
1. No “start-the-clock” event was recorded: USPS did not record when the mail was accepted at a USPS facility. About 4.2 billion pieces of mail (about 45 percent of all exclusions) were excluded in the second quarter of fiscal year 2015 due to “no start-the-clock.”
2. No mail piece barcode scan was recorded by USPS’s automation equipment: This prevents USPS from being able to track the barcoded mail. About 1.5 billion pieces of mail (16 percent of all exclusions) were excluded in the second quarter of fiscal year 2015 due to a lack of a barcode scan recorded by USPS automated equipment.
3. Inaccuracies in mail preparation: These include deficiencies in preparing bundles of mail or the quality of barcodes on mail pieces. About 1.2 billion pieces of mail (13 percent of all exclusions) were excluded in this quarter due to inaccuracies in mail preparation.
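The three exclusion figures are mutually consistent, which a quick arithmetic check confirms. Note that the roughly 9.3 billion total excluded pieces below is implied by the 45 percent share rather than a figure stated by USPS.

```python
# Exclusion volumes for Q2 FY2015, in pieces, from the figures above.
exclusions = {
    "no start-the-clock": 4.2e9,             # reported as about 45% of exclusions
    "no barcode scan": 1.5e9,                # reported as 16%
    "mail preparation inaccuracies": 1.2e9,  # reported as 13%
}

# Implied total excluded volume, backed out from the 45 percent share.
total_excluded = 4.2e9 / 0.45  # ~9.3 billion pieces (implied, not reported)

# Recompute each cause's share of all exclusions and round to whole percent.
shares = {cause: round(100 * pieces / total_excluded)
          for cause, pieces in exclusions.items()}
print(shares)
# {'no start-the-clock': 45, 'no barcode scan': 16, 'mail preparation inaccuracies': 13}
```

The recomputed shares match the reported percentages, and the three named causes account for roughly three-quarters of all exclusions, leaving about a quarter attributable to other causes.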
The percentage of market-dominant mail volume that was eligible for measurement but excluded for various reasons increased from 21 to 26 percent over the period from the fourth quarter of fiscal year 2013 to the second quarter of fiscal year 2015. Thus, exclusion of eligible mail from measurement has become an increasingly important reason why measurement data are incomplete. USPS officials told us that the increase in excluded mail volume can be attributed to the overall increase in mail volume eligible for inclusion in measurement. USPS told us about several actions it has taken to reduce exclusions, including steps to improve the scanning of barcoded mail. USPS also reported that it is collaborating with the mailing industry and working with individual mailers to improve the quality of mail preparation and compliance with requirements that can reduce the volume of mail excluded from measurement. As previously discussed, delivery performance may differ between mail included in measurement and mail that is not included in measurement. USPS told us that it has not studied whether on-time delivery performance varies for mail sent by mailers that do not participate in its measurement programs. Thus, the effect of non-participation on delivery performance measurement is unknown. However, available information indicates that non-participation can affect results for some Standard Mail products—particularly if product-specific results are not weighted to reflect key characteristics of the mail. Large volume mailers, which are most likely to apply barcodes and thus have on-time delivery performance measured, reportedly use additional mailing practices to facilitate the timely delivery of their mail, such as entering large volumes of advertising mail close to its final destination.
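Why such participation patterns can skew measured results is weighted-average arithmetic: if the measured mail over-represents a category that is delivered on time more often, the measured rate overstates performance for the product as a whole. A hypothetical sketch (the on-time rates and mix shares below are invented for illustration, not USPS data):

```python
def measured_on_time(dest_share: float,
                     dest_rate: float = 0.85,
                     end_to_end_rate: float = 0.60) -> float:
    """Weighted on-time rate for a given share of destination-entered mail.

    dest_rate and end_to_end_rate are hypothetical on-time rates for
    destination-entered and end-to-end mail, respectively.
    """
    return dest_share * dest_rate + (1 - dest_share) * end_to_end_rate

# If measurement overweights destination-entered mail (90 percent of measured
# pieces) relative to a hypothetical actual mix (60 percent), the measured
# rate overstates true product-wide performance.
print(round(measured_on_time(0.9), 3))  # 0.825 (measured mix)
print(round(measured_on_time(0.6), 3))  # 0.75 (hypothetical actual mix)
```

Weighting product-specific results to reflect the actual mix of mail characteristics, as the text notes, is one way to limit this distortion.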
Destination-entered Standard Mail is more likely to be included in measurement than “end-to-end” Standard Mail and is more likely to be delivered on time than end-to-end Standard Mail. For example, in its latest Annual Compliance Determination Report, PRC noted that 86 percent of Standard Mail Flats (a Standard Mail product generally consisting of large flat-sized advertising such as catalogs) were delivered on time in fiscal year 2014 when they were destination entered (entered at a postal facility generally closer to the final destination of the mail), but only 50 to 66 percent were delivered on time when they were not destination entered. USPS told us that the mail volume and preparation of destination-entry Standard Mail enable it to be more likely to be included in measurement as well as be delivered on time. Destination-entry mail also has sufficient volume and preparation to enable it to bypass various postal network processing and transportation (e.g., locally-entered and delivered so it is handled by only one processing facility), which, according to USPS, is the reason this mail is more likely to be delivered on time. In January 2015, USPS proposed replacing its EXFC measurement system for single-piece First-Class Mail in favor of a system based on tracking barcoded single-piece mail. USPS stated its proposed system is intended to incorporate a larger and more representative population of mail pieces in measurement than EXFC does today. PRC established a public inquiry docket (a type of proceeding) to review USPS’s proposal, which is still ongoing as of September 21, 2015. In this proceeding, USPS said it expected the data from its proposed system to provide PRC with the ability to perform its responsibilities with a high degree of confidence and to reasonably inform the public regarding the quality of service provided for market-dominant products.
However, some parties that commented on USPS’s proposal questioned whether the proposed system would produce representative results. For example, one concern was that the proposed system would measure mail deposited in blue collection boxes and at postal retail counters, but not measure the 38 percent of single-piece First-Class Mail that carriers pick up from customer mailboxes. Some parties commented that mail picked up by carriers may arrive at mail processing facilities a day later than mail deposited into collection boxes, such as situations where carriers return too late to the office for the mail to be transferred to transportation that day to a local mail processing facility. Others expressed concern that only barcoded single-piece mail would be eligible for measurement, which would exclude stamped mail without a barcode (such as personal correspondence and greeting cards). In USPS’s written reply to comments made on its proposal, USPS responded that critics overstated the difference between handling of mail left for carrier pickup and collection mail, and that the barcoded mail measured by its proposed system would provide a reasonable indicator of performance for all mail collected and transported to the processing facility. Among other comments, USPS responded that mail processing for single-piece First-Class Mail is conducted over a range of hours each day, offering a substantial window of opportunity to accommodate mail arriving later than normal, including mail that missed the last scheduled dispatch from facilities where carriers bring stamped mail for forwarding to processing facilities. USPS further responded that it is not yet feasible for it to measure delivery performance for single-piece First-Class Mail left for carrier pickup, but added that it was revising its proposal to measure stamped and metered mail left at postal retail lobby chutes. 
In its June 2015 interim order, PRC commented that because USPS’s proposal is still in development, PRC lacked sufficient information to make decisions concerning whether or not the proposed systems will be suitable for reporting service performance to PRC. PRC added that given that EXFC appears to have been producing reliable results for a considerable number of years, PRC cannot approve a new system to replace EXFC until the new system is similarly operational and verifiable. PRC directed USPS to plan to run EXFC and the proposed new system in parallel for a sufficient time to ensure it is operational and verifiable. PRC explained that test results demonstrating that the EXFC and new system generate objective and reliable measurements for all affected products over a period of four consecutive quarters would appear to be an acceptable demonstration. On August 25, 2015, USPS filed its statistical design plan for its proposed new system with the PRC, which explained the sampling methodology and the methodology for calculating results and their margins of error. However, USPS had not yet made public some other major aspects of its proposed new system—such as quality control procedures and internal controls including methods to address errors in collecting data. Since USPS has relied on the EXFC system since 1990 to measure delivery performance for single-piece First-Class Mail, a thorough review of detailed information on its proposed system will be important not only to PRC, but to stakeholders including Congress and the mailing industry. In this regard, PRC’s proceeding to evaluate this new system continues and the time frame for completion remains open-ended since there is no statutory deadline and PRC has not established a deadline. 
Although PRC reports have provided data on the amount of mail included in measurement of delivery performance, these reports have not fully assessed why these measurements were incomplete or whether USPS actions will achieve complete performance data. In addition, USPS officials told us that they have not established a time frame for achieving complete measurement. PRC uses these performance data to annually assess USPS’s delivery performance against targets that USPS has established for on-time delivery. Thus, delivery performance data that are complete and representative are essential for PRC to correctly determine whether USPS has met its delivery performance targets. Complete information is vital for effective management, oversight, and accountability purposes. Further, representatives of mailing industry groups and some mailers told us and commented in PRC proceedings that PRC should become more involved in issues regarding the quality of measurement data for on-time delivery performance, including issues regarding the exclusion of mail from measurement. These representatives provided us with a variety of suggestions in this regard, such as performing more in-depth and frequent oversight to ensure USPS measurement is complete. One said that USPS is still struggling to scan barcoded mail despite joint USPS-mailer efforts over the past decade. Another representative said that although PRC has oversight of USPS service performance, measurement, and reporting, there is little consequence to USPS as a result of not meeting its targets for on-time delivery or for deficiencies in its measurement and reporting practices. A third said that PRC should hold USPS accountable for improving measurement data by requiring a business plan where USPS would lay out the steps it needs to take and time frames for implementing its initiatives.
PRC’s annual compliance reports have discussed how much mail volume for each type of mail is included in measurement and when USPS did not report performance results due to a lack of measurable data. However, PRC has not fully pursued the main causes for incomplete data (i.e., lack of trackable barcodes and lack of information causing data to be excluded from measurement). PRC reports have expressed concern with low levels of participation for certain types of mail, stating that low levels cause unreliable measurement. However, these PRC reports have not fully assessed the effectiveness of USPS actions taken or planned and associated timeframes with respect to the main causes for incomplete data. PRC could pursue the causes for incomplete data within its annual compliance reviews or it may initiate a separate proceeding. Furthermore, as previously discussed, by law, PRC may initiate a proceeding to improve the quality, accuracy, or completeness of data that USPS annually provides to PRC for its annual compliance determination whenever it appears the data have become significantly inaccurate or can be significantly improved. PRC officials told us that they have not been asked by any stakeholder to initiate such a proceeding. Nor has PRC exercised its option to initiate a proceeding on its own authority to address issues that impact the completeness of performance data. PRC and USPS officials told us that they are both opposed to having PRC initiate a proceeding focused on issues for improving the completeness of delivery performance measurement for two key reasons. 1. PRC officials believe that USPS’s delivery performance measures are generally sufficiently accurate, reliable, and representative for PRC to meet its legal responsibilities for assessing USPS’s compliance with service performance standards at the national level. Further, PRC officials told us that they believe non-measured mail has about the same on-time performance results as measured mail. 
However, PRC and USPS officials told us that neither have compared the performance of mail included and not included in measurement to determine if any differences exist. As previously discussed, available information indicates that non-participation in measurement can affect reported results for on-time delivery performance. Large volume mailers, who are most likely to have their mail barcoded and thus have on-time delivery performance measured, reportedly use additional mailing practices to facilitate timely delivery, such as entering large volumes of advertising mail close to its final destination. Destination-entered advertising mail is more likely to be included in measurement and is more likely to be delivered on time. 2. USPS officials stated that a new proceeding to consider data quality and completeness issues is not necessary because the current proceeding before the PRC (the performance measurement to replace EXFC) provides a public forum for consideration of the quality of service performance data, as well as mail excluded from measurement. However, according to publicly available documents in the current proceeding, PRC has not explored issues of delivery performance measurement data for bulk mail that are excluded from USPS’s current measurement systems, the multiple causes for these exclusions, and USPS actions under way and planned to address the causes. The proceeding also has not thoroughly explored mailers’ concerns regarding data exclusions, such as exclusion rules and mailer views regarding time frames for making progress on reducing exclusions. A PRC proceeding that focuses solely on issues of data quality and completeness—particularly the problem of data exclusions—may facilitate these issues receiving the fullest attention and making more rapid progress by USPS and the mailing industry toward achieving more complete measurement. 
As previously noted, while USPS has made progress toward achieving completeness since 2006—as illustrated by figure 3 earlier in this report—45 percent of market-dominant mail is still not measured. Performance information is sufficiently complete when it has the coverage to enable representative measurement of the percentage of mail delivered on time. While there is not a minimum threshold of mail that is to be included in measurement for it to be representative, the risk that measurement is not representative increases as more mail is not included in measurement because on-time delivery performance may be different for mail that is included in measurement from mail that is not included. Therefore, having a proceeding solely focusing on data quality and completeness could give USPS and postal stakeholders such as PRC, Congress, business mailers, and the general public the opportunity to conduct an in-depth evaluation of the quality of delivery performance data, identify practical opportunities to improve data quality, and establish actions and time frames for making progress. Having such a proceeding also could help PRC develop a better understanding of issues regarding the quality of delivery performance data and thereby be in a better position to conduct ongoing oversight of data quality and its annual compliance determination. USPS and PRC reports on delivery performance are not as useful as they could be for effective oversight. USPS and PRC annual compliance reports provide delivery performance analysis, as legally required. This information is reported at the national level. This analysis, however, does not facilitate an understanding of results and trends below the national level, such as for USPS’s 67 districts, to identify variations and areas where improvements in performance may be needed. 
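How a national average can obscure poor performance in individual districts is also simple weighted-average arithmetic. A hypothetical sketch (the district scores and volume shares below are invented for illustration):

```python
# Hypothetical districts: (on-time percentage, share of national mail volume).
districts = {
    "A": (95.0, 0.5),
    "B": (90.0, 0.3),
    "C": (60.0, 0.2),  # a poorly performing district
}

# The national figure is the volume-weighted average of district scores.
national_avg = sum(score * share for score, share in districts.values())
print(round(national_avg, 1))  # 86.5 -- respectable, while district C lags badly
```

A reader seeing only the 86.5 percent national figure would have no way to know that one-fifth of the mail volume is delivered on time only 60 percent of the time, which is why district-level reporting matters for oversight.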
USPS and PRC annual and quarterly reports on delivery performance information are not as useful for other oversight purposes or management and congressional decision making. For example, these reports do not include sufficient analysis to hold USPS accountable for meeting its statutory mission to provide prompt, reliable, and efficient services in all areas of the nation and regular postal services to rural areas. Further, delivery performance information is not sufficiently transparent as it is not readily available on respective USPS and PRC websites. Thus, it is difficult for effective oversight and for stakeholders to understand trends and develop analysis of USPS performance information. We have reported that ensuring information is useful to assist management and congressional decision making is key to the principles embodied in GPRA and the GPRA Modernization Act of 2010 framework for meeting fiscal, management, and performance challenges. USPS and PRC reports, however, provided little analysis to facilitate an understanding of results and trends below the national level. USPS and PRC websites do provide annual and quarterly delivery performance results on the national level and for each of USPS’s 7 areas and 67 districts. In addition, PRC provided annual delivery performance trend data at the national level in its annual compliance determinations covering fiscal years 2013 and 2014.however, are not sufficiently useful for determining variations in delivery performance across the nation or determining whether performance has improved in areas where performance has not met service standards or targets. National averages aggregate the mail delivery performance of different parts of the country into an average for the entire nation. Thus, on-time delivery performance in one section of the country may be masked by on-time delivery performance in another section of the country. 
A national average alone does not enable stakeholders to understand whether certain areas of the country are experiencing poor delivery performance. To better understand the range and variations in delivery performance across the nation, we analyzed trends in quarterly delivery performance at the district level. Our analysis showed how national data can mask wide variations in performance by various districts over time. For example, we analyzed quarterly performance for single-piece First-Class Mail with a 3-to-5-day delivery service standard for each of the 64 postal districts in the contiguous 48 states and the District of Columbia for the second quarter of fiscal years 2013 to 2015. For the second quarter of fiscal year 2015, none of the districts met that quarter’s performance target of 95 percent of mail delivered on time. Performance for that quarter ranged from 44 percent to 80 percent (see fig. 5). However, when analyzing the second quarter of the previous 2 fiscal years, of the 10 districts with the lowest scores in the second quarter of fiscal year 2015, 9 were below the national average in fiscal year 2014, and all 10 were lower than the national average in fiscal year 2013, but to a much lesser degree. Of the 10 districts with the highest scores in the second quarter of fiscal year 2015, 8 were above the national average in the second quarter of fiscal years 2014 and 2015. In addition, USPS’s reporting of delivery performance information is not sufficiently transparent. To be considered transparent, the criteria we identified suggest that delivery performance information is to be reported in a manner that is easily accessible and readily available. 
USPS, however, posts only its most recent quarterly report of area and district-level data on its public website. As a result, stakeholders would have to request numerous files from USPS to compile the data necessary for understanding performance trends, such as whether on-time delivery is improving or getting worse. USPS told us that its reporting of delivery service information meets statutory requirements, and that it is not required to maintain quarterly trend data for delivery performance on its website. However, USPS can elect to maintain quarterly trend data on its website. A large mailer association we spoke with stated that USPS should be so transparent that everyone understands general performance and any factors contributing to good and poor performance. Similar to USPS, PRC’s reporting of delivery performance information is not readily available to stakeholders. While PRC also posts delivery performance information provided by USPS on its public website, stakeholders would have to find numerous files in multiple locations on its website to compile data necessary for understanding performance trends, such as whether on-time delivery is improving or getting worse. In addition, PRC’s reports are not easily accessible. PRC has reported its annual assessment of USPS’s delivery performance in fiscal year 2014 in two reports that are filed on its website at different times and at different links, while USPS’s quarterly data are posted at another link on PRC’s website. The lack of easily accessible and readily available performance information on USPS’s and PRC’s part impedes the ability of Congress, mailers, and customers to review and hold USPS accountable for its performance and to use the information to develop realistic expectations for when their mail will be delivered. USPS and PRC are not required to report—and do not report—delivery information for rural and non-rural areas, thus limiting effective oversight in these areas. 
USPS and PRC officials told us that they do not provide information or analysis to assess delivery performance specifically for rural areas because they are not legally required to do so. Without data on rural delivery performance, Congress cannot determine the extent to which delivery performance is timely in rural versus non-rural areas, and neither USPS nor PRC can prove or disprove any perceptions that rural areas may be affected differently from non-rural areas. Several Members of Congress and others have raised questions about whether delivery performance in rural areas has been negatively affected by changes USPS has implemented since fiscal year 2012 to reduce its expenses. For example, according to the National Newspaper Association (NNA), community newspapers have been negatively affected since USPS consolidated some postal facilities. Further, problems have emerged when newspapers, often in rural areas, had to be delivered outside of the local area and experienced a decline in service. NNA has requested that PRC gather information about the data that could be produced about rural mail to identify the sources of delivery problems, such as manual processing, increased travel distances, or inefficient processing plants. NNA has argued that “the possibility that what ails NNA newspapers also ails rural mail in general is more than a random guess.” In May 2015, two Members concerned about the lack of digital tracking in rural areas requested a PRC study on the feasibility of reporting on rural mail delivery performance. Other congressional requests for rural delivery performance information are also pending. For example, in a recent Senate report, the Senate Appropriations Committee directed USPS to take steps related to reporting delivery performance in rural areas. 
In July 2015, the Senate report accompanying the Senate Financial Services and General Government Appropriations Bill, 2016 directed USPS and PRC to report mail delivery performance to specifically include mail delivery from rural towns to other rural towns; from rural towns to urban areas; and from urban areas to rural towns. The Committee requested the methodology used to develop this information within 60 days of enactment of the Act with a subsequent report due by March 1, 2016. USPS has not reported data on on-time delivery performance based on a rural or non-rural distinction. USPS officials told us that no overall assessment of rural delivery service, separate and apart from urban/suburban delivery, has been undertaken since PAEA required delivery performance measurement, reporting, and assessment. USPS officials added that its delivery performance data provide a basis for internal diagnosis and assessment of operations and service, and satisfy USPS’s reporting obligations to PRC. USPS officials noted that its reports are generated at the national, area, and district level for these purposes, but are not routinely further disaggregated on the basis of whether particular districts or ZIP Codes are rural, suburban or urban in nature. On-time delivery performance information at the district level cannot inform stakeholders on delivery performance in rural areas since each of USPS’s 64 districts in the continental United States contains at least one core area with a population over 10,000 and thus is not entirely rural. USPS officials told us that USPS’s service performance measurement systems do not differentiate between rural and urban locations and that it may be cost prohibitive to attempt to measure performance of mail pieces in rural areas using an external data system. 
However, in response to the recent congressional request for PRC to report on rural mail delivery performance, USPS told us that it has begun collaborating with the technical staff at PRC to determine how measurement may account for rural origin and destination points and that its new, proposed internal service performance measurement plan might provide greater insight on service performance measurement specific to rural areas, assuming USPS and PRC can arrive at a reasonable definition of “rural” origin and destination points. At this time, however, USPS officials added that USPS was at an exploratory stage of the analysis and was not able to offer definitive conclusions on the feasibility of adding this feature to USPS’s measurement plans. PRC officials told us that they are currently working with USPS to determine how they will respond to the congressional request for rural delivery performance information. PRC officials also told us that PRC has limited its previous assessments regarding whether USPS met its delivery service standards for market-dominant types of mail to national results and has not conducted any rural-level analysis. PRC officials told us that PRC does not play a direct role (e.g., either annually or quarterly) in monitoring or reporting on USPS’s universal delivery service obligation (aside from annually estimating the cost of universal postal service), noting that PRC is not legally required to do so, nor has PRC been directed by Congress to play this role. PRC officials added that PRC has not considered requiring USPS to report quarterly and annual information on delivery speed and reliability in urban versus rural areas, because 1) PRC has not been specifically mandated by statute to require USPS to provide delivery service performance information separately for rural and urban areas and 2) in PRC’s previous assessments, Congress has not provided specific direction requiring USPS to implement such measurement and reporting. 
As noted previously, USPS officials told us that the costs of additional requirements for USPS to collect and report urban and rural delivery performance information through existing measurement systems would likely greatly outweigh the benefits. However, USPS and PRC were not able to provide specific cost estimates related to having USPS measure and report on delivery performance in rural and urban areas. We asked USPS for this information, but it did not provide it; USPS officials explained that there is no clear definition or defined approach to measure what should be considered rural. USPS officials also told us that the cost would depend on the specificity of the data, such as whether there would be national-level results for urban and rural areas or detailed geographic breakdowns. We also asked PRC about the costs of providing delivery performance information in rural and urban areas. PRC responded that it has the authority to specify requirements for USPS’s delivery performance measurement, but that when considering reporting requirements for USPS, it is to give consideration to unnecessary or unwarranted administrative effort and expense by USPS. On this matter, PRC officials said that they do not know what the costs might be for USPS to collect data on delivery performance in rural and urban areas. Neither of the congressional directives mentioned above regarding studying delivery performance in rural areas directly addresses the costs associated with requiring rural delivery performance information. Without cost estimates, Congress may not have all the information it needs to understand the full implications of requiring data on delivery services in rural and non-rural areas. 
Quality delivery performance information is needed for USPS and postal stakeholders such as PRC, Congress, business mailers, and the general public to develop useful analysis that can help oversee or assess how USPS balances cost-cutting to address its poor financial situation with maintaining affordable postal rates and providing timely, universal delivery service. Thus, it is important for both USPS and PRC to report delivery performance information in a sufficiently complete, transparent, and useful manner. Although USPS has made progress since PAEA was enacted in 2006, its delivery performance information is not complete, and it is unclear when USPS will achieve its goal of measuring on-time delivery for nearly all market-dominant mail volume. USPS measured on-time delivery for only 55 percent of market-dominant mail volume in the second quarter of fiscal year 2015. As a result, the data may not be representative because performance may differ for mail not included in measurement. Although PRC’s reports provide data on the amount of mail included in measurement, they have neither fully assessed the reasons why these measurements are incomplete, nor specified what actions USPS needs to take and the related time frames needed to achieve complete performance measurement. PRC may initiate proceedings to improve the completeness and quality of delivery performance data, but it has not exercised this option. Although USPS and PRC are opposed to such a proceeding, we believe that a PRC proceeding that focuses on issues of data completeness—particularly the problem of excluding mail due to a lack of information—could facilitate more rapid progress by USPS and the mailing industry toward complete measurement. 
USPS and PRC annual and quarterly reports on delivery performance information are not as useful for oversight purposes beyond the annual compliance assessments because they do not include sufficient analysis that would facilitate holding USPS accountable for meeting its statutory mission to provide prompt, reliable, and efficient services in all areas of the nation, including rural areas. For example, neither USPS nor PRC reports trend data below the national level for all of USPS’s 67 districts to indicate whether performance is improving or getting worse in different parts of the nation. Further, delivery performance information is not sufficiently transparent as it is not readily available or easily accessible on either USPS’s or PRC’s website. Also, postal stakeholders—such as PRC, Congress, business mailers, and the general public—cannot determine whether delivery performance is a problem in rural areas because USPS and PRC are not required to report delivery performance information separately for rural versus non-rural areas. USPS believes that such an analysis would be costly, even though it does not know how much it would actually cost. Such cost information would be useful for Congress to have in order to assess whether developing this information would be appropriate. In addition, USPS and PRC are in the process of responding to a recent congressional request to determine the feasibility of reporting on rural mail delivery performance, which could facilitate determining the associated costs. To assist in determining whether to require USPS and PRC to report on delivery performance for rural and non-rural areas, Congress should direct USPS to provide cost estimates related to providing this information. 
To improve the completeness of USPS delivery performance information, we recommend that the Acting Chairman of PRC and the other PRC Commissioners exercise PRC’s statutory authority to hold a public proceeding involving USPS, the mailing industry, and interested parties to address how USPS can improve the completeness of USPS’s delivery performance information. To improve the usefulness and transparency of USPS’s and PRC’s reporting of delivery performance information, we recommend that: The Postmaster General provide additional and readily available delivery performance information, such as trend data for on-time delivery performance for all 67 postal districts. The Acting Chairman of PRC and the other PRC Commissioners provide readily available data and additional analysis of USPS’s delivery performance information so that stakeholders can better understand trends and variations in mail delivery performance. We provided a draft of this report to USPS and PRC for review and comment. USPS and PRC provided written responses, which are reproduced, respectively, in appendixes II and III of this report. PRC and USPS agreed with the recommendations addressed to them. Specifically, PRC agreed to hold a proceeding to address how USPS can improve the completeness of USPS’s delivery performance—after, as we reported, initially indicating it was opposed to such a proceeding. Although not the addressee of this recommendation, USPS disagreed with it stating that its measurement systems conform to the Office of Management and Budget’s standards and guidelines for statistical surveys, and that it employs a contractor with long-standing expertise in developing statistically valid and reliable systems. However, we found that key data quality issues involve the lack of completeness of census-type measurement and the associated risk of non-sampling error—issues that are separate from matters of statistical design. 
Further, USPS said its continuing collaboration with the mailing industry is more likely to stimulate industry cooperation and buy-in than lengthy, time-consuming proceedings before PRC. We continue to believe that a new PRC proceeding on data quality would add value to USPS’s continuing collaboration with the mailing industry. Our report notes that representatives of mailing industry groups and some mailers told us, and commented in PRC proceedings, that PRC should become more involved in issues of the quality of measurement data for on-time delivery performance, including issues regarding the exclusion of mail from measurement. Both USPS and PRC agreed with our recommendations to improve the usefulness and transparency of delivery performance information that they report. USPS acknowledged that the delivery service performance data it reports on its website lacks the granularity of the reports it publicly files with PRC, and does not serve the purposes of in-depth congressional oversight. USPS added that although it is not clear that typical household mailers would use or find value in that level of data granularity, it will pursue establishing a distinct portal for public access to delivery service performance reports, in an effort to be more transparent. Likewise, PRC agreed that information it receives and produces regarding performance measures for USPS can be better organized, and it has updated its website in response to our recommendation. We are encouraged by USPS’s and PRC’s willingness to adopt these recommendations since, as we have recently reported, the Congress’s and the public’s confidence in the quality of performance information that federal agencies are using to assess and achieve results requires that information be publicly reported in a clear and readily accessible way. 
Although both agencies agreed with the recommendations addressed to them, they disagreed with certain findings, conclusions, and the supporting analytical basis used in this report. Key among the disagreements were our treatment of the completeness of USPS’s data, the appropriateness of the criteria we used for our assessment of PRC’s oversight and analysis, and the usefulness of USPS’s and PRC’s reporting on delivery performance. Regarding our treatment of data completeness, USPS said it understood in the abstract the basis for our finding that delivery performance may differ between mail included in measurement and mail that is not measured. However, it disagreed that this is the case in practice. Specifically, USPS stated that mail pieces that are included in performance measurement should have virtually the same performance results as mail not included in measurement. Additionally, PRC stated that data reliability has markedly improved as a result of PRC’s directives to USPS regarding measurement systems. As discussed in our report, although the completeness of measurement has improved over the past 9 years, 45 percent of market-dominant mail is still not included in measurement. Also, we identified a number of reasons to be concerned that delivery performance may differ between mail that is included in measurement and mail that is not. For example, large-volume mailers, who are most likely to apply barcodes and thus have on-time delivery performance measured, use additional mailing practices to facilitate the timely delivery of their mail, such as entering large volumes of advertising mail close to its final destination. 
In addition, destination-entry mail has sufficient volume and preparation to enable it to bypass various postal network processing and transportation steps (e.g., mail that is locally entered and delivered is handled by only one processing facility)—a strategy that, according to USPS, is the reason this mail is more likely to be delivered on time. As we reported, USPS has set a goal of including virtually all market-dominant mail in measurement, using a census-type approach, and continues to strive for including more mail volume in measurement. Further, USPS raised the concern that increasing the proportion of mail included in measurement may come at potentially significant cost, which we believe is a topic that could be further explored in the recommended proceeding. USPS disagreed with the specific example given in our report that destination-entered Standard Mail is more likely to be included in measurement than other “end-to-end” Standard Mail and is more likely to be delivered on time. This example suggests that results for Standard Mail as a whole are higher than they would be if all Standard Mail were included in measurement. USPS said that when compiling the national on-time delivery percentage for all Standard Mail, it weights results for measured mail pieces by shape and entry type so they are compiled in proportion to their prevalence in the entire population of Standard Mail. However, it does not appear that USPS applies weighting procedures when compiling results for individual Standard Mail products—such as Standard Mail Flats and Standard Mail Letters that have significant proportions of both destination-entry and end-to-end mail. Therefore, we have clarified our report to state that available information indicates that non-participation can affect results for some Standard Mail products, particularly if product-specific results are not weighted to reflect key characteristics of the mail. 
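The weighting idea USPS describes can be sketched with a short, purely hypothetical calculation: results for measured pieces are compiled in proportion to each stratum's share of the entire mail population rather than its share of the mail that happens to be measured. All rates and shares below are illustrative assumptions, not USPS figures.

```python
# Hypothetical on-time rates and volume shares for two strata of a
# single mail product. Destination-entry mail performs better and is
# assumed to be overrepresented among measured pieces.
measured = {
    # stratum: (on-time rate among measured pieces, share of measured volume)
    "destination-entry": (0.93, 0.80),
    "end-to-end":        (0.75, 0.20),
}
# Each stratum's share of the full population (measured or not).
population_share = {"destination-entry": 0.60, "end-to-end": 0.40}

# Unweighted: strata counted in proportion to the measurement mix.
unweighted = sum(rate * share for rate, share in measured.values())
# Weighted: strata counted in proportion to the population mix.
weighted = sum(rate * population_share[s] for s, (rate, _) in measured.items())

print(f"Unweighted (measurement mix): {100 * unweighted:.1f}%")
print(f"Weighted (population mix):    {100 * weighted:.1f}%")
```

Because the better-performing destination-entry mail is overrepresented among measured pieces in this sketch, the unweighted figure overstates the population-weighted figure by several percentage points, which is why unweighted product-level results can be affected by non-participation.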
In addition, USPS and PRC made various comments on the importance of the statistical properties of USPS systems that measure on-time delivery performance. USPS said it employs a firm with long-standing expertise in developing measurement systems that are statistically valid and reliable, and provided a letter from this contractor stating that the measurement systems are designed in a manner to be statistically valid and representative. PRC said it reviews USPS data using statistical principles that determine whether service performance data are sufficient and the results are meaningful. Specifically, PRC said sampling fractions, confidence intervals, and margins of error are the primary factors it uses to determine whether data are accurate and reliable. We agree that statistical considerations should inform the assessment of data collected through sampling. However, we also note that such statistical principles are not relevant to evaluating the quality of incomplete data collected using census-type measurement, which is the case for most types of market-dominant mail. An error created when non-measured mail has different on-time delivery performance from measured mail is a “non-sampling error”—as opposed to a “sampling error” that is associated with measurement based on a random sample. Non-sampling errors can affect results, regardless of how valid the statistical design of USPS’s measurement systems may be. As previously discussed, results based on incomplete data can be affected when the measurement process disproportionately includes mail that is more likely to be delivered on time. PRC also was critical of our focus on data completeness, stating that it is not a meaningful statistical measure and that PRC has not concluded that the percentage of mail in measurement should be the primary determinant of accurate, reliable, or representative measurement data. USPS stated that our report does not specify what an appropriate level of measurement may be. 
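The non-sampling error described above can be made concrete with a short calculation; all rates here are hypothetical assumptions, chosen only to show the mechanism.

```python
# Illustrative only: with 55 percent of mail in measurement, the
# observed on-time rate for measured mail overstates the true rate
# whenever the unmeasured 45 percent performs worse. No sampling-based
# statistic (confidence interval, margin of error) detects this bias.
share_measured = 0.55
rate_measured = 0.90     # observed for mail in measurement
rate_unmeasured = 0.80   # unknown in practice; assumed lower here

true_rate = (share_measured * rate_measured
             + (1 - share_measured) * rate_unmeasured)
overstatement = rate_measured - true_rate

print(f"Observed rate (measured mail only): {100 * rate_measured:.1f}%")
print(f"True rate (all mail):               {100 * true_rate:.1f}%")
print(f"Overstatement:                      {100 * overstatement:.1f} points")
```

In this sketch the measured-mail rate of 90 percent overstates the true rate by roughly 4.5 percentage points, and the gap shrinks to zero only if the unmeasured mail performs the same as the measured mail or if nearly all mail is brought into measurement.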
However, PRC has not defined what an appropriate percentage of mail in measurement would be for measuring on-time delivery based on census-type measurement, while USPS has set a goal of including virtually all mail volume in measurement. For most types of market-dominant mail, measuring on-time delivery performance involves census-type measurement as well as measurement based on sampling. For the mail that is included in measurement based on a census-type approach, assessing non-sampling error would require determining whether the mail not included in measurement systematically differed from the mail included in measurement, particularly regarding characteristics associated with on-time delivery. Regarding the appropriateness of the criteria we used for our assessment of PRC’s oversight and analysis, PRC stated that our assessment was based on GAO-created criteria rather than the statutory requirements in PAEA. We agree that our assessment was not intended to determine PRC’s compliance with its statutory requirements. Rather, our review used criteria that are appropriate for assessing an organization’s practices for reporting delivery performance information that would be useful for management and congressional decision-making. As noted in our report, the criteria we used to assess USPS’s and PRC’s measurement and reporting of delivery performance information are based on current laws—including PAEA—and regulations, as well as previously identified practices used by high-performing agencies and prior GAO reports. Specifically, our criteria are a result of reviewing delivery performance measurement and reporting provisions applicable to USPS and PRC in PAEA and PRC regulations, which we summarized in table 3. In addition, we believe that certain government principles can help inform congressional and executive branch decision-making to address challenges. 
For example, USPS should disclose more information about the accuracy and validity of its performance data and actions to address limitations to the data. Our prior work has found that without useful performance information, it is difficult to monitor agencies’ progress toward critical goals. PRC also disagreed with a statement in our draft report that its reports have not assessed why USPS’s delivery performance measurements were incomplete nor specified what actions USPS needs to take to achieve complete performance-measurement data. PRC said it has assessed the primary reasons measured mail may be inaccurate, unreliable, or not representative of nationwide performance, including data not in Full-Service Intelligent Mail, uncategorized mail, invalid data, and low district-level volumes. PRC said its reports have regularly directed USPS to improve data reliability and accuracy by increasing participation in Full-Service Intelligent Mail, increasing measured volumes for mail product categories in certain districts, and increasing the number of districts providing results. We agree that PRC reports have addressed some issues related to the quality of delivery performance data, such as providing data on the amount of mail included in delivery performance measurement and expressing concern with low levels of participation for certain types of mail. However, as we discuss in our report, PRC has not fully assessed why these measurements were incomplete, whether USPS actions will achieve complete performance data, why lack of participation remains a significant issue, and whether there are practical opportunities to make progress. Further, recent PRC reports have not assessed what have become the primary causes for excluding mail pieces from measurement, including no “start-the-clock” information, no mail piece barcode scan recorded by USPS automation equipment, and inaccuracies in mail preparation. 
Thus, we continue to believe that PRC has opportunities to improve its oversight, and we encourage PRC and all stakeholders to explore these causes in the forthcoming proceeding. Regarding the usefulness of reporting, PRC and USPS disagreed with our characterization that USPS’s and PRC’s reports are not sufficiently useful for effective oversight. PRC objected to the implication that it is not fully successful in meeting its oversight responsibilities, and added that it has provided strong oversight in achieving the transparency and accountability required by Congress and that its reports are useful. While we recognize that PRC is not statutorily required to assess USPS’s performance in providing mail to all parts of the country—including rural areas—USPS is still responsible for adhering to these requirements, and no other oversight agency exists to hold USPS accountable for doing so. Given the broad scope of recent changes in postal operations, network consolidations, and service standard changes, Members of Congress and other postal stakeholders have raised concerns about the impact of these changes on delivery performance. Thus, effective oversight is even more critical to ensure that any delivery performance problems are promptly identified and addressed. USPS and PRC also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Postmaster General, the Acting Chairman of the Postal Regulatory Commission (PRC), the other PRC Commissioners, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix IV. This report assesses (1) the U.S. Postal Service’s (USPS) measurement of mail delivery performance and the Postal Regulatory Commission’s (PRC) oversight of this measurement and (2) USPS’s and PRC’s reporting of this information. To conduct this work, we assessed whether USPS’s measurement of its delivery performance is complete and whether USPS’s and PRC’s reporting on this performance is useful and transparent. To make our assessments, we compared USPS’s and PRC’s measurement and reporting efforts to specific elements associated with these criteria. We originally developed these criteria for a 2006 report that assessed USPS’s delivery service standards, measures, and reporting. In developing those criteria, we identified applicable laws related to USPS’s mission, ratemaking, and reporting, and practices used by high-performing organizations related to delivery service standards, measurement, and reporting, including practices identified through our past work. For this review, as table 4 below illustrates, we adapted and updated each criterion identified in the 2006 report. We reviewed current laws, previously identified practices used by high-performing agencies, and prior GAO reports to identify specific, observable elements associated with each criterion, in order to make a more direct assessment of the extent to which delivery performance information is complete, useful, and transparent. For example, we reviewed provisions in the Postal Accountability and Enhancement Act (PAEA) and implementing PRC regulations that established the legal framework for measurement of mail delivery performance, PRC’s oversight of this measurement, and reporting of this information. 
To identify practices for reporting delivery performance information that would be useful for management and congressional decision making, we reviewed the Government Performance and Results Act of 1993 (GPRA) and the GPRA Modernization Act of 2010 framework for meeting fiscal, management, and performance challenges, practices used by high-performing agencies, and prior GAO reports. To assess delivery performance measurement, we reviewed documentation of mail delivery performance, the measurement systems used to develop this information, and limitations of these systems. We also reviewed USPS’s annual reports to Congress and PRC, PRC’s annual compliance determinations, Mailers’ Technical Advisory Committee presentations, and other documentation on USPS’s current measurement systems and the data USPS collects. In addition, we reviewed relevant documentation regarding USPS’s proposal to replace its External First-Class Mail measurement system (EXFC), including USPS’s proposal, stakeholder comments on the proposal, and USPS’s responses to stakeholder comments. Between December 2014 and June 2015, we received written responses and data from USPS and PRC related to mail delivery performance measurement and associated limitations and interviewed USPS and PRC officials. USPS’s responses contained data on the amount of mail ineligible for delivery performance measurement and excluded from delivery performance measurement in fiscal years 2010 through the second quarter of fiscal year 2015. We assessed the reliability of USPS’s data through a review of related documents, such as written responses from USPS. We found these data sufficiently reliable for providing a general description related to the completeness of delivery performance information. To assess PRC’s oversight of delivery performance information, we reviewed PRC’s annual compliance determinations and other reports, obtained written responses from PRC and USPS, and interviewed PRC and USPS officials. 
We also interviewed representatives of mailing industry groups and business mailers with expertise on delivery performance measurement and postal issues to discuss the completeness of delivery performance information reported by USPS and PRC's assessment of this information. We used our professional judgment to select these representatives; thus, the responses we received from them are not generalizable to the entire mailing industry. We also reviewed laws, regulations, and PRC orders and determinations to identify any guidance or requirements for USPS and PRC related to the quality of delivery performance information. To assess reported delivery performance information, we reviewed the mail delivery performance information reported in USPS annual reports to Congress, PRC annual compliance determinations and other reports, and on the USPS and PRC websites. We assessed the usefulness of the reported information for overseeing how effectively USPS fulfills its statutory mission to provide prompt, reliable, and efficient services to all areas of the country (universal delivery service), and a maximum degree of effective service in rural areas. We also interviewed representatives of mailing industry groups and business mailers with expertise on delivery performance measurement and postal issues to discuss the usefulness of delivery performance information reported by USPS and PRC. We used our professional judgment to select these representatives; thus, the responses we received from them are not generalizable to the entire mailing industry. To determine the extent to which delivery performance information is transparent, we reviewed delivery performance information USPS and PRC disclose on their websites to assess the extent to which it is easily accessible and readily available.
We also reviewed laws and regulations to identify any requirements related to reporting delivery performance information in a transparent manner. We conducted this performance audit from October 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, key contributors to this report were Teresa Anderson (Assistant Director); Samer Abbas; Kenneth John; Thanh Lu; Malika Rice; Amy Rosewarne; Kelly Rubin; and Crystal Wesco.
USPS is in the difficult position of balancing cost-cutting actions to address its poor financial situation with efforts to provide prompt, affordable, and reliable mail service. GAO has previously reported that complete, useful, and transparent delivery performance information is essential for USPS and stakeholders to understand USPS's success in achieving this balance. GAO was asked to review how USPS measures delivery performance and how PRC assesses this information. GAO assessed (1) USPS's measurement of mail delivery performance and related oversight by PRC and (2) USPS's and PRC's reporting of this information. GAO reviewed USPS and PRC delivery performance data for fiscal years 2010-2015, delivery service standards, and measurement system documents, as well as applicable laws and leading practices identified in GAO's prior work. U.S. Postal Service (USPS) measurement of on-time delivery performance has expanded greatly over the past 9 years, but remains incomplete because only 55 percent of market-dominant mail (primarily First-Class Mail, Standard Mail, Periodicals, and Package Services) is included (see fig.). The remaining 45 percent is excluded due to various limitations, such as not having barcodes to enable tracking. Incomplete measurement poses the risk that measures of on-time performance are not representative, since performance for mail included in measurement may differ from performance for mail that is not. Complete performance information enables effective management, oversight, and accountability. In addition, the Postal Regulatory Commission (PRC) has not fully assessed why USPS data are not complete and representative. While PRC's annual reports have provided data on the amount of mail included in measurement, they have not fully assessed why this measurement was incomplete or whether USPS actions will make it so. PRC may initiate a public inquiry docket (a type of proceeding) to improve data quality and completeness, but has not done so.
Such a proceeding could facilitate evaluating data quality and identifying areas for improvement, as well as actions and time frames to complete improvements. USPS's and PRC's reports on delivery performance are not as useful as they could be for effective oversight because they do not include sufficient analysis to hold USPS accountable for meeting its statutory mission to provide service in all areas of the nation. USPS's and PRC's reports provide national-level analysis, as legally required. However, this analysis does not facilitate an understanding of results and trends below the national level, such as for USPS's 67 districts, to identify variations and areas where improvements are needed. Further, delivery performance information is not sufficiently transparent or readily available. USPS posts only the most recent quarterly report on its website, making it difficult for stakeholders to access trend data. Also, USPS and PRC are not required to provide, and do not report, performance information for rural areas. While several Members of Congress have recently requested studies on rural delivery performance, USPS has stated that such analysis would be costly, even though it could not provide specific cost estimates. Such cost information would be useful for Congress to assess whether developing this information would be appropriate. To assist in determining whether to require USPS and PRC to report on delivery performance for rural areas, Congress should direct USPS to provide cost estimates related to providing this information. Further, GAO recommends that USPS and PRC take steps to improve the completeness, analysis, and transparency of delivery performance information. USPS and PRC agreed with the recommendations addressed to them, but disagreed with certain findings on which they are based. GAO believes these findings are valid, as discussed in this report.
Cost estimation is a difficult process that requires both data and judgment, and seldom, if ever, are estimates precise—the goal is to find a "reasonable" estimate of future needs. Cost estimates are necessary for government programs for many reasons: for example, to support decisions about whether to fund one program over another, develop annual budget requests, or evaluate resource requirements at key decision points. As discussed in our Cost Assessment Guide, developing a good cost estimate requires stable program requirements, access to detailed documentation and historical data, well-trained and experienced cost analysts, a risk and uncertainty analysis, and the identification of a range of confidence levels. The guide also outlines 12 steps for a high-quality cost estimation process:

1. Define the estimate's purpose.
2. Develop the estimating plan.
3. Define the program.
4. Determine the estimating approach.
5. Identify ground rules and assumptions.
6. Obtain the data.
7. Develop the point estimate.
8. Conduct sensitivity analysis.
9. Conduct risk and uncertainty analysis.
10. Document the estimate.
11. Present the estimate to management for approval.
12. Update the estimate to reflect actual costs and changes.

It is important that cost estimators and independent organizations validate that all cost elements are credible and can be justified by acceptable estimating methods, adequate data, and detailed documentation. Hence, in addition to the 12 steps of a high-quality cost estimation process, the guide also describes four best practice characteristics of a high-quality, reliable estimate generated by a sound cost estimation process. Specifically, the estimate should be well documented, comprehensive, accurate, and credible. Table 1 describes these characteristics in more detail.
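Sensitivity analysis, one of the 12 steps in the guide's process, can be illustrated with a minimal one-at-a-time sketch: each cost element is varied individually to see how much the total moves. The cost elements and the 10 percent swing below are hypothetical, not drawn from any actual program estimate.

```python
# One-at-a-time sensitivity analysis on a hypothetical point estimate.
# All cost elements (in $ millions) and the 10 percent swing are
# illustrative only, not actual program data.

def point_estimate(elements):
    """Sum the cost elements to a single point estimate."""
    return sum(elements.values())

def sensitivity(elements, swing=0.10):
    """Vary each element by +/- swing and report the resulting totals."""
    base = point_estimate(elements)
    results = {}
    for name, value in elements.items():
        delta = value * swing
        results[name] = (base - delta, base + delta)
    return base, results

elements = {"personnel": 500.0, "operations": 300.0, "transportation": 200.0}
base, swings = sensitivity(elements)
# The element that produces the widest range is the dominant cost driver.
```

The element whose swing widens the total the most is the one an estimator would document and monitor most closely.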
Adherence to these best practices can help ensure that a cost estimation process provides a reasonable estimate of how much it will cost to accomplish all tasks related to a program and that the estimate is traceable, accurate, and reflects realistic assumptions. Appendix II presents the 12 steps of a high-quality cost estimation process mapped to the four characteristics of reliable, high-quality estimates. Because DOD’s cost estimates for military operations in Bosnia during the 1990s were consistently well below the actual costs, DOD contracted with IDA to develop a tool to assist in developing preliminary and detailed cost estimates for contingency operations. By 1998, IDA had developed the first version of COST. The tool generates a cost estimate for a contingency operation on the basis of type of mission, duration, operational tempo or intensity, number of personnel and equipment, transportation needs, subsistence for personnel, and the originating and destination site. Accordingly, the tool contains data relating to geographic locations, military unit types, military equipment types, management and cost factors, and adjustment factors pertaining to climate, terrain, and operational intensity. Formulas within the tool draw on these data and user-defined inputs, such as the number of personnel or equipment and duration of operations, to develop cost estimates for many types of costs associated with contingency operations from predeployment to reconstitution, up to 250 line items or types of costs depending on the operation in question. The tool cannot estimate every type of cost that might be incurred for a contingency operation. Rather, it can only estimate certain incremental costs from the personnel, personnel support, operating, and transportation cost categories. 
However, some types of costs within those categories—such as depot-level maintenance of equipment and certain contracts, as well as all costs in the investment cost category, including procurement and military construction—are outside the scope of COST's estimating capabilities. Military components, including the services, are primary sources of data for the tool. Table 2 illustrates the primary sources of data from each component that uses COST to develop estimates for GWOT budget requests. Cost factors and management factors are key types of data that the tool uses to develop estimates. Cost factors function as variables in the tool, and a few examples are hardship duty pay, the cost of operating and supporting a ship, or the average cost per ton mile for equipment airlift. Cost factors are developed from four main sources: DOD databases of record, service models for various statistics such as Air Force flying hours, DOD's Cost of War reports, and other information provided by the service budget offices. Management factors include information such as the average metric tons per person of materiel for deployment or redeployment, or the monthly flying hours of aircraft. The services use COST as required as part of their process to develop a GWOT budget request, but the degree to which each service relies on the COST results differs. Both DOD officials and IDA representatives state that COST is better suited for Army ground forces and was primarily intended to develop incremental personnel and operations cost estimates for operations with discrete phases and time frames. They noted it was not built to estimate costs for the long-term nature and broad scope of activities and needs related to operations such as GWOT. Therefore, service officials report varying degrees of confidence in the tool's functionality and accuracy for their specific service.
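The tool's basic mechanics as described above, in which stored cost factors are combined with user-defined inputs such as personnel counts, duration, and cargo, can be sketched roughly as follows. Every cost factor and line item in this sketch is invented for illustration and is not an actual COST value.

```python
# Minimal parametric sketch of a contingency cost estimate: stored
# cost factors are combined with user-defined inputs (personnel,
# duration, cargo). Every factor below (in $ millions) is a made-up
# illustration, not an actual COST value.

COST_FACTORS = {
    "hardship_duty_pay_per_person_month": 0.000225,
    "subsistence_per_person_month": 0.0003,
    "airlift_per_ton": 0.005,
}

def estimate(personnel, duration_months, cargo_tons):
    """Return a line-item estimate (in $ millions) for a deployment."""
    person_months = personnel * duration_months
    lines = {
        "hardship_duty_pay": person_months
            * COST_FACTORS["hardship_duty_pay_per_person_month"],
        "subsistence": person_months
            * COST_FACTORS["subsistence_per_person_month"],
        "airlift": cargo_tons * COST_FACTORS["airlift_per_ton"],
    }
    lines["total"] = sum(lines.values())
    return lines

est = estimate(personnel=10000, duration_months=12, cargo_tons=5000)
```

The actual tool applies the same pattern across many more factors—up to 250 line items, per the report—and adjusts them for climate, terrain, and operational intensity.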
The Army relies on COST’s results when developing estimates related to personnel and operations, while Air Force, Navy, and Marine Corps officials often rely more on historical obligation data and other information than the COST-generated estimates. COST does not currently have the capability to estimate other types of costs, such as procurement, most equipment maintenance, and some contract costs. As a result, all the services develop this portion of GWOT budget requests using historical obligation data and other information. DOD components are required to use COST as part of the GWOT budget request process. The DOD financial management regulation that provides financial policy and procedures for small-, medium-, and large-scale campaign level military contingency operations requires that COST be used to develop a cost estimate for the deployment of military personnel and equipment. The regulation further states that the DOD Comptroller will issue specific guidance providing factors and cost criteria necessary to develop an estimate, and that the COST estimate will address the funding requirements for operations and maintenance and military personnel costs. The DOD Comptroller issues guidance that directs the development of a fiscal year’s GWOT budget request. This guidance specifically directs the use of COST to calculate operations costs related to GWOT, and also provides information guiding COST use, such as the level and intensity of operations to be assumed. Much of the guidance details the type and level of detail that must be provided in supporting materials that should accompany components’ estimated GWOT budget requests. COST does not develop estimates for items that are not attributable to the deployment or sustainment of personnel and equipment, such as procurement, most types of equipment maintenance, and certain major contracted needs and services. 
As discussed later, the services must use other processes to develop this portion of their GWOT budget requests. Neither DOD Comptroller guidance nor DOD financial management regulations prescribe the use of any particular method of developing estimates for these categories. A 2006 memo from the Deputy Secretary of Defense expanded allowable costs for GWOT in several categories, especially reset-related procurement and equipment maintenance. These types of costs accounted for about 70 percent of the total GWOT budget request for fiscal year 2008, which was a significant increase over previous years. Due to this increase in costs outside the tool, the COST-related portion of GWOT budget requests fell from about 80 percent to about 30 percent of the total, although the amount of the COST-generated estimate remained stable at between $40 billion and $55 billion per year. The Army uses COST as intended by relying on the tool to generate an estimate for many personnel and operations costs for GWOT budget requests. During the Army's development of an estimate using COST, minor adjustments to COST's standard settings are made to better match realities on the ground. For example, an official developing an estimate in COST might reduce the costs for the transportation of equipment if a unit will be using equipment already in theater instead of taking equipment with them. DOD officials and IDA representatives stated that COST is better suited for Army ground forces; therefore, the Army relies on the final estimate developed by COST and submits this information to the DOD Comptroller as part of its GWOT budget request. About 40 to 45 percent of the Army's final GWOT budget requests are typically for the operation and maintenance category of appropriations, and an Army budget office official stated that the majority of this portion is estimated by COST.
Army officials further stated that COST is an effective tool for cost estimation because it is frequently updated with cost data the Army submits to IDA. An Army model and database that contain cost information for personnel and equipment are the sources for much of the Army-related data used by COST and also are primary sources for developing the Army’s base budget requests. The Air Force, Marine Corps, and Navy all fulfill the requirement to develop an estimate for their respective service’s personnel and operations funding requirements for GWOT using COST; however, these services significantly alter the COST results to match estimates they have developed outside COST, using historical obligation data and other information. Officials from the Air Force, Marine Corps, and Navy budget offices reported various concerns regarding the functionality and accuracy of COST as reasons for relying more on historical obligation data and other information. Several budget officials from each of these services reported that COST routinely overestimated some costs. As a result, most of the changes they make to COST results, based on historical obligation data, are decreases in the amount estimated by COST. For example, a service budget official stated that for one fiscal year’s GWOT budget request, the total COST-developed estimate was $100 million more than the estimate developed by the service for the same types of costs using historical obligation data. Specifically, COST overestimated transportation costs by about $275 million, while underestimating certain personnel support costs by about $200 million, among other discrepancies. Navy officials stated that COST often overestimated some types of transportation costs and the results had to be manually adjusted to match historical obligation data or other information. Similarly, Marine Corps officials reported that most adjustments they make to the COST output result in a decrease of the estimate. 
An Air Force budget official stated that COST overestimated some transportation costs by about $1 billion in a prior year’s estimate, while other costs were not captured and therefore underestimated. While these discrepancies were adjusted prior to submission as a GWOT budget request, this official stated that an IDA representative had since suggested strategies to develop a more accurate transportation estimate in future uses of the tool. Furthermore, a Navy official stated that COST is unable to project certain costs associated with civilians, and hence might underestimate the total costs due to this exclusion. For example, the official stated that COST does not automatically estimate costs for civilian support positions associated with an operational unit, such as a ship or ground unit. Aside from accuracy concerns, officials from the Air Force, Marine Corps, and Navy reported that COST and the cost breakdown structure that forms the basis of COST’s organization and resulting estimations better represent Army ground forces than the unique characteristics of the other forces. Navy officials stated that COST automatically estimates food, ice, and water for all units because deployed ground forces require these items. However, the Navy funds these items for sailors on deployed ships through base budget funding because these costs are incurred regardless of a ship’s location. The tool has not been refined to accurately estimate these costs; therefore, Navy officials must manually remove these types of costs from an estimate for GWOT funding. Several service officials stated that, because of the limitations to COST, the required process of using COST to develop a cost estimate was duplicative of their preferred method of using historical obligation data and other information better suited to their specific service to develop a GWOT budget request. 
Service officials stated that COST is a useful tool for estimating costs for small-scale and short-duration operations or for situations for which information is unknown or new, such as the recent troop surge, or for other general rough-order-of-magnitude estimates produced early in operation planning while options are being weighed by decision makers. However, officials stated the tool does not perform as well for estimating costs for the lengthy deployment and sustainment phases associated with a large campaign such as GWOT. Furthermore, COST is not able to estimate all costs associated with GWOT. Because COST does not have the capability to estimate costs such as procurement, reset-related equipment maintenance, and contracted needs and services, service budget officials report using historical obligation data, other models and formulas, and other information, such as deployment information, to estimate these costs. Reset-related procurement estimates for GWOT are devised in multiple ways. For example, to estimate costs to procure new equipment to replace lost or damaged equipment, officials stated that incident reports are tracked to provide information on how many pieces of equipment are needed to replace battle losses. For procurement to replace equipment that has reached the end of its useful life because of GWOT's higher operating tempo, formulas based on historical data provide information on the normal extent of wear and tear for certain types of equipment, and wear above the normal extent is attributed to GWOT. Most types of equipment maintenance, such as intermediate- and depot-level maintenance, are not estimated by COST; therefore, the services again rely on historical obligation data to develop estimates for the GWOT-related costs.
For example, Army logistics officials track the units that are scheduled for redeployment and develop estimates for the cost of resetting a particular unit's equipment based on the type of brigade the unit is part of, such as a heavy brigade combat team or Stryker brigade combat team, and the average cost of resetting that type of brigade unit developed from historical obligation data, adjusted for inflation. For other items outside the scope of COST—such as the Logistics Civil Augmentation Program costs for the Army, intelligence needs, and contracts for other needs such as linguists—functional experts within each service provide the service budget offices with information to develop estimates. This information is based on contract task orders or needs assessments developed by in-theater commanders. For example, in-theater commanders submit requests for additional linguists to Army intelligence officials, and the estimated cost for these linguists is developed based on historical costs for the same type of linguist. Army budget officials stated that contracting costs, including the Logistics Civil Augmentation Program, linguist, and security services, are one of the most expensive cost categories that falls outside of COST. Officials from all services stated that, after many years of ongoing operations in support of GWOT, they believe few requirements are truly unknown or based on emerging needs, and they are therefore comfortable relying on historical obligation data and other information to develop estimates for these types of costs. For an example of how DOD develops a GWOT budget request and the use of COST and other methods of developing an estimate, see appendix III. DOD has taken steps to improve the performance and reliability of COST; however, COST could benefit from a review of the tool's adherence to best practices for high-quality cost estimation as outlined in our Cost Assessment Guide.
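Two of the estimating approaches described above—attributing above-normal equipment wear to GWOT, and pricing a unit's reset from an inflation-adjusted historical brigade average—amount to simple arithmetic. The sketch below uses an entirely hypothetical wear baseline, reset averages, and inflation rate.

```python
# Sketch of two reset-estimate calculations: (1) wear above a normal
# baseline is attributed to GWOT, and (2) a unit's reset is priced
# from a historical per-brigade average adjusted for inflation.
# The baseline, averages, and inflation rate are all hypothetical.

NORMAL_ANNUAL_MILES = 2000  # assumed peacetime wear baseline

def gwot_attributed_wear(actual_miles):
    """Miles of wear above the normal baseline, attributed to GWOT."""
    return max(0, actual_miles - NORMAL_ANNUAL_MILES)

AVG_RESET_COST = {  # $ millions per brigade, hypothetical averages
    "heavy_brigade": 120.0,
    "stryker_brigade": 90.0,
}

def reset_estimate(brigade_type, inflation_rate):
    """Historical average reset cost adjusted for one year of inflation."""
    return AVG_RESET_COST[brigade_type] * (1 + inflation_rate)
```

Under this baseline, for instance, a vehicle driven 7,500 miles in a year would have 5,500 miles of wear attributed to GWOT.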
Revisions have been made to the tool to improve its performance, and frequent updates are made to the data used by the tool. However, a review of COST according to the best practices for cost estimation could provide decision makers with information on the extent to which the tool generates reliable estimates and identify opportunities for improvement. Our guide defines high-quality cost estimates as well documented, comprehensive, accurate, and credible. While we did not undertake a full assessment of COST against best practices, during the course of our review we identified features of COST's estimation process that meet best practices and other features that would benefit from further review. For example, COST adheres to several best practices for a comprehensive and accurate cost estimate, such as frequent updates to the structure of COST and the data that COST uses to generate estimates. While COST appears to largely encompass the types of costs that are incurred to deploy and sustain Army ground forces and their related equipment, COST may not comprehensively and accurately estimate costs for the other services. COST also relies on GWOT obligation data from DOD's Supplemental and Cost of War Execution Reports that we have identified as being of questionable reliability and that DOD is taking steps to improve. These might be areas for which a thorough review of the tool could improve the resulting estimates that are used to develop GWOT budget requests. IDA has made changes to the structure of COST, either at the request of the DOD Comptroller or the services, or on its own initiative, and has refined the tool many times over the past several years to improve functionality or performance. Recent refinements to COST included changes that provide the ability to alter the percentage of officer and enlisted personnel within a unit and the types of diagnostic and summary reports that the tool can create.
A 3-year development effort culminated in June 2007 with the release of a new version of COST supported by new software and hardware that increased functionality and performance. This new version allows the user to simultaneously develop estimates for multiple operations within the same contingency. The services and others frequently submit new information for tool updates. For example, components are asked to submit updated cost factor data prior to the development of any request for supplemental emergency funding. Additionally, the DOD Comptroller has asked IDA to review the inputs, assumptions, and processes the services used to generate COST estimates for GWOT budget requests since fiscal year 2005. The reviews revealed issues that were consistent across the services or significant enough to warrant attention in future use of COST. For example, a review found that COST users estimated an excessive use of airlift for the movement of cargo with no scheduled cargo for the return flight. Additionally, a review identified confusion regarding the use of different operational tempo factors and pay offsets. IDA consolidated these and other issues identified in the review of fiscal years 2007 and 2008 into a lessons learned briefing and checklist to assist the services as they use COST in the development of future GWOT budget requests. While DOD has taken steps to revise and update COST to improve effectiveness, COST has not been assessed according to best practices for cost estimation that define reliable, high-quality cost estimates. A review of regulations, guidance, and best practices for cost estimation and best practices established by professional cost analysts, and compiled in our Cost Assessment Guide, identified four characteristics of high-quality, reliable cost estimates. As shown in table 1, cost estimates should be well documented, comprehensive, accurate, and credible. 
DOD Comptroller officials stated they are confident in the tool’s ability to provide reasonable estimates because COST is frequently updated. However, neither DOD, nor any other entity, has assessed COST and its resulting cost estimates against these best practices. While we did not perform a full assessment of COST against best practices, during the course of our work we identified some features of COST that meet best practices for cost estimation and other areas that could benefit from further review. Well Documented: A well documented cost estimate is based on data that have been gathered from actual historical costs and technical experts, analyzed for cost drivers, and collected from primary sources. These best practices appear to be met by COST. Additionally, any adjustments made to COST’s standard settings are flagged and must be accompanied by an explanatory note that details why information was changed, and these situations are reviewed by DOD Comptroller officials. Furthermore, best practices also require that previous cost factors and data are stored after updates so that an estimation process is repeatable and can be later verified. The newest version of COST does have this capability and IDA maintains records of cost factors and other data. However, best practices also require that data used in a model should be traced back to the source documentation and any normalization steps should be documented. The services are responsible for ensuring data are reliable and neither IDA nor DOD, including the services, traces all data back to the source documents. Additionally, IDA officials stated that unusually high or low data might be removed and no record of these actions would be kept, and best practices require that these sorts of steps should be documented. Further review could reveal if these or other areas might need more work to ensure the estimate is well documented. 
Comprehensive: Estimates for personnel and operations costs developed by COST appear to meet several, but not all, of the criteria for comprehensive cost estimates. For example, the cost breakdown structure, which defines the cost elements within COST and forms the foundation of formulas within the tool, has more than three levels of detail, the structure is updated as changes occur, and each element is defined in a cost breakdown structure dictionary included in the financial management regulation for contingency operations. These steps are all considered best practices for a comprehensive cost estimate. However, our analysis of the cost breakdown structure in the financial management regulation revealed that there are some errors and ambiguities in the structure’s definitions that might allow for double counting of costs. Furthermore, while COST appears to largely encompass the types of costs that are incurred to deploy and sustain Army ground forces and their related equipment, it may not comprehensively estimate some of the types of costs incurred by the other services. For example, COST does not develop an estimate for the costs of certain Navy ground support units, such as intelligence, that are not associated with a naval fleet. Moreover, COST is not used by the Air Force to develop estimates for the cost of transporting people in non-combat situations, such as the transport of military or civilian personnel from the International Zone to a forward operating base, for example. An Air Force official stated that this is because the tool automatically assumes that any flying hour-related expenses are operational or combat-related and these costs should instead be attributed to transportation-related expenses. Further review could identify cost data or formulas within the tool that could be refined to better suit the other services or might reveal some types of costs for which COST should not be used to develop an estimate. 
Additionally, the tool does not comprehensively estimate all GWOT costs, such as procurement and many types of equipment maintenance. The GWOT cost estimate presented in appendix III illustrates the revisions made to COST results and the types of costs that were estimated outside of COST for a particular estimate that was to be comprised primarily of military personnel and operations costs. Accurate: Similarly, the tool’s estimates appear to meet some, but not all, of the best practices for accuracy. For example, an accurate estimate should be based on cost factors that reflect updates and changes. IDA does ask the components to submit updated cost factors, which are sometimes based on historical obligation data, prior to every run of the tool for a fiscal year’s GWOT request, but the components are not required to update the factors. IDA reviews the cost factor submission to ensure general consistency across years, but the services and other components that submit data are ultimately responsible for the data and are not required to validate the data prior to their inclusion into the tool. According to IDA representatives, IDA does not validate the data sources or obtain assurances that the data submitted are reliable, because this requirement is not included in its contract with DOD. The services do not validate all data submitted to IDA for updates, nor do they submit updates for all factors that need updating. Additionally, some cost factors are developed from the Cost of War reports and other data are compared against these reports as a validation check. These practices might benefit from review since our previous work has raised concerns about the reliability of reported GWOT obligation data. For example, we have reported that there is a lack of transparency over certain obligations in the Cost of War reports and we have identified inaccuracies in these reports. 
Consequently, we were unable to ensure that DOD’s reported obligations for GWOT are complete, reliable, and accurate, and believe that they should therefore be considered approximations. However, we acknowledge that DOD has taken steps to address our recommendations and the department has several initiatives underway to further improve the reliability of GWOT obligation data. Additionally, according to service officials, many adjustments to COST results for transportation and other types of costs are made to decrease estimates. This raises concerns regarding the accuracy or applicability of some data in COST. Credible: Best practices for a credible cost estimate call for sensitivity analysis, risk and uncertainty analysis, and a comparison against an independent cost estimate. Sensitivity analysis has been performed to understand cost drivers, and this type of analysis can be performed during estimate development as characteristics of the operation change. However, IDA representatives stated that risk analysis is unnecessary since the largest sources of risk stem from the changing and unpredictable nature of warfare and from policy changes that also cannot be predicted. Additionally, uncertainty analysis can assess the impact that the variability of certain unknown factors will have on resulting estimates. For example, uncertainty analysis of possible fuel price changes could result in various cost scenarios that might occur depending on future fuel prices. Finally, comparing cost results to an independent cost estimate is another best practice for a credible cost estimation process. A Joint Staff or other official will often develop a COST estimate to compare against an estimate developed by a service official to ensure the appropriate assumptions were used. A thorough assessment against best practices would reveal whether these or other areas might need more work to ensure the estimate is sufficiently credible. 
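As a sketch of the kind of uncertainty analysis described here, possible fuel-price variability can be propagated into a range of cost outcomes. The prices, quantity, and triangular distribution below are hypothetical assumptions, not DOD data:

```python
import random

random.seed(42)

def fuel_cost(price_per_gallon, gallons=1_000_000):
    # Fuel portion of an operations estimate (hypothetical quantity).
    return price_per_gallon * gallons

# Model fuel-price uncertainty with an assumed triangular distribution:
# low $2.50, most likely $3.25, high $5.00 per gallon.
draws = [random.triangular(2.50, 5.00, 3.25) for _ in range(10_000)]
estimates = sorted(fuel_cost(p) for p in draws)

# Report a range of outcomes rather than a single point estimate.
low, median, high = estimates[500], estimates[5_000], estimates[9_500]
print(f"5th pct ${low:,.0f}   median ${median:,.0f}   95th pct ${high:,.0f}")
```

Presenting the 5th-to-95th percentile spread alongside a point estimate would show decision makers how sensitive a request is to a factor the services cannot control.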
Estimating needs and costs that will occur in the future is not an exact science, and the use of cost estimation tools or historical obligation data, along with other factors, can be a reasonable means of projecting future costs. However, given the significant and ever-increasing size of GWOT budget requests, every attempt should be made to ensure that the cost estimation process is sound. COST has been used to generate hundreds of billions of dollars in budget requests for GWOT over the past several years, and the services are required to use it. While the tool and its underlying data have been refined and updated, COST’s overall effectiveness for estimating GWOT costs has not been assessed. Measuring COST against best practices criteria for documentation, comprehensiveness, accuracy, and credibility could provide additional information about the tool’s effectiveness at generating high-quality cost estimates and about steps of the process that could be further improved. Without a thorough review of COST against these best practices, decision makers within both DOD and Congress cannot be assured that estimates generated by the tool are developed using valid data and sound processes. Officials from across DOD and the services report that COST performs well for predicting budget needs for ground forces, for small-scale operations of short duration, and for situations in which detailed information is unknown. However, in light of service officials’ concerns that COST does not perform as well for their needs and might generate estimates that are too high in certain areas, it is important that DOD review the applicability of COST for all of the services or investigate ways to make the tool better suit their needs. 
To ensure that DOD budget requests for GWOT are based on a sound cost estimation process, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to arrange for an independent review of COST against best practices for cost estimation. Based on that review, and taking into consideration how each service uses COST, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to consider options for refining COST, determine the appropriate items or types of costs for which COST should be applied, and identify methods to be used when COST is not appropriate. In written comments on a draft of this report, DOD agreed with both of our recommendations for executive action. These written comments also provided examples of steps DOD has taken or plans to take that it considers responsive to aspects of our recommendations. Also, DOD provided us with technical comments, which we incorporated in the report where appropriate. DOD’s comments are reprinted in appendix IV. DOD agreed with our recommendation that the DOD Comptroller arrange for an independent review of the Contingency Operations Support Tool (COST) against best practices for cost estimation. In its comments on this recommendation, DOD agreed with the concept of an independent review of the tool against best practices and further noted that the Air Force Studies and Analyses Agency (AFSAA) conducted a review of COST’s use in developing cost estimates for the air war over Serbia. This review compared the tool’s output against the actual reported costs compiled by the Defense Finance and Accounting Service. DOD stated that the recommendations for improvement identified in the review were incorporated into the tool. 
While the review and the incorporation of its recommendations into the tool are positive steps, the review did not assess COST’s use in estimating the broad scope of costs that are associated with a large-scale campaign such as GWOT. For example, the scope of the AFSAA review of COST was limited to the 3-month air war over Serbia and primarily reviewed costs of the Air Force. Furthermore, the AFSAA review was not a thorough review against best practices for cost estimation, which requires cost estimates to be well documented, comprehensive, accurate, and credible. Therefore, we continue to recommend that an independent and thorough review of the tool against best practices for cost estimation be pursued. This type of review would include an assessment of the risk and uncertainty associated with the inputs to the tool and the accuracy of the underlying equations and data the tool relies on to estimate costs. DOD additionally stated that (1) COST’s factors, processes, and algorithms are updated as needed and the tool is updated to reflect changes to congressionally determined factors, such as military pay rates, and (2) COST relies on many of the same service-specific cost factors that are used during the development of baseline budgets. While we acknowledge in this report that factors and other aspects of the model are regularly updated, these actions are a check for accuracy by the users of the data and our recommendation specifically calls for an independent review of COST. Finally, DOD stated that, due to our recommendation, the DOD Comptroller will issue guidance that will be incorporated into DOD’s financial management regulation that includes a process for updating COST to ensure it reflects the most current budgetary assumptions and a process for evaluating the functionality of the model to determine if adjustments are needed. As this revision of the financial management regulation has not been finalized, we did not assess this planned action. 
However, this positive step, once completed, should be taken into account as part of an independent and thorough review of COST against best practices for cost estimation. DOD agreed with our recommendation that, based on an independent review, it should consider options for refining COST, determine the appropriate items or types of costs for which COST should be applied, and identify methods to be used when COST is not appropriate. In its comments, DOD stated that it uses every opportunity to refine and improve the COST model, such as the multi-level review of COST results and assumptions during the development of GWOT budget requests. Additionally, changes to COST are made based on training sessions and feedback from COST users. Finally, DOD stated that an extensive review process is in place for each non-COST line item of GWOT budget requests. While we did not assess the process DOD uses to review non-COST line items, our report acknowledges that DOD has refined and updated COST many times as information has changed and the needs of the department have evolved. However, we reiterate our view that the tool should be subject to an independent review, and COST should be further refined based on the findings of that review, as needed. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller), and the Director, Office of Management and Budget. Copies of this report will also be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Sharon Pickup at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
To assess how the Department of Defense (DOD) uses the Contingency Operations Support Tool (COST) and other processes to develop Global War on Terror (GWOT) budget requests, we reviewed and analyzed relevant documents and interviewed key DOD and service officials and representatives from the Institute for Defense Analyses (IDA). Documents that we used for our review included, but were not limited to, (1) relevant DOD directives, instructions, and memoranda related to budgeting processes; (2) DOD financial management regulations that provide policy and procedures for contingency operations; (3) DOD guidance for the preparation and submission of requests for incremental funding for GWOT; and (4) service budget office guidance for the preparation and submission of GWOT budget requests. We obtained testimonial evidence from officials representing the Office of the Under Secretary of Defense (Comptroller) and the Joint Staff regarding the processes used to develop GWOT budget requests and the role of COST in that process. Specifically, we obtained their perspectives on COST’s effectiveness and accuracy, as well as the processes employed to develop estimates for costs that are outside the scope of the tool’s estimating capabilities. We similarly interviewed key service officials in the financial management or budget office responsible for developing GWOT cost estimates for contingency operations in the Air Force, Army, Marine Corps, and Navy to understand their experiences using COST during the development of prior years’ GWOT budget requests, including strengths and weaknesses of the tool. We additionally interviewed service officials that were identified as functional experts for the types of costs that must be estimated outside of the tool, such as procurement, reset-level equipment maintenance, and intelligence needs. We discussed what processes they use to develop estimates for these types of costs. 
We attended several briefings regarding COST presented by IDA and attended training sessions on the tool. We interviewed IDA representatives about the tool and reviewed numerous briefings about COST’s use in cost estimation, the structure of the tool, and the tool’s development. Finally, in order to understand how DOD developed an actual GWOT budget request and the use of COST and other methods to develop that estimate, we asked DOD to demonstrate how an estimate was developed for a case study, which was the $6.3 billion estimate for military operations that was included as part of DOD’s October 2007 $42.3 billion amendment to the Fiscal Year 2008 GWOT supplemental request for emergency funding. We chose this example as our case study because it was a recent estimate and was assumed to be comprised primarily of military personnel and operations costs, due to the description of this estimate in the justification document for the amendment to the Fiscal Year 2008 GWOT budget request. DOD provided the initial and approved estimate for this request by service and by appropriation category. We discussed this estimate with DOD and service officials, including the assumptions and processes that were used to develop this estimate, the reasons for changes between the initial and approved estimates, and the types of costs estimated by COST or outside of COST. We did not validate the assumptions used to generate this estimate or the data DOD presented in this example. To assess what actions DOD has taken to ensure COST adheres to best practices for cost estimation, we reviewed applicable best practices and compared DOD’s efforts against those best practices we found to be consistently associated with reliable, high-quality cost estimation. We reviewed DOD guidance regarding requirements for the verification, validation, and accreditation of tools and simulations used by DOD. 
We reviewed documents obtained from DOD and IDA and had discussions with DOD officials and IDA representatives about COST; for example, we reviewed numerous briefings about COST and the training manual that documents the tool’s specifications, development, and use in detail. We interviewed DOD Comptroller and IDA representatives about improvements and updates that have been made to COST and the purpose of those updates. We obtained testimonial evidence from DOD Comptroller officials and IDA representatives about the steps to develop COST and the processes that surround the tool’s use as part of developing a GWOT budget request, and identified steps that appeared to meet certain criteria of established best practices and those that appeared to warrant further review. We did not perform a full review of COST against all best practices, but presented examples of how the tool meets certain best practices to provide some context to decision makers about what might be considered strengths of the tool and we highlighted some areas that might benefit from a full and independent review of COST. These examples are meant to serve as illustrative detail to provide more information to decision makers, but should not be considered the results of a complete, thorough, and independent review of the tool. We conducted this performance audit from July 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 

Best practices for a high-quality cost estimate:

Well documented:
- The estimate’s purpose and significance are defined, calculations and results are clearly detailed, and the choice of a particular method or reference is explained.
- Data have been traced back to the source documentation.
- A technical baseline description is included.
- All steps in developing the estimate are documented, so that a cost analyst unfamiliar with the program can recreate it quickly with the same result.
- All data sources and how the data were normalized are documented.
- The estimating methodology and rationale used to derive each cost breakdown structure element’s cost are described in detail.

Comprehensive:
- The estimate’s level of detail ensures that cost elements are neither omitted nor double counted.
- All cost-influencing ground rules and assumptions are detailed.
- The cost breakdown structure is defined and each element is described in a cost breakdown structure dictionary.

Accurate:
- The estimate is unbiased, not overly conservative or overly optimistic, and based on an assessment of most likely costs.
- It has few, if any, mathematical mistakes; those it has are minor.
- It has been validated for errors like double counting and omitted costs.
- It has been compared to the independent cost estimate for differences.
- Cross-checks have been made on cost drivers to see if results are similar.
- It is updated to reflect changes in technical or program assumptions and new phases or milestones.
- Estimates are replaced with the earned value management estimate at completion and the independent estimate at completion from the integrated earned value management system.

Credible:
- Major assumptions are varied and other outcomes recomputed to determine how sensitive outcomes are to changes in the assumptions.
- Risk and uncertainty analysis is performed to determine the level of risk and uncertainty associated with the estimate.
- The results are cross-checked, and an independent cost estimate is developed to determine if other estimating methods produce similar results. 
To better understand how the Department of Defense (DOD) develops a Global War on Terrorism (GWOT) budget request and the use of the Contingency Operations Support Tool (COST) and other methods of developing an estimate, we asked DOD to demonstrate how the services used COST and other methods to develop the $6.3 billion budget estimate for military operations that was included as part of DOD’s October 2007 $42.3 billion amendment to the fiscal year 2008 GWOT supplemental request for emergency funding. DOD justification materials for the amendment describe this $6.3 billion portion as needed to support the continued sustainment and redeployment of the five Army brigades and two Marine Corps infantry battalions that were considered part of the troop surge. This portion of the amendment was also requested to support the simultaneous deployment of combat support forces augmenting these combat forces and other costs related to the presence of the troop surge. Table 3 presents DOD’s estimate for this $6.3 billion portion of the amendment meant to fund additional operations, broken into the COST- generated portion and the portion that was estimated by other means. Typically, service budget officials develop cost estimates for GWOT, but officials stated that time constraints for this specific estimate required that Joint Staff and DOD Comptroller officials develop the initial estimate using both COST and other processes outside of COST as necessary. After discussions with service budget officials, and the September 10, 2007, testimony of the Commander of the Multi-National Force-Iraq before Congress, the estimates were revised. This is reflected in the approved estimate for each service, again broken into the COST portion and the “non-COST” portion. Each service’s estimate is further broken down into estimates for Military Personnel appropriations and Operation and Maintenance appropriations. 
As shown in table 3, the Army’s estimate of nearly $5.8 billion comprises the majority of the $6.3 billion total estimate. COST-generated estimates for military personnel or operation and maintenance did not change significantly for the Army from its initial estimate to the approved final estimate. Substantial changes were made, however, to the portions of the estimate that are outside the scope of COST. Army budget officials reported that the large decrease in requested military personnel funding was due to lower mobilization levels than originally predicted, adjusted overstrength levels, and reduced permanent change of station and subsistence costs. The COST portion for both military personnel and operations for the Army decreased slightly from the initial to the approved estimate. Army budget officials reported that the approximately $2 billion increase in operation and maintenance costs that was derived outside of COST was due to force protection and other equipment or services needed to support the troop surge and the additional operations performed by surge troops. Table 4 provides more details on these costs. Similarly, major adjustments were made to the non-COST portions of the Air Force, Marine Corps, and Navy estimates for either military personnel costs, operation and maintenance costs, or both appropriation categories, depending on the service. These changes were due to changed assumptions regarding deployment and specialized needs. For example, the Navy’s non-COST personnel estimate was increased for costs associated with permanent change of station and active duty for special work needs. The Marine Corps increased its non-COST personnel request by $163 million in anticipation of increased requirements for reserve and Individual Ready Reserve activations to active duty. The adjustments to the Marine Corps and Air Force COST-generated portions were likewise substantial. 
The increase in the COST-attributed portion of the Marine Corps’ personnel estimate reflects an anticipated increase in counterinsurgency operations in Afghanistan, while the Air Force estimate was refined to exclude a KC-10 and the Air National Guard and Air Force Reserve personnel that would accompany the KC-10. From this case study, it is clear that changing the assumptions regarding the deployment of personnel and equipment can have a substantial impact on the COST-related portion of a GWOT budget request. Also, significant changes were made to the non-COST portion of the request, which in this example increased from $775 million to about $2.1 billion, including the nearly $1 billion offset in Army personnel cost estimates. The information regarding the non-COST portion of this example reveals the significance and size of estimates generated outside the tool, even in a situation where the majority, if not all, of the costs would be assumed to be related to personnel and operations. We did not validate any of the data DOD presented in the above discussion, or any of the assumptions or other information used by DOD to develop this estimate. In addition to the contact named above, Ann Borseth (Assistant Director), Grace Coleman, Susan Ditto, Linda Keefer, Lonnie McAllister II, Lisa McMillen, Charles Perdue, Suzanne Perkins, Karen Richey, and Karen Werner made key contributions to this report.
Since the September 2001 terrorist attacks, Congress has provided about $800 billion as of July 2008 to the Department of Defense (DOD) for military operations in support of the Global War on Terrorism (GWOT). GWOT budget requests have grown in scope and the amount requested has increased every year. DOD uses various processes and the Contingency Operations Support Tool (COST) to estimate costs for these operations and to develop budget requests. GAO assessed (1) how DOD uses COST and other processes to develop GWOT budget requests and (2) what actions DOD has taken to ensure COST adheres to best practices for cost estimation. GAO interviewed DOD officials and others to determine how the services develop GWOT budget requests using COST and other processes. GAO also used its Cost Assessment Guide as criteria for best practices for cost estimation. The services use COST as part of their process to develop a GWOT budget request. While the Army relies more on the estimate resulting from COST, the other services adjust the results of COST to reflect estimates they generate outside of COST, based on historical obligation data and other information. DOD's financial management regulation and other guidance require components to use COST to develop an estimate for the deployment and sustainment of military personnel and equipment for ongoing operations in support of GWOT. While all services use COST to develop an initial estimate, Air Force, Marine Corps, and Navy budget officials alter the results of the tool to match information provided by lower level commands and historical obligation data that they believe are more accurate than the COST-generated estimate. These officials stated that the tool routinely overestimates some costs and therefore most changes made are decreases in the amount estimated by COST. 
These officials believe that the requirement to use COST to develop a GWOT budget request is a duplicative process to their preferred method of using historical obligation data and other information better suited to their specific service. For example, they stated that COST better represents the needs of Army ground forces and the tool has not been refined to be as effective for estimating needs for their service's mission. These officials also mentioned that COST is better suited for developing estimates for smaller-scale contingency operations than for the lengthy deployments and sustainment phases associated with a large campaign such as GWOT. To develop estimates for items that are outside the scope of COST, such as procurement and certain contracts, the military services rely primarily on needs assessments developed by commanders and historical obligation data. DOD has taken steps to improve the performance and reliability of COST; however, COST could benefit from an independent review of the tool's adherence to best practices for high-quality cost estimation as described in GAO's Cost Assessment Guide. COST has been refined many times and cost factors are routinely updated in an effort to use the most current information available to develop an estimate. DOD officials stated they are confident in the tool's ability to provide reasonable estimates because COST is frequently updated. However, COST has not been assessed against best practices for cost estimation to determine whether COST can provide high-quality estimates that are well documented, comprehensive, accurate, and credible. While GAO did not undertake a full assessment of COST against best practices, it determined that some features of the tool meet best practices while other features would benefit from further review. 
For example, the tool adheres to several best practices for a comprehensive and accurate cost estimate, such as frequent updates to the structure of COST and the data the tool uses to generate estimates. However, COST relies on GWOT obligation data that GAO has identified as being of questionable reliability. A thorough, independent review of COST against best practices could provide decision makers with information about whether the tool creates cost estimates for GWOT expenses that are well documented, comprehensive, accurate, and credible.
In 1968, the Congress added Section 242 to the National Housing Act establishing the Hospital Mortgage Insurance Program. In considering this amendment to the National Housing Act, the House Committee on Banking and Currency cited a serious shortage of hospitals and the need for existing hospitals to expand and renovate. Private lenders seemed reluctant to provide capital financing at reasonable terms. The purpose of the program is to “assist the provision of urgently needed hospitals for the care and treatment of persons who are acutely ill . . ..” Consequently, Section 242 authorized HUD to provide insurance for hospital mortgages secured from lenders to finance the construction and renovation of hospitals. Many hospitals need to borrow money from lenders to finance construction and renovation projects. Lenders often raise capital by selling bonds to investors and use the hospitals’ mortgage payments to pay bondholders. Mortgage insurance, like private bond insurance, guarantees that bondholders will be paid if the hospital stops making payments on its loan. According to the Health Care Financing Study Group, about 60 percent of hospitals that seek financing require insurance to enhance their credit because they cannot get a loan on their own financial strength. Eighty-three percent of these hospitals can get private bond insurance but about 17 percent cannot because private insurers consider them too risky. Some hospitals that cannot get private mortgage insurance apply to FHA’s hospital insurance program. FHA’s Hospital Mortgage Insurance Program staff and HHS’ Division of Facilities Loans staff jointly manage the hospital program. The Congress gave HUD statutory responsibility for the program. The House Committee on Banking and Currency, in recommending that HUD be given this responsibility, cited FHA’s more than 35 years of experience with promoting housing construction through its housing insurance programs. 
The Committee was concerned, however, that HUD’s staff did not have specialized knowledge of health care needed to administer this program. As a result, the Committee recommended and the Congress enacted the requirement that a state agency must certify that a hospital is needed before it can participate in the program. Also, the Committee expected HUD to draw upon HHS’ hospital expertise to devise standards for insuring hospitals’ mortgages. Through a memorandum of agreement, HUD formally delegated authority to HHS to review and approve proposals for hospitals’ mortgage insurance. HUD retained authority to make the final insurance commitment and endorse the mortgage note. The Hospital Mortgage Insurance Program requires hospitals to have the state certify the need for the proposed projects and then meet underwriting criteria before insurance applications can be approved. Since 1988, hospitals have obtained FHA insurance approval to construct acute care facilities, ambulatory care centers, and operating rooms and to renovate maternity and emergency departments and surgical suites. In addition, hospitals have obtained approval to purchase equipment, install new computer and fire alarm systems, and build parking facilities. The use of hospital inpatient services, however, has declined over time. Current trends indicate a greater focus on cost containment and delivering health care on an outpatient basis. Overall, the financial performance of the hospital program has reflected a net positive cash flow from operations over the past 25 years, according to HUD data. However, in several years, the program has experienced financial losses. The bulk of the losses occurred between 1989 and 1991, when HUD had to pay lenders about $147 million because of hospital defaults. 
The concentration of insured loans in New York in the program’s current portfolio, changes in state policies, trends in the health care market, and the probability of future changes in federal health care policies pose risks that may threaten the future stability of the program. Two reasons given in a 1992 HUD study for why some hospitals defaulted on their loans were changes in the policies and practices of state and local governments and changes in Medicare and Medicaid reimbursement. The hospital program has made a positive net contribution of $221 million to HUD’s General Insurance Fund, even though there have been years with negative cash flows (see fig. 1). Information obtained from FHA shows that from fiscal year 1969 through 1994, FHA collected $370 million in premiums and fees and paid $200 million in insurance claims and $13 million in salaries and other administrative expenses. FHA recovered about $64 million of claim payments from mortgage payments and the sale of the mortgages or properties. As of September 30, 1994, 19 hospitals had defaulted; FHA disposed of 10 and retained loan management responsibility for the remaining 9 hospitals. For these 9 hospitals, the total unpaid principal balance is $108 million and accrued delinquent interest is $44 million. (See app. I for a description of the hospital program’s financial performance from fiscal year 1969 through 1994.) As of August 1995, the hospital program portfolio comprised 100 projects in 18 states and Puerto Rico (see fig. 2). The portfolio has an aggregate unpaid principal balance of about $5 billion. (See app. II for individual unpaid principal balances of FHA-insured hospital projects, by state.) The majority of the hospital program projects, 63 percent, are in New York. The unpaid principal balance on mortgages for these projects is about $4.2 billion or 87 percent of the portfolio’s aggregate unpaid principal balance. 
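The $221 million net contribution reconciles directly with the cash-flow figures above (amounts in millions of dollars, fiscal years 1969 through 1994):

```python
# Figures in millions of dollars, from the FHA data cited above.
premiums_and_fees = 370  # premiums and fees collected by FHA
claims_paid = 200        # insurance claims paid to lenders
admin_expenses = 13      # salaries and other administrative expenses
recoveries = 64          # claim payments recovered from mortgages and sales

net_contribution = premiums_and_fees - claims_paid - admin_expenses + recoveries
print(net_contribution)  # 221 -- the net contribution to the General Insurance Fund
```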
Also, 9 of the 10 largest hospital mortgages are in New York. These mortgages account for about $2.4 billion or 50 percent of the portfolio’s total unpaid principal balance. Included in these mortgages is a $591 million loan, the largest single loan amount FHA has insured in the history of the program. Since 1988, 17 of the 20 projects that FHA insured have been for New York hospitals. In addition, as of August 1995, 6 of the 10 mortgage insurance applications under review by HHS and FHA were for projects in New York. The hospital program has become a major financing vehicle for many New York hospitals. Several officials stated that New York hospitals rely on FHA mortgage insurance, in part, because the state’s reimbursement system hinders hospitals’ ability to access capital in the private market. “New York’s restrictive reimbursement system makes it the most regulated nationwide,” according to a Moody’s Investors Service report. Except for Medicare, New York utilizes an all-payer fixed rate system to reimburse hospitals. The state controls all third-party payers’ rates of payments by setting a fixed payment for each hospital based on patient diagnoses. The rate-setting system is a regulatory method of budgeting for hospitals. The goals of the rate-setting system are cost containment and access to hospital care. However, New York state officials said that this system constrains hospitals’ profitability, which weakens their creditworthiness. According to a Moody’s Investors Service report, New York hospitals’ credit ratings are the weakest in the nation. In other states, hospitals’ credit ratings are generally stronger, which enables many of them to access capital in the private market. These hospitals primarily rely on bond financing backed by their revenues and projected ability to make loan payments or by commercial bond insurance instead of FHA’s Hospital Mortgage Insurance Program. 
In contrast, private insurers are reluctant to back bond sales to finance some New York hospital projects because the hospitals are considered too risky. The lack of portfolio geographic diversification and the large individual unpaid loan balances in New York pose a risk to the program. The concentration of the portfolio in New York makes the program susceptible to New York policies and other factors specific to the state. The strength of a portfolio lies in its diversity because portfolio diversification decreases the risk from losses. In addition, a single default of a large loan could lead to insurance claims that could significantly burden the program. A 1992 HUD report stated that the concentration of FHA-insured projects in a single state and large loan amounts are major controllable risks to the program that should be avoided or minimized. FHA does not limit the number of projects in a particular state nor does it cap individual loan amounts it insures as a means of controlling risks to the program. The legislation authorizes the Secretary of HUD to set the terms and conditions under which HUD will insure projects, but the law does not specifically authorize FHA to limit the number of projects accepted into the program from a geographic area or to limit the loan amounts it insures. In fact, in 1974, the Congress removed existing caps on loan amounts. FHA officials stated that they are taking action to diversify the portfolio by marketing the program to attract hospitals from other states. For example, FHA officials reported working with mortgage bankers to increase program awareness to hospitals outside New York. They reported that, as of August 1995, they had received four applications from hospitals in Illinois, New Jersey, Pennsylvania, and Puerto Rico. By expanding the portfolio, FHA also increases the program’s total outstanding mortgage amount. 
Officials involved in the financing of hospital projects told us that hospitals in other states may not be interested in the FHA program for several reasons, including the program’s high premiums, lengthy application process, and a lack of program awareness. For some future hospital projects, FHA is considering ways to reduce the risk of financial losses. For example, FHA is considering a proposal to establish risk-sharing arrangements with the public and private sector. According to FHA officials, the risk-sharing partner would assume underwriting responsibilities, have an equity position in the hospital, and share in any losses that result from defaults. In an October 1993 report, we noted that HUD terminated FHA’s multifamily housing coinsurance program in January 1990. The program enabled FHA to share the risk of insuring a multifamily mortgage with participating lenders. However, problems with the program resulted from deficient conceptual design and failures in administration. Changes in state health care policies that reduce hospitals’ revenues can negatively affect the financial stability of hospitals, particularly the financially weaker hospitals in FHA’s hospital program. Recent changes in New York’s Medicaid policy would reduce hospitals’ patient revenues and could increase program hospitals’ risk of default. The New York state fiscal year 1996 budget contains health care cost-cutting measures that are estimated to reduce state Medicaid hospital spending by $138 million, resulting in an estimated total hospital revenue loss of $553 million. State analyses of the reduction in Medicaid spending for individual hospitals estimate that FHA-insured hospitals will lose $170 million in Medicaid revenue. Also, individual program hospitals may lose between 0.31 percent and 4.25 percent of total revenues. Some New York hospitals’ already marginal operating margins may deteriorate further as a result of the loss in Medicaid revenue. 
Our analysis of 1994 Health Care Financing Administration data for 52 program hospitals in New York indicates that 49 had negative operating margins. The average operating margin for the 52 hospitals was –5.6 percent. Our analysis shows that, on average, operating margins for the 52 hospitals would deteriorate by 26 percent in 1 year because of the state’s reduction in Medicaid spending. Thus, the ability of some of these hospitals to absorb the cuts and possible future state Medicaid spending reductions without defaulting on their FHA-insured loans is questionable. In the past, state policy changes have precipitated hospital defaults. For example, three hospital defaults in Illinois resulted in a $27 million loss to the program. According to a 1992 HUD report, two of these defaults were caused, in part, by the state setting a Medicaid reimbursement rate that was too low to cover the hospitals’ cost of treating Medicaid patients or the state delaying Medicaid reimbursement to hospitals. The extent to which New York hospitals are able to reduce expenses will affect their ability to withstand revenue losses. According to FHA, HHS, and New York health care officials, hospitals are expected to reduce expenses and implement revenue enhancers to mitigate Medicaid revenue losses and remain viable. Hospitals with large Medicaid caseloads are particularly vulnerable to reductions in Medicaid spending. Our analysis of 1994 data from 52 New York program hospitals shows that for about one-third of the hospitals, Medicaid inpatient days accounted for more than 25 percent of total inpatient days. Plans developed by New York program hospitals to respond to the state’s Medicaid cuts include cost-containment measures, such as reducing staff, salaries, and benefits, and revenue-enhancement measures, such as decreasing the length of stay and increasing admissions.
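The margin arithmetic above can be illustrated with a simple sketch. The –5.6 percent average operating margin and the 26 percent deterioration are figures from this report; the baseline revenue and the size of the Medicaid cut below are assumed for illustration only:

```python
# Hypothetical hospital: baseline revenue is assumed; the -5.6 percent
# average operating margin is the report's figure for 52 New York
# program hospitals.
revenue = 100.0                    # assumed annual revenue, $ millions
margin = -0.056                    # report's average operating margin
expenses = revenue * (1 - margin)  # implied expenses: 105.6

# Assumed Medicaid revenue loss, within the report's 0.31-4.25 percent
# range of total revenues; expenses are held fixed.
revenue_loss = 1.36
new_revenue = revenue - revenue_loss
new_margin = (new_revenue - expenses) / new_revenue

# Relative deterioration of the (already negative) margin.
deterioration = (new_margin - margin) / margin
print(round(new_margin, 4), round(deterioration, 2))
```

With these assumed values, a revenue loss of under 1.4 percent, with expenses held fixed, worsens an already negative margin by about a quarter, which is the mechanism behind the 26 percent figure.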
Hospital and hospital organization officials reported that some hospitals had already begun taking cost-cutting measures before the budget decision was made. In reaction to the cuts, FHA required New York hospitals awaiting application approval to submit sensitivity analyses on the impact of the cuts. In addition, HHS required New York program hospitals to submit an action plan for responding to the cuts. After evaluating the hospitals’ responses, FHA and HHS increased their monitoring efforts for those hospitals identified as most vulnerable to the cuts. In addition to changes in state policies, future changes in federal health care policies can also restrict hospitals’ revenues. For example, the Fiscal Year 1996 Congressional Budget Resolution proposes cumulative Medicare reductions of $270 billion from current law projections over the next 7 years. In addition, the Budget Resolution proposes reducing Medicaid outlays by about $180 billion. As the congressional debate on deficit reduction continues, other proposals for containing the cost of federal health care spending on Medicare and Medicaid could surface. Changes in the delivery of health care can adversely affect the viability of hospitals that do not take action to successfully control costs and compete in the marketplace. One major shift in the way health care is delivered is the change from a focus on hospital inpatient care to outpatient care. From 1983 through 1993, there were 5.4 million or 15 percent fewer community hospital admissions nationwide. Over the same period, the average length of stay for patients admitted to hospitals declined from 7.6 to 7.0 days. American Hospital Association (AHA) data show for the same 10-year period that hospital occupancy rates declined by 10 percent and 522 community hospitals closed—a decline of 9 percent. In contrast, the increase in hospital outpatient visits was even more dramatic than the decline in inpatient use.
Community outpatient visits increased about 75 percent over the 10-year period. This change in outpatient volume reflects an overall restructuring of the health care delivery system. Some of the factors driving the trends in health care include advances in technology that allow more care to be delivered in outpatient settings; changes in reimbursement incentives, such as the introduction of diagnosis-related groups under the prospective payment system in the early 1980s; and the growth of enrollment in managed care health plans. As these trends continue, the need for hospital acute care beds will continue to decline. Health care association representatives cite managed care as a significant trend facing some hospitals. Because of the increased enrollment in managed care plans, hospitals that cannot become a part of a managed care network or compete in this environment stand to suffer financially from a loss of market share. Understanding the overall impact of these health care trends on the future need for the program would require further analysis, which was beyond the scope of this review. Any such analysis would have to consider, at a minimum, (1) the characteristics of program hospitals compared with nonprogram hospitals accessing capital, (2) the ability of program hospitals to obtain financing on the private market without FHA mortgage insurance, (3) the costs and benefits of the program, including the public good that the program serves, and (4) the program’s underwriting criteria and premium structure. The growth of managed care in New York can negatively affect some FHA-insured hospitals’ financial condition and, as a result, increase the risk of financial loss to the insurance program. In 1993, the penetration of managed care plans in New York was more than 24 percent. Also, there is a push in the state for the adoption of mandatory Medicaid managed care.
Managed care emphasizes cost and utilization controls, including avoiding unnecessary admissions and lengthy stays. However, few New York hospitals have experienced managed care pricing and utilization controls. New York hospitals may be at a disadvantage in a managed care market because they generally have long lengths of stay. In addition, according to a Moody’s Investors Service report, “in a managed care market where the key variable is cost, the generally high-cost urban teaching facilities which are disproportionately located in New York, will definitely be at a disadvantage.” In addition, these hospitals have large teaching and research costs and significant fixed costs tied to their large physical plants and debt loads. The potential effect on teaching hospitals can be important to the program because, according to FHA data, the program insures 44 teaching hospitals, of which 34, or 77 percent, are in New York. Hospitals that reduce costs and develop cooperative relationships with other health care providers may be able to mitigate the negative financial impact of managed care. Some program hospitals in New York and other states are affiliating and forming networks with other health care providers to reduce costs and increase service area. For example, one hospital reduced costs by establishing an affiliate in which financial and support services were consolidated and shared within its provider network. In addition, several hospitals reported affiliating with community hospitals and physician groups, as well as developing satellite clinics to broaden their patient base. An HHS official stated that, in reviewing hospitals’ applications, HHS considers whether the hospitals are preparing for managed care and addressing other health care trends. In addition, according to an HHS official, HHS examines affiliate contracts and ensures that the contracts are not a drain on the hospitals’ finances.
Also, program hospitals are required to obtain FHA approval for some mergers and affiliate transactions. FHA officials also reported that FHA consultants consider health care trends in their review of hospitals’ applications. FHA’s loan loss reserve estimate of $458.25 million, as of September 30, 1994, is not reliable because of weaknesses in the methodology that FHA used to calculate the estimated loan losses. The assumptions that FHA used to estimate key variables, such as default probabilities and actual loss rates, were not directly linked to or justified by a detailed documented analysis of loss exposure in the hospital mortgage insurance portfolio. In an October 1994 report, we discussed this principle as it applies to depository institutions. Further, FHA’s methodology did not incorporate some health care market trends that are likely to affect the future financial performance of program hospitals. The net effect of the methodological flaws on the reserve estimate is unclear because FHA’s default assumptions and its exclusion of market trends could overstate or understate the loan loss reserve estimate. In estimating loan loss reserves, FHA—which is subject to the Government Corporation Control Act—is required to follow generally accepted accounting principles (GAAP) for financial statement reporting purposes. However, in our October 1994 report, we stated that this authoritative accounting guidance, established for private sector institutions, does not provide sufficiently detailed direction for establishing loan loss reserves. As a result, our evaluation of the methodology used by FHA is based on this general GAAP principle for loss recognition and our experience in applying other principles in other situations involving the estimation of loan loss reserves.
FHA’s assumptions regarding default probabilities and loss rates were not supported by analysis of the loss exposure of each individual insured loan or other evidence that justified the estimates used. Specifically, FHA computed the probability of each program hospital appearing on HHS’ Credit Watch List and then used these probabilities as proxies to measure the default probability of each hospital in the portfolio. The probability of a hospital being on the Credit Watch List, however, is not a valid proxy for estimating the default probabilities for the entire portfolio because a hospital appearing on this list is a more common occurrence than a hospital defaulting. HHS’ data show that from 1984 to 1994 there were on average 167 hospitals in FHA’s portfolio. During this period, 16 hospitals (or 9.6 percent) defaulted on their loans and 82 hospitals (or 49 percent) appeared on the Credit Watch List. HHS data indicate that the majority of the default probabilities that FHA used to calculate the loan loss reserve were higher than the actual default rate of hospitals in the program. FHA’s approach for measuring default probabilities resulted in estimates of program hospitals’ default probabilities that ranged from about 3 to 80 percent, with the majority in the 10 to 40 percent range. However, FHA’s approach may have underreserved for loans that have high default probabilities because FHA did not consider the full unpaid principal balance when applying the loss percentages. Moreover, FHA’s use of the Credit Watch List overstates the hospitals’ default probabilities for loans less likely to default. FHA officials reported that they preferred to use the Credit Watch List as an indicator of the probability of default because, in their view, the Credit Watch List provides a prospective approach to estimating defaults.
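The mismatch between Credit Watch List incidence and actual defaults can be seen directly from the HHS figures cited above (1984 to 1994 averages). A minimal check:

```python
# HHS figures cited in the report, 1984-1994: average portfolio size,
# hospitals that defaulted, and hospitals that appeared on the
# Credit Watch List.
avg_hospitals = 167
defaults = 16
watch_list_hospitals = 82

default_rate = defaults / avg_hospitals                  # actual default incidence
watch_list_rate = watch_list_hospitals / avg_hospitals   # Watch List incidence

# The Watch List rate is roughly five times the default rate, which is
# why using Watch List probabilities as a default-probability proxy
# overstates default likelihood for most of the portfolio.
print(round(default_rate, 3), round(watch_list_rate, 2))
```
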
Regarding the loss rates, FHA applied percentages that were in some instances arbitrarily set and not linked to documented evidence of the individual insured loan’s likely losses. For example, FHA assigned the historical average loss rate of 70 percent to the hospitals it predicted were most likely to default on their mortgages (that is, hospitals with estimated default probabilities of 50 percent or more) and graduated the loss rate downward for hospitals that had estimated default probabilities lower than 50 percent. The 70-percent loss rate was based on losses HUD experienced from the sale at foreclosure or property disposition of eight of the nine hospital mortgages taken into inventory and sold since 1974. However, a better method for estimating the loan loss reserve would be to perform a comprehensive analysis of the individual loss exposure for defaults considered probable—hospital loans with 50 percent or higher default probabilities. This entails not only reviewing the financial condition of the hospital, which FHA did, but also considering other factors, such as the likelihood of foreclosure versus FHA continuing to carry the loan. Further, FHA had no justifiable basis for the loss rate percentages applied to the hospitals that had default probabilities lower than 50 percent. FHA’s rationale was that in the future it could recover more from disposing of hospitals with default probabilities below 50 percent because these hospitals are considered to be stronger financially, based on the hospitals’ financial condition in 1994. FHA arbitrarily assumed that these hospitals would default later and have a higher value at the time of sale because they would have a broader patient base and higher net patient revenue. We question the validity of these assumptions because FHA provided no analysis to support the loss rates applied to hospitals with a lower than 50 percent probability of default.
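A minimal sketch of the two reserving approaches discussed above, using hypothetical loan balances and default probabilities. The 70 percent loss rate for high-probability loans is the report’s figure; the graduated lower rates stand in for FHA’s undocumented downward adjustments:

```python
# Hypothetical portfolio: (unpaid principal balance in $ millions,
# estimated default probability, assigned loss rate).  The 0.70 loss
# rate mirrors the report's historical average for loans with default
# probabilities of 50 percent or more; the lower rates are assumed
# stand-ins for FHA's graduated downward adjustments.
loans = [
    (120, 0.60, 0.70),  # default considered probable
    (80,  0.30, 0.50),  # assumed graduated loss rate
    (50,  0.10, 0.35),  # assumed graduated loss rate
]

# FHA-style estimate: balance x default probability x loss rate for
# every loan, regardless of how likely default is.
fha_reserve = sum(upb * p * rate for upb, p, rate in loans)

# Alternative reflecting the report's reading of GAAP: once default is
# considered probable (p >= 0.5), reserve against the full unpaid
# principal balance rather than a probability-weighted share of it.
full_exposure_reserve = sum(
    upb * rate if p >= 0.5 else upb * p * rate
    for upb, p, rate in loans
)
print(round(fha_reserve, 2), round(full_exposure_reserve, 2))
```

With these assumed numbers, probability-weighting the high-risk loan reserves 50.4 rather than 84 for it, a shortfall of about 40 percent, which illustrates the underreserving effect the report describes.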
Because FHA had no basis for the loss rate percentages used for these categories of loans, it may be misstating the loan loss reserve estimate. FHA’s loan loss reserve methodology also did not incorporate emerging developments, such as health care market trends, that can affect the future financial condition of program hospitals. For example, by omitting analyses of the potential impact of managed care, the loan loss reserve did not reflect developments that can affect program hospitals’ revenues. A reduction in revenue related to managed care could result in program losses. Overall, FHA’s exclusion of health care market trends from its methodology may have understated or overstated the loan loss reserve estimate, depending on the impact that a specific market trend has on program hospitals. While FHA officials acknowledged the importance of health care trends, they stated that they had not developed an approach to incorporate such factors into their analysis. HUD’s mission is broad enough to encompass the purpose of the hospital program. HUD’s overall mission includes increasing opportunities for housing and community development and, through FHA, providing mortgage insurance for construction projects. The purpose of the program is to assist with providing for urgently needed hospitals. In the report supporting the establishment of the hospital program, the House Committee on Banking and Currency cited FHA’s experience with promoting construction through its insurance programs. Subsequently, the Congress made providing mortgage insurance for hospital construction a part of HUD’s mission by giving the department statutory responsibility for the program.
HUD officials reported that through FHA the program supports the department’s mission because it (1) provides an opportunity for hospitals to obtain financing for construction and renovation projects that they may not otherwise obtain in the private market and (2) promotes one of the department’s goals of economic lift by increasing employment, economic development, and neighborhood stabilization. The program also has as one of its specific goals promoting neighborhood stability and economic lift. Although FHA officials believe that the hospital program is consistent with HUD’s mission, the extent to which the program accomplishes the department’s goals and thereby supports its mission is not routinely measured. For example, HUD does not measure the extent to which local employment increased as a result of the program or the effect an insured project had on stabilizing a community. Performance measurement data would be useful for HUD to determine the strategic importance of the program to its mission and to evaluate the extent to which program benefits or outcomes outweigh program risks. Although no legal requirement for performance measurement previously existed, the Government Performance and Results Act (GPRA) of 1993 requires federal agencies to submit a strategic plan to the Congress in the fall of 1997 and an annual performance plan in fiscal year 1999. In response to GPRA requirements, HUD officials stated that HUD established performance measures for some of its major programs. These measures include increasing the number of first-time home buyers and increasing benefits to low- and moderate-income home buyers. However, for the hospital program, HUD officials stated that the agency has not developed performance measures, in part, because of the program’s relatively small size and HUD’s lack of data systems to track specific performance measures. FHA has limited health care expertise to independently manage the program.
FHA’s headquarters staff has overall responsibility but shares program responsibilities with HHS staff because of HHS’ experience with hospitals and health care. Managing the program requires, in part, (1) familiarity with health care regulations, insurance practices, reimbursement systems, and trends; (2) an understanding of the indicators of a hospital’s financial condition; and (3) knowledge of the unique construction guidelines that apply to hospitals. According to a 1992 HUD report, HHS has staff with skills and experience in business administration, financial analysis, and accounting in the health care industry, as well as architects and engineers who specialize in overseeing the construction of health care facilities. The majority of the tasks related to managing the initial phases of the program’s loan cycle—loan development and management—have been delegated to HHS. FHA has primary responsibility for managing the latter stages of the program’s loan cycle—loan assignment and property disposition (see app. III for a description of each agency’s responsibilities during the phases of the loan cycle). A 1992 HUD report shows that FHA and HHS’ efforts to manage the program have produced mixed results. The report raised some concern about their past performance in loan development and management and the management of assigned loans and disposition of HUD-owned hospitals. However, the report concluded that, for the most part, HHS staff had done a good job and HUD’s staff was getting more involved and gaining experience in working with troubled hospitals. As agreed with your staff, our review did not include an evaluation of FHA and HHS’ performance in program management. The hospital and finance agency officials we interviewed raised concerns about the length of time it takes to get mortgage insurance applications and loan modifications approved by HHS and FHA. 
Our analysis of 12 loan applications approved since September 1990 shows that the average time from the date an application was first submitted to HHS to FHA’s final approval was more than 18 months. In contrast, a Price Waterhouse study reported that private insurers approve mortgage insurance applications for health facilities in 2 to 4 weeks. In addition, according to HUD’s 1992 report, the median timeframe for selected modification approvals was more than 9 months. Several hospital and finance agency officials said that the application and loan modification processes are lengthy primarily because of the number of offices involved in reviewing the applications. FHA and HHS officials attribute some of the delay to hospitals not responding to their questions in a timely manner. The lengthy approval processes may hinder hospitals’ ability to take advantage of favorable market interest rates, several officials said. One hospital reported that it had to pay an additional 65 basis points on its interest rate because of the time that elapsed between HHS’ recommendation to approve the application and FHA’s final approval. FHA recognizes that the approval processes are lengthy; FHA officials stated that a reasonable goal for approving applications is 6 months. FHA and HHS recently initiated efforts to streamline the application process. These efforts include using a team approach to analyze applications and involving FHA’s field staff earlier in the process. However, FHA officials stated that their approval timeframes will likely never match those of private sector insurers because the hospitals that FHA insures are financially weaker and require closer screening and evaluation. Although the hospital program had made a positive dollar contribution to the General Insurance Fund as of fiscal year 1994, the accumulation of more than $4 billion of insured projects and the large loan amounts in New York pose risks to the future stability of the program.
The continued buildup in New York may further exacerbate this risk. Further, trends in health care and changes in state and federal health care policies that reduce hospitals’ revenues will affect program hospitals. FHA officials are aware of the risks of concentration and health care changes associated with the current portfolio. Portfolio concentration is a controllable program risk for the future. But the law that authorizes the Secretary of HUD to set the terms and conditions under which HUD will insure projects does not specifically authorize FHA to limit the number of projects accepted into the program from a geographic area or to cap the loan amounts it insures as a means of diversifying the portfolio. Health care trends and changes in health care policies are risks beyond FHA’s control. Hospitals currently in the FHA program must make adjustments to respond to these changes or they could suffer significant financial losses. To reduce the potential financial losses associated with future insured mortgages, FHA is considering risk sharing with the public and private sectors. However, the risk to the current portfolio remains. Flaws in FHA’s methodology for estimating loan losses limit the reliability of FHA’s loan loss reserve estimate. The implications of health care trends for program hospitals were not factored into FHA’s methodology for estimating potential loan losses. In addition, the approach that FHA used to determine default and loss rate assumptions was not reliable. FHA did not consider the full loss exposure in estimating reserves for hospitals that it identified as having high default probabilities. As a result of these flaws, the loan loss reserve estimate could be understated or overstated. While FHA has developed performance measures for some of its major programs in response to GPRA, it has not developed performance measures for the hospital program. Performance measures would help HUD evaluate the program’s effectiveness.
Given the risks associated with the portfolio’s geographic concentration and the possible implications for the program of current health care trends, the Congress may wish to explore further with HUD officials options for reducing the program’s risk by, for example, limiting the program’s risk exposure in a particular state and capping mortgage insurance amounts. To improve the reliability of FHA’s loan loss reserve estimate, ensure future compliance with federal performance measurement requirements, and minimize potential financial losses from future projects, we recommend that the Secretary of HUD (1) perform a comprehensive analysis of individual loan loss exposure when default is considered probable, link the loan loss reserve estimate to documented analyses that justifiably support loss rates and default percentages, and consider emerging developments, such as health care trends and policy changes, that can affect the performance of loans in estimating loan loss reserves; (2) develop performance measures and begin collecting the data needed to track the performance of the Hospital Mortgage Insurance Program; and (3) pursue risk-sharing arrangements in which a private or public entity would share in potential financial losses from hospital defaults on future FHA-insured projects only after a thorough evaluation of the benefits and drawbacks of risk-sharing ventures, taking into account past experiences of FHA’s multifamily housing programs. On November 22, 1995, we provided a draft of this report to HUD and HRSA for comment. Although HRSA did not provide comments, HUD generally agreed with the report’s findings and conclusions.
In response to our recommendations, HUD reported that it will (1) incorporate additional data on market trends and health care policy changes into FHA’s loan loss reserve methodology as such data become available and can be quantified; (2) develop and implement performance measures for the program in fiscal year 1997; and (3) conduct front-end risk analysis and incorporate multifamily’s risk-sharing experience into its plans for the hospital risk-sharing program. (See app. V.) HUD did not, however, concur with our evaluation of its 1994 loan loss reserve methodology. Contrary to what we concluded, HUD stated that it (1) used the financial position of the hospitals, not their appearance on the Credit Watch List, to predict the probability of default; (2) based its loss rates on a review of all losses incurred in foreclosure or property disposition sales since the beginning of the program; (3) considered the full unpaid principal balance in estimating the loan loss reserve; and (4) included health care market trends through its analysis of the current financial condition and trends in the financial condition of individual hospitals. HUD’s comment that FHA used the financial condition of the hospitals, not appearance on the Credit Watch List, to predict probability of default is inconsistent with the documentation that FHA provided on the method used for estimating the program’s loan loss reserves. FHA’s documentation states that financial indicators “were used to predict the probability that a hospital would appear on HHS’ Watch List.” FHA averaged the probabilities estimated by these indicators to convert “the predictors of appearance on the Watch List to a likelihood of default.” Further, as stated in the report, our review of HHS data showed that the majority of default probabilities that FHA used were higher than the actual default rate of hospitals in the program. Clearly, FHA did not adjust the predicted probabilities of default for this difference.
Regarding the loss rates, HUD commented that FHA’s analysis was based on all losses incurred in foreclosure and property disposition since the inception of the program. HUD also stated that the loss rates were adjusted downward for mortgages with probabilities of default lower than 50 percent, based on the assumption that hospitals in a better financial condition would be worth more at foreclosure. As discussed in our report, the 70-percent average loss rate that FHA used for hospitals with high default probabilities was based on actual losses experienced in the foreclosure or property disposition of only eight mortgages taken into inventory and sold since 1974. Thus, FHA’s historical analysis rested on too small a sample to be statistically reliable and drew on information that was not adjusted for current real estate market trends. We believe that FHA’s use of this historical analysis to determine loss reserves for loans where default is considered more likely than not (that is, hospital loans with 50-percent or higher default probabilities) may overstate or understate the reserves on these loans. We believe that individual loan analysis of mortgages in the current portfolio provides a more accurate means of measuring loss exposure on loans where default is considered more likely than not. Although, as a matter of generally accepted practice, using historical data may under some circumstances be appropriate for groups of loans with a lower than 50-percent default probability, FHA arbitrarily adjusted a questionable 70-percent loss rate downward for such loans and provided no supporting analysis to justify the resultant loss rates. We believe that this analysis was inappropriate for this group of loans with lower default probabilities. Therefore, these loss rates do not provide a reliable basis for estimating FHA’s reserves.
With respect to accounting for the full unpaid principal balance in estimating potential losses, HUD stated that it “multiplied the full unpaid principal balance by the probability of default and then by the loss rate—a standard approach to factoring the probability of default into a loss estimate.” However, this approach has the effect of reducing the unpaid principal balance. Proper application of GAAP requires that 100 percent of the unpaid principal balance be used for reserving purposes when default is more likely than not to occur. Including default probabilities in the reserve calculation may be appropriate for loans where default is not considered more likely than not, but once that threshold has been crossed, the full amount of the loan balance should be considered in calculating the loss estimate. HUD stated that its methodology reflected current health care market trends. We agree that some health care market trends may be reflected in hospitals’ financial statements. However, some rapidly evolving health care market trends, such as managed care, may not be reflected in the hospitals’ financial statements that HUD uses because of the time lag in financial reporting. FHA’s loan loss reserve methodology does not include a mechanism to identify and adjust for such trends. Historical trends should be adjusted to reflect changes in economic and business conditions, such as managed care, in order to provide a reasonable estimate of current loss exposure. Data on hospitals’ utilization rates may be used in analyzing health care trends. HUD’s comments on several other issues did not accurately reflect the information presented in our report. For example, HUD commented that we found the program to be “consistent with and contributing towards the mission of HUD.” However, this is not a conclusion of our report. Our report cites the statements of HUD officials that the program supports and is consistent with the Department’s mission.
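The arithmetic at issue between HUD's approach and the approach we describe can be illustrated with a short sketch. This is a hypothetical example; the function names, loan amount, and default probability are illustrative, not drawn from FHA's portfolio (the 70-percent loss rate is the average rate cited above).

```python
def probability_weighted_reserve(unpaid_principal, p_default, loss_rate):
    # HUD's described approach: full balance x probability of default
    # x loss rate, which effectively reduces the balance reserved against.
    return unpaid_principal * p_default * loss_rate

def full_balance_reserve(unpaid_principal, p_default, loss_rate):
    # Approach described in the report as proper under GAAP: once default
    # is judged more likely than not (p >= 0.5), reserve against
    # 100 percent of the unpaid principal balance.
    if p_default >= 0.5:
        return unpaid_principal * loss_rate
    return unpaid_principal * p_default * loss_rate

# Hypothetical loan: $100 million unpaid principal, 60-percent default
# probability, 70-percent loss rate.
print(probability_weighted_reserve(100e6, 0.60, 0.70))  # -> 42000000.0
print(full_balance_reserve(100e6, 0.60, 0.70))          # -> 70000000.0
```

For this hypothetical loan, weighting by the default probability reserves $28 million less than reserving against the full balance, which is the understatement effect the report describes.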
We concluded that HUD’s mission is broad enough to encompass the purpose of the hospital program, not that the program contributes to the mission of HUD. (See p. 18.) HUD also commented that it agreed with our concern that the proposed federal Medicare and Medicaid cuts could have a “significant adverse impact on the hospital industry, including some hospitals with mortgages insured by FHA.” Our report does not make a value judgment about the effect of the proposed federal Medicare and Medicaid reductions on the hospital industry or hospitals in the program. Instead, we report that future changes in federal health care policies can restrict hospital revenues and increase risks to the program. (See p. 12.) While HUD commented that our report noted “many urban community and teaching hospitals need credit enhancement but cannot meet all of the standards of the private insurers,” we did not differentiate among the types of hospitals that need credit enhancement. HUD offered reasons other than the state’s reimbursement system for the program’s concentration in New York. Despite these explanations and recent actions to address these risks, the program’s concentration and the large individual unpaid loan balances in New York continue to pose program risks. Specifically, the concentration of the portfolio in New York makes the program susceptible to New York policies and other factors specific to the state. (See p. 10.) HUD also noted actions that it is initiating to geographically and economically diversify its portfolio. According to HUD’s comments, these actions include increasing program awareness and developing new products to meet market demands.
Although we recommended that HUD pursue risk-sharing arrangements and suggested that the Congress consider exploring with HUD options for reducing program risks (for example, by limiting the program’s risk exposure in a particular state and capping mortgage insurance amounts), we do not endorse expanding FHA’s Hospital Mortgage Insurance Program. Expanding the program would increase the program’s total outstanding mortgage amount. In fact, because the overall impact of health care trends and policy changes is unclear, we stated that understanding their overall impact on the future of the program would require further analysis, given the program’s original purpose and the current composition of the portfolio. (See p. 13.) We are sending copies of this report to appropriate congressional committees; the Secretary of HUD; the Secretary of HHS; the Director, Office of Management and Budget; and other interested parties. We also will make copies available to others on request. Please contact me at (202) 512-7119 if you or your staff have any questions. Other major contributors are listed in appendix VI.
[Table omitted: net cash flow from operations for the year, by project; total for 100 projects: $370,110 ($199,730) ($13,207).]

[Table: Hospital Mortgage Insurance Program administration functions]
- Provide applicant guidance and assistance (including preapplication conference)
- Conduct initial site visit to hospital
- Review and approve construction plans, specifications, and contracts
- Recommend to HUD approval or disapproval of hospital’s application
- Make final underwriting determinations, conduct any needed legal reviews, issue firm commitment, close and initially endorse loan
- Conduct preconstruction conference, monitor construction work, and process requests for advances of mortgage proceeds
- Review cost certification, inform lender of maximum insurable mortgage amount, and process final advance
- Arrange final closing and finally endorse mortgage
- Monitor hospital’s financial performance by reviewing financial statements and conducting periodic site visits
- Receive, review, and recommend to HUD approval or disapproval of special requests and loan modifications (for example, partial release of security, transfer of physical assets, bond refundings, or major capital projects)
- Approve special requests and loan modifications
- Conduct site visits to troubled hospitals to determine actions needed to prevent or cure defaults
- Review quality and condition of insured hospital loan portfolio and determine amount of loan loss reserve
- Receive/process assignment of loan and pay insurance claim
- Review assigned hospital’s operational performance and financial condition and conduct site visits as needed
- Receive, review, and recommend to HUD approval or disapproval of proposed workout agreements or mortgage modifications
- Bill for and collect mortgage payments
- Analyze hospital’s situation, evaluate alternative uses, secure appraisal, make decision to foreclose, and arrange and hold foreclosure sale

The specific objectives of our review were to (1) identify factors, including those related to health care market trends, that could affect the stability of the program’s portfolio and provide information on the program’s financial performance; (2) evaluate the methodology FHA used to estimate the program’s fiscal year 1994 loan loss reserve; (3) evaluate the relationship between the purpose of the hospital mortgage insurance program and HUD’s mission; and (4) determine whether FHA has the expertise to manage the program. To identify factors that could affect the stability of the program’s portfolio, we (1) researched the literature and used HUD’s 1992 internal report on the hospital mortgage insurance program; (2) interviewed program officials in FHA and HHS headquarters and field offices; (3) interviewed senior financial officers from seven hospitals in New Jersey, New York, Puerto Rico, and Texas; (4) interviewed representatives from the Health Care Financing Study Group, New Jersey Health Care Facilities Financing Authority, New York State Medical Care Facilities Finance Agency, Goldman, Sachs & Co., Merrill Lynch and Co., AMBAC Indemnity Corp., Municipal Bond Investors Assurance Insurance Corp., Greater New York Hospital Association, Healthcare Association of New York State, State of New York Department of Health, the law firm of Krooth & Altman, and other state health and hospital organizations that are knowledgeable about or involved with the program; and (5) convened a panel of investment bankers and hospital financial officers.
We used the Health Care Financing Administration’s Health Care Provider Cost Report Information System, the New York State Department of Social Services Medicaid Provider Ranking List, and the New York State Department of Health’s estimation of Medicaid cost containment to demonstrate the effect of New York’s fiscal year 1996 Medicaid spending reductions on program hospitals. We calculated 1994 operating margins for 48 of 57 New York program hospitals. Nine hospitals did not have 1994 cost report information available or did not have the state’s estimation of Medicaid cost containment. We reduced calendar year 1994 net patient revenues by the New York State Department of Health’s estimation of Medicaid cost containment. Two assumptions of our analysis were that (1) the effects of the proposed changes on net patient revenue would be the same in each year and (2) the hospitals took no action to reduce expenses. To evaluate the methodology FHA used to estimate its 1994 hospital loan loss reserve, we reviewed the description of the hospital loan loss analysis and other related documents. We evaluated the methodology and discussed the statistical estimation model and assumptions FHA used with FHA and HHS officials. Also, we interviewed investment bankers and bond insurers to determine conventional approaches private industry uses in estimating loss reserves. As agreed with Committee staff, we did not assess the accuracy of the estimated amount of the program’s loan loss reserve. To evaluate the relationship between the purpose of the hospital program and HUD’s mission, we reviewed and analyzed the applicable laws, regulations, and policy statements related to the Department’s and FHA’s missions. We reviewed the legislative history to determine the purpose of the program. We also interviewed FHA officials to discuss how the program’s purpose supports HUD’s mission.
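The operating-margin adjustment used in the New York analysis above can be sketched as follows. The margin formula and the hospital dollar figures are illustrative assumptions; the report does not spell out the exact operating-margin definition used.

```python
def adjusted_operating_margin(net_patient_revenue, operating_expenses,
                              medicaid_reduction):
    # Reduce 1994 net patient revenue by the state's estimated Medicaid
    # cost containment, holding expenses constant (one of the analysis's
    # stated assumptions), then recompute the operating margin.
    revenue = net_patient_revenue - medicaid_reduction
    return (revenue - operating_expenses) / revenue

# Hypothetical hospital (dollar figures illustrative only):
before = adjusted_operating_margin(200e6, 195e6, 0)    # 2.5% margin
after = adjusted_operating_margin(200e6, 195e6, 8e6)   # about -1.6%
print(f"{before:.1%} -> {after:.1%}")  # prints "2.5% -> -1.6%"
```

Even a modest revenue reduction can flip a thin positive margin negative when expenses are held constant, which is why the cost-containment estimates matter for program hospitals.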
To determine whether FHA has the expertise to manage the program, we interviewed agency officials and representatives from hospitals and state health and hospital organizations, as previously mentioned. Our review of FHA’s expertise to manage the program did not involve an evaluation of risks to the program resulting from program management or organization. Our 1990 report and internal HUD studies have previously addressed organizational issues. The approach to accomplishing the objectives of this review was discussed with and agreed to by staff from both the Senate and House Banking Committees. In addition to those named above, the following individuals also made important contributions to this report as advisors and technical assistants: Linda Calbom, Robert C. DeRoy, Austin J. Kelly, Ann McDermott, Luann M. Moy, David Patrick Redmon, Mary W. Reich, Daynah K. Shah, and William J. Carter-Woodbridge.
Pursuant to a legislative requirement, GAO reviewed the Federal Housing Administration's (FHA) Hospital Mortgage Insurance Program, focusing on: (1) factors that could affect the stability of the program's portfolio and financial performance; (2) FHA 1994 loan loss reserve estimate; (3) how the program relates to the Department of Housing and Urban Development's (HUD) mission; and (4) whether FHA has the expertise to manage the program. GAO found that: (1) although the Hospital Mortgage Insurance Program has had a net positive cash flow since its inception, the program faces financial risks that could affect its future stability; (2) New York hospitals account for 87 percent of the program's $5 billion in unpaid principal and have 9 of the 10 largest unpaid principal balances; (3) New York hospitals unduly rely on the FHA mortgage insurance program because the state's restrictive reimbursement system hinders their ability to attract private-sector capital; (4) state actions and future health care policy changes and trends could further threaten hospital solvency; (5) FHA loan loss reserve estimates are unreliable because FHA used questionable assumptions about default probabilities and loss rates and did not consider health care market trends; (6) the extent to which the mortgage program contributes to the HUD mission is unclear because HUD does not routinely measure program outcomes; (7) FHA staff do not have sufficient health care expertise to manage key program functions and must rely on Health and Human Services staff experience to monitor hospitals' financial performance; and (8) hospital officials and program users are concerned about the length of the program's application and loan modification processes.
TVA is an independent, wholly owned federal corporation established by the TVA Act of 1933 (TVA Act), as amended. The act established TVA to improve the quality of life in the Tennessee River Valley by improving navigation, promoting regional agricultural and economic development, and controlling the floodwaters of the Tennessee River. To those ends, TVA built dams and hydropower facilities on the Tennessee River and its tributaries. To meet the subsequent need for more electric power, TVA expanded beyond hydropower to other types of power generation such as natural gas, coal, and nuclear plants. As of September 30, 2005, TVA sold electricity at wholesale rates to 158 retail distributors that resell electricity to consumers, and sold electricity directly to 61 large retail customers. As illustrated in figure 1, TVA’s service territory includes most of Tennessee and parts of Alabama, Georgia, Kentucky, Mississippi, North Carolina, and Virginia. The area covers 80,000 square miles with a population of more than 8.6 million. From its inception in 1933 through fiscal year 1959, TVA received appropriations to finance its internal cash and capital requirements. In 1959, however, the Congress amended the TVA Act to provide TVA the means to self-finance its power program and required it to repay a substantial portion of appropriations it had received to pay for its capital projects. At the same time, the Congress required that TVA’s power programs be self-financing through revenues from electricity sales. For its capital needs in excess of funds generated from operations, TVA was authorized to borrow by issuing bonds and notes. TVA’s authority to issue bonds and notes is set by the Congress and cannot exceed $30 billion outstanding at any given time. Until recently, TVA had been administered by a three-member board of directors appointed by the President of the United States and confirmed by the U.S. Senate. 
An Executive Committee worked with the board to determine TVA’s strategic mission and future direction, provide management oversight, and ensure policies of the board were carried out. The Consolidated Appropriations Act, 2005, which was signed into law in December 2004, changed the structure of TVA’s management. The act contained provisions that restructured the board from three full-time members to nine part-time members, established the position of Chief Executive Officer (CEO) to be appointed by the board, required TVA to begin filing financial reports with the Securities and Exchange Commission (SEC), and required TVA’s new board to create an Audit Committee to be composed solely of board members independent of management. The audit committee will be responsible for reviewing inspector general and external audit reports and making recommendations to the board. The legislation specifies that seven of the nine board members must be legal residents of TVA’s service area and that the members will be appointed by the President and confirmed by the Senate. After a transition period, members will serve 5-year rather than the current 9-year terms. In general, the board will establish TVA’s strategic direction and policies while the CEO will oversee their implementation as well as TVA’s overall operations. The new board became effective on March 31, 2006, when six new board members took the oath of office and joined two existing members to hold the first board meeting under the new governance structure. Along with annual reporting to the SEC, in fiscal year 2006 TVA will also be required to comply with certain provisions of the Sarbanes-Oxley Act of 2002, including the requirement that its officers certify annual and quarterly financial reports and report on the effectiveness of internal controls over financial reporting. 
TVA’s external auditor, in addition to auditing and issuing an opinion on TVA’s financial statements, will be required to issue an opinion on the effectiveness of TVA’s internal controls over financial reporting. Based on the current guidance from the SEC, TVA will file the first report on internal controls with its September 30, 2007, financial statements. Under the TVA Act, as amended, TVA has not been subject to most of the regulatory oversight requirements that commercial utilities must satisfy. Legislation has also limited competition between TVA and other utilities. When the TVA Act was amended in 1959, it prohibited TVA, with some exceptions, from entering into contracts to sell power outside the service area that it and its distributors were serving on July 1, 1957. This is commonly referred to as the “fence” because it limits TVA’s ability to expand outside its July 1, 1957, service area. In addition, the Energy Policy Act of 1992 (EPAct) exempted TVA from being required to allow other utilities to use its transmission lines to send power to customers within its service area, effectively reducing the opportunities for TVA’s wholesale customers to choose other suppliers. This exemption is often referred to as the “anti-cherrypicking” provision. TVA is still subject to some forms of indirect competition common to all utilities. For example, the cost of power would affect decisions by TVA’s customers to move or expand outside TVA’s service area or by businesses to move into its service area. In addition, customers can decide to generate their own power for on-site use. However, as long as the legislative framework continues to insulate TVA from direct competition for its wholesale customers, it will remain in a position similar to that of a regulated utility monopoly. 
For more than 20 years, the federal government has been taking a variety of steps to restructure the electricity industry with the goal of increasing competition in wholesale markets and thereby increasing benefits to consumers, including lower electricity prices and a wider variety of retail services. Electricity restructuring is evolving against a backdrop of constraints and challenges, including shared responsibility for implementing and enforcing local, state, and federal laws affecting the electricity industry and an expected substantial increase in electricity demand by 2025, which will require significant investment in new power plants and transmission lines. Prior to this restructuring, electricity was generally provided by electric utilities that exclusively served all customers within a specific geographic region. Under these conditions, the federal government, through the Federal Energy Regulatory Commission (FERC) and its predecessors, regulated wholesale electricity sales (sales for resale) and interstate transmission by electric utilities and set prices at cost-based rates. Because the utilities were monopolies, states regulated retail markets, approving utility company investments and rates paid by customers. In 1978, the federal government laid the groundwork for restructuring and competition in the electricity industry with the Public Utility Regulatory Policies Act, which opened wholesale power markets to electricity producers that were not regulated utility monopolies. In the 1990s the federal government greatly expanded these efforts. First, the EPAct provided for broader participation in wholesale electricity markets by nonutilities and allowed these entities to produce and sell electricity at market prices. Second, in 1996 FERC issued Orders 888 and 889, which greatly expanded opportunities for competition by requiring utilities to provide access to their transmission lines to all users under the same prices, terms, and conditions.
This change allowed the new nonutilities to compete with utilities and others for the opportunity to sell electricity in wholesale markets on more equal terms. By 2002 a number of states had made efforts to introduce competition to the retail markets that they oversee, allowing nonutilities to compete with utilities and others for the opportunity to sell electricity directly to consumers. Beginning in 2000, some restructured wholesale and retail electricity markets encountered a number of problems. From the summer of 2000 through early 2001, California saw a sharp increase in wholesale electricity prices, electricity shortages leading to rolling blackouts, and the deteriorating financial stability of its three major investor-owned utilities. These problems, along with the largest blackout in U.S. history along the East Coast in 2003, drew attention to the need to examine the operation and direction of the industry. Efforts to expand restructuring slowed down as many states analyzed the factors that contributed to these problems, among them failure to meet increasing demand for electricity with new generation and transmission capacity. TVA management and many industry experts, however, expect that TVA will eventually be drawn into the restructuring of the electric utility industry and will eventually lose its legislative protections from competition. There have already been some indications of such changes. For instance, S.1499, introduced in July 2005, would remove any area within Kentucky from coverage by the “anti-cherrypicking” provision in the EPAct. If the bill becomes law, TVA would be required to transmit power from another supplier over its transmission lines for use inside the Kentucky portion of its service area without being able to similarly expand its service area. The bill was referred to the Senate Energy and Natural Resources Committee, where it remained as of August 15, 2006. 
Our prior reports have indicated that TVA’s high debt and related interest expense could place it at a disadvantage in continuing to offer competitively priced power if it were to lose its legislative protections from competition. TVA’s management has also recognized the need to reduce its debt and other financing obligations to increase its flexibility to meet competitive challenges. In July 1997, TVA issued a 10-year business plan with steps necessary to improve its financial position for an era of increasing competition. Two key strategic objectives of the plan were (1) to reduce the cost of power by reducing debt and the corresponding financing costs and (2) to increase financial flexibility by reducing fixed costs. To help meet these objectives, the plan called for TVA to reduce its debt by half over a 10-year period to about $13.2 billion by increasing its electricity rates beginning in 1998, reducing certain expenses, and limiting capital expenditures. TVA did not meet the 1997 debt reduction goal because it used cash intended for debt reduction to cover greater than estimated annual operating costs and capital expenditures. In fiscal year 2000, TVA began entering into alternative financing in the form of lease-leaseback arrangements to obtain a lower cost of capital than it could by selling bonds. TVA entered into these arrangements in fiscal years 2000, 2002, and 2003 to refinance 24 existing power generators that were designed for use during periods of peak power demand. TVA financed and built the generating units and leased them to investors in exchange for cash. It then leased the generators back and is making payments to investors. TVA also implemented other alternative financing arrangements that allowed its customers to prepay for power in exchange for discounted rates. For example, in November 2003, TVA entered into an energy prepayment agreement with its largest customer, Memphis Light, Gas, and Water Division (MLGW).
Under this agreement, MLGW prepaid TVA $1.5 billion for electricity to be delivered over a 15-year period. TVA also offered a discounted energy units (DEU) program in fiscal years 2003 and 2004, under which TVA customers could purchase power, usually in $1 million increments, in return for a discount on a specified quantity of power over a certain period of years. TVA did not offer the DEU program in 2005. During our review, TVA’s management told us they have no current plans to enter into additional alternative financing arrangements. Generally accepted accounting principles require that lease-leaseback and other alternative financing arrangements be classified as liabilities. In 2003 we reported that the lease-leaseback arrangements, while not considered debt for purposes of financial reporting, had the same effect on TVA’s financial condition as traditional debt financing. The Office of Management and Budget (OMB) treats the cash proceeds TVA receives from private parties at the inception of lease-leaseback arrangements as borrowing. Accordingly, in the President’s Budget for fiscal year 2004, OMB began classifying TVA’s lease-leaseback arrangements as debt. Table 1 shows that although TVA reduced its outstanding statutory debt by about $4.3 billion from fiscal years 1997 through 2005, its use of alternative financing arrangements rose, adding nearly $2.5 billion to its total financing obligations as of September 30, 2005, resulting in a net reduction of about $1.8 billion. In fiscal year 2004, burdened with total financing obligations of almost $26 billion, TVA’s board adopted a new strategic plan for reducing debt that called for increasing revenue, controlling costs, and reducing the growth of capital expenditures. TVA also began measuring its debt reduction more realistically and transparently in terms of TFOs, which, as shown in table 1, comprise its statutory debt as well as its liabilities under alternative financing arrangements.
Since issuing its strategic plan in 2004, TVA has raised its power rates twice: a 7.52 percent increase in firm wholesale electric rates effective October 1, 2005, and a 9.95 percent increase effective April 1, 2006. On July 28, 2006, TVA’s board approved a 4.5 percent decrease in firm wholesale power rates in conjunction with a fuel-cost adjustment clause. Utilities surrounding the Tennessee Valley also increased rates in 2005, and 12 of the 14 surrounding utilities have fuel-cost adjustment clauses that allow them to pass increases in the price of fuel to customers automatically. TVA is working with distributors and the Tennessee Valley Public Power Association (TVPPA) to develop future wholesale pricing options and new long-term contract options. To determine how TVA plans to meet the debt reduction goal identified in its 2004 strategic plan, we (1) interviewed TVA officials, (2) reviewed documentation and analyses supporting TVA’s debt reduction plan, including its 2004 strategic plan and budget submissions for fiscal years 2006 and 2007, and (3) reviewed TVA’s fiscal years 2004 and 2005 annual reports, information statements, and audited financial statements. To assess the reasonableness of TVA’s approach in developing its debt reduction plan, we interviewed TVA officials responsible for developing the 2004 Strategic Plan and performing analyses with the Competitive Risk Model and the Enterprise Risk Model. To assess these models, we obtained documentation describing the structure of the models and the sources of variables used in the models, and discussed this information with relevant TVA staff. We examined the structure of the models in order to ascertain whether the relationships between the variables in the models were logical and included the most important sources of costs and revenues, and considered the extent to which the data are independent, widely used, and relevant.
To identify the key factors that could impact TVA’s ability to successfully carry out its debt reduction plan, including the impact that growth in demand for power in the Tennessee Valley may have on its ability to meet the plan, we (1) interviewed officials from TVA, TVA’s Office of Inspector General, the Tennessee Valley Public Power Association, and the Knoxville Utilities Board; (2) reviewed prior GAO reports on issues confronting TVA; (3) reviewed TVA’s fiscal years 2004 and 2005 annual reports, information statements, and audited financial statements to determine the types of revenue and costs TVA had reported; and (4) interviewed an official from the Congressional Budget Office (CBO) with expertise in issues pertaining to TVA. During the course of our work, we contacted the following organizations:
- Congressional Budget Office
- Tennessee Valley Authority
- Tennessee Valley Authority, Office of Inspector General
- Tennessee Valley Public Power Association, Chattanooga, Tennessee
- Knoxville Utilities Board, Knoxville, Tennessee
We provided a draft of this report to officials at TVA for their review and incorporated their comments where appropriate. We conducted our work from June 2005 through August 2006 in accordance with generally accepted government auditing standards. TVA set a goal of reducing statutory debt by $3 to $5 billion in its 2004 strategic plan.
Subsequently, TVA expanded the scope of its debt reduction efforts to include debt-like transactions such as lease-leasebacks and energy prepayment arrangements, referred to in this report as alternative financing. TVA calls this larger group of obligations total financing obligations, or TFOs. In its 2007 budget, TVA increased its TFO reduction goal to $7.1 billion. This includes reducing statutory debt by $6.7 billion and alternative financing obligations by $0.4 billion. TVA plans to meet this goal by increasing revenue, controlling the growth of its operating expenses, and limiting capital expenditures. TVA projects it will gain additional revenue through its October 2005 rate increase, a fuel-cost adjustment clause to adjust rates up or down automatically when fuel prices change, and increased sales from growth in the demand for electricity. TVA's plan also calls for controlling the growth of operating costs and limiting spending on capital expenditures to $12.1 billion through fiscal year 2015.

TVA's management told us that they are committed to reducing TFOs and that achieving the $7.1 billion TFO reduction goal would give TVA an estimated interest coverage ratio of 3.1 by fiscal year 2015, up from 2.0 as of fiscal year 2005. The interest coverage ratio, which TVA uses to gauge its financial health, is a quick measure of a company's ability to pay the interest on its debt. TVA officials said the 3.1 ratio would allow TVA to be a financially flexible enterprise and continue to offer competitive electricity rates.

Table 2 shows TVA's annual and cumulative targets for reducing total financing obligations for fiscal years 2004 through 2015. TVA exceeded its targets for reducing TFOs for the first 2 years of the plan. In fiscal year 2004, TVA reduced its TFOs by $278 million, or 24 percent more than its target of $225 million. In fiscal year 2005, TVA reduced its TFOs by $301 million, or 34 percent more than its target of $225 million.
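The interest coverage ratio TVA tracks is a standard measure: earnings available to pay interest divided by interest expense. A minimal sketch in Python, using hypothetical round figures (the report does not give TVA's actual inputs):

```python
def interest_coverage_ratio(earnings_before_interest: float, interest_expense: float) -> float:
    """How many times over available earnings cover the annual interest bill.
    A higher ratio indicates greater financial flexibility."""
    return earnings_before_interest / interest_expense

# Hypothetical illustration only, not TVA's actual financials:
# $2.6 billion available against a $1.3 billion interest bill gives a
# ratio of 2.0, the level TVA reported for fiscal year 2005.
print(interest_coverage_ratio(2.6e9, 1.3e9))  # 2.0
```

On these assumed figures the ratio is 2.0; reaching TVA's target of 3.1 by 2015 would require earnings to grow, interest expense to fall, or both.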
The projections supporting TVA’s current TFO reduction goal show that the annual increases in operating revenue over the fiscal year 2004 level for fiscal years 2005 through 2015 will total $16.7 billion. TVA plans to use the additional revenue to cover projected increases in operating costs and capital expenditures, and to reduce TFOs. About $9.6 billion of this additional revenue will come primarily from increased sales from growth in demand. TVA also projects that about $5.7 billion will come from the October 1, 2005, rate increase. From fiscal years 2007 through 2015, TVA expects about $1.4 billion to come from the fuel-cost adjustment (FCA) clause that will be added to customer contracts in fiscal year 2007. The FCA will automatically increase or decrease rates to cover changes in the cost of fuel and purchased power. TVA plans to use the budgeted fuel and purchased power estimates for fiscal year 2006 as the baseline for fuel and purchased power prices it pays. In subsequent years, it will compare those prices to the baseline and automatically adjust rates upward or downward for changes in these expenses. Although the FCA will not generate additional cash that can be applied to TFO reduction, it will prevent increases in the cost of fuel and purchased power from eroding cash balances that TVA planned to apply toward TFO reduction. The revenue projections supporting the current TFO reduction goal do not include several factors, such as the 9.95 percent rate increase that took effect on April 1, 2006, the 4.5 percent decrease approved on July 28, or any future rate increases. The April 1, 2006, increase took effect after TVA approved its 2007 budget and was undertaken to cover projected increases in the cost of fuel and purchased power. The rate decrease was approved in conjunction with the FCA. 
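The FCA described above is, at bottom, a pass-through of the difference between actual and baseline fuel costs. A sketch of the idea with invented figures (TVA's actual formula is not described in this report):

```python
def fca_rate_adjustment(baseline_cost: float, actual_cost: float, sales_mwh: float) -> float:
    """Per-MWh rate adjustment passing changes in fuel and purchased-power
    costs, relative to the fiscal year 2006 baseline, through to customers.
    Positive when costs rise, negative when they fall."""
    return (actual_cost - baseline_cost) / sales_mwh

# Invented figures: costs run $150 million over baseline, spread across
# 150 million MWh of sales, so rates rise by $1 per MWh.
print(fca_rate_adjustment(2.00e9, 2.15e9, 150e6))   # 1.0  (costs up, rates up)
print(fca_rate_adjustment(2.00e9, 1.85e9, 150e6))   # -1.0 (costs down, rates down)
```

As the report notes, such a clause does not generate extra cash for TFO reduction; it keeps fuel-price swings from eroding the cash TVA has already earmarked for it.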
Future rate increases (excluding the FCA) were not included in the revenue projections because TVA plans to use them as necessary to cover increases in operating costs (excluding fuel and purchased power) that exceed the estimates used in formulating the current TFO reduction goal. The revenue projections also assume that an environmental surcharge that was added to rates on October 1, 2003, to fund anticipated clean air compliance costs for the next 10 years will be discontinued at fiscal year end 2013, as originally planned.

TVA's TFO reduction plan includes an emphasis on controlling the growth of operating costs. Management plans to constrain TVA's baseline operating and maintenance (O&M) costs, excluding fuel and purchased power, by limiting the growth of these expenses to one-half of a percentage point below inflation, as measured by the consumer price index (CPI). TVA estimates that this will make about $1.1 billion in cash available from fiscal year 2007 through fiscal year 2015. TVA plans to hold O&M expenses down by implementing better discretionary spending discipline through top-down budgeting guidance and performance measures, and then maintaining the efficiency gains throughout the planning period. The plan includes establishing overall financial targets and allocating them to TVA's individual business units.

TVA officials also project that bringing Browns Ferry Nuclear Unit 1 (BFN 1) on line will help control the growth of operating costs. A 2002 analysis prepared by TVA shows that the completion of BFN 1 will allow TVA to reduce its fuel, purchased power, and other operating costs. Because completion of BFN 1 is embedded in TVA's current forecasts, TVA could not provide current projections of the incremental savings from completing and bringing BFN 1 on line. The 2002 analysis projected that TVA's cash flow would improve when BFN 1 is brought on line in May 2007, and TVA would recover all of its costs from the project, including interest expense, by 2015.
This analysis, however, could not consider subsequent changes, such as the significant increases in power supply costs that have occurred since 2002, which will increase TVA’s projected savings from bringing BFN 1 on line. TVA also projects that its interest expense will be reduced over time as it lowers the balance of its outstanding debt. TVA’s TFO reduction plan includes $12.1 billion from fiscal year 2006 through fiscal year 2015 for capital expenditures to complete BFN 1, meet known requirements of the Clean Air Act, and cover ongoing efforts to uprate its generating assets and maintain transmission assets. Any changes in this amount would affect the cash available for TFO reduction. Table 3 shows TVA’s planned capital expenditures by major category from fiscal year 2006 through fiscal year 2015. To help meet its capital expenditure goals, TVA will consider deferring or canceling capital projects when necessary and adjusting its investment criteria to reflect changes in its customer contracts and commitments. TVA’s plan includes estimated capital expenditures for its current environmental program to reduce sulfur dioxide, nitrogen oxide, and particulates, which are expected to reach a cumulative total of about $5.7 billion by 2010. TVA had already spent about $4.4 billion, or 77 percent of this amount, by September 30, 2005. TVA’s plan, however, does not factor in costs for additional reductions in airborne pollutants that it may be required to meet in the future, or the potential cost to comply with proposed legislation that would require reductions in carbon dioxide. Projections for meeting TVA’s TFO reduction goal do not include capital expenditures for building any major new generating assets through 2015, other than completing BFN 1. Overall, we found TVA’s approach to developing its TFO reduction goal was reasonable. 
TVA used a strategic planning process to develop its current goal, which focused on its core mission as a long-term provider of low-cost electricity. As part of this process, TVA looked not only at its financing obligations, but at external business and market risks. To assess these outside risks, TVA performed detailed competitive analyses and modeled different market scenarios to estimate its future competitive environment. It considered the results of these market risk analyses in formulating its strategic plan and determining the initial range of possible debt reduction through 2015. As part of its annual internal budget process, TVA used an accounting model to project annual cash flows and refine its goal. TVA continues to project cash flows annually and to analyze changing market conditions as necessary using the accounting model. TVA assessed its competitive environment and performed detailed analyses of business and market risks to determine the effect of possible future conditions on its ability to reduce debt. Among the tools used in TVA’s strategic planning process was a competitive risk model (CRM). The CRM is a scenario model that shows the range of financial outcomes TVA might face if electricity industry restructuring moved forward and its distributors were free to choose alternative suppliers. Scenario analysis develops a set of potential events and conditions that management may wish to consider, and calculates the likely impact on cash flow and debt reduction in each. TVA’s CRM shows the probability of loss of load, or customer demand for energy, over many market scenarios. The model calculated the potential impact of each market scenario on TVA assuming that distributors could choose other suppliers and modeled the potential for loss of load using three pricing scenarios: holding prices flat at current levels, setting prices equal to TVA’s projected costs, and setting prices equal to the projected average competitor price. 
The results were then used to produce probabilities of different potential financial outcomes to identify the types of market conditions under which load loss was likely to occur. TVA included the following assumptions in the CRM: it would begin facing competitive pressures in 2008; its contracts would include provisions for distributors to satisfy some of their power needs from sources other than TVA, referred to as partial requirements; and it could sell power elsewhere. TVA conducted its competitive risk analysis in 2003. In a little less than one-third of the scenarios, the CRM showed that TVA could lose load if other utilities had both cheap natural gas and high reserve margins, or unused available capacity. Because natural gas prices have risen and movement toward electricity competition has slowed, TVA has not considered it necessary to run the model again.

TVA used the results of its competitive risk analysis as well as professional judgment in developing its 2004 strategic plan and the initial range of $3 billion to $5 billion for its statutory debt reduction goal. The plan looks at the larger picture of what TVA needs to do to succeed in a more competitive environment. It concluded that TVA needs to concentrate on four areas over the next few years:

- developing new, more differentiated pricing structures, services, and contract terms that more closely tie the cost and risk of TVA's products to their terms and pricing;
- addressing issues related to wholesale market design and transmission pricing, including how TVA will interface with surrounding markets to ensure reliable power and how it will charge for transmitting power inside its service area when distributors can choose other suppliers;
- accelerating debt reduction to increase financial flexibility; and
- maintaining and operating company assets to continue to meet electricity supply obligations safely and reliably.
TVA uses the Enterprise Risk Model (ERM) as part of its annual internal budgeting process to refine its TFO reduction targets by determining likely cash flow in given situations. The ERM is a simplified cash-based accounting model that can project key financial data by modeling TVA's system based on a power supply plan and a long-range financial plan. The ERM uses Monte Carlo simulation to draw from the probable range of each uncertain input, or variable, such as interest rates or coal prices, redispatch the TVA system, and recalculate cash flows multiple times, producing a range of probable values for each variable. The ERM's Monte Carlo simulations use 13 variables that include key costs and key determinants of revenue:

- electricity market peak ($/MWh)
- electricity market off-peak ($/MWh)
- natural gas prices ($/mmBtu)
- coal prices ($/mmBtu)
- long-term interest rates (%)
- short-term interest rates (%)
- total operating and maintenance expenses
- capital expenditures
- selling, general and administrative expenses
- benefits expense
- coal plant availability
- nuclear plant availability
- hydro generation

For example, a simulation might use key costs such as prices for coal and natural gas, and combine this information with key determinants of revenue, such as peak and off-peak electricity prices, and quantities sold at those prices. The output of the model is an estimate of the annual net cash flow for TVA. For each scenario estimated, the model shows net cash flows and financing obligations repayment over each of the next 20 years for the values assumed in that scenario. Assuming that this net cash flow is applied to reducing financing obligations, the model provides an estimate of the level of obligations at the end of the simulation, which TVA can then use to refine the projections behind its goals. The model uses a variety of reliable sources for estimates of the key input variables.
For instance, the variability of rainfall for hydropower is calculated using historical data. Interest rates are based on forecasts from Global Insight and the Wall Street Journal. The volatility of commodity prices for coal or natural gas is estimated with a combination of historical data and projected trends. Other sources, however, may also provide reasonable estimates for key variables in the model. For example, some of the commodities used in the model, such as natural gas, have active options markets, which could help identify more accurate estimates of the range of possible future prices in volatile markets. When Hurricane Katrina destroyed a large number of natural gas rigs in the Gulf of Mexico, for instance, there was an enormous increase in implied volatility for natural gas prices because no one knew how long it would take to repair the rigs or what the market consequences would be of a sudden withdrawal of a large percentage of the natural gas supply. In such a case, the options market may provide a more accurate estimate of price volatility than historical activity and might result in a more comprehensive characterization of the distribution of possible TFO reduction levels.

In designing the ERM and using its output to devise its current goal for reducing financing obligations, TVA made the following key business assumptions:

- Browns Ferry Nuclear Unit 1 will be completed on time;
- TVA will not self-fund any new baseload generation;
- distributors who have given notice they will not be renewing contracts do not renew;
- TVA will meet or exceed current environmental regulations;
- TVA's credit rating remains AAA; and
- distributors do not gain rights to partial requirements or transmission.

TVA has generally made reasonable assumptions concerning the level and variability of the key inputs to its Monte Carlo model. As with any modeling effort, there are some inherent limitations, and areas in which the modeling may be improved.
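To make the fixed-versus-variable point concrete, here is a toy Monte Carlo simulation in the spirit of the ERM. Every distribution and dollar figure below is invented for illustration; this is not TVA's model. The hypothetical `vary_bfn1` flag shows how converting one fixed assumption (an on-time BFN 1 restart) into a variable changes the distribution of projected TFO reduction:

```python
import random
import statistics

def simulate_tfo_reduction(n_trials: int = 10_000, vary_bfn1: bool = False,
                           seed: int = 1) -> list:
    """Draw uncertain inputs, compute annual net cash flow, and return the
    simulated distribution of cumulative TFO reduction (billions of dollars).
    All figures are invented for illustration."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        revenue = rng.gauss(8.5, 0.4)       # operating revenue, $B/yr
        fuel = rng.gauss(2.0, 0.3)          # fuel and purchased power, $B/yr
        other = rng.gauss(4.5, 0.2)         # O&M, SG&A, interest, capex, $B/yr
        # Fixed assumption: an on-time BFN 1 restart saves 0.2 $B/yr.
        # As a variable: a delayed restart erodes some of that saving.
        bfn1_saving = rng.uniform(0.0, 0.2) if vary_bfn1 else 0.2
        net_cash = revenue - fuel - other + bfn1_saving
        outcomes.append(net_cash * 9)       # nine remaining plan years
    return outcomes

fixed = simulate_tfo_reduction()
varied = simulate_tfo_reduction(vary_bfn1=True)
print(f"fixed assumption:  mean {statistics.mean(fixed):.1f} $B")
print(f"varied assumption: mean {statistics.mean(varied):.1f} $B, "
      f"range {min(varied):.1f} to {max(varied):.1f} $B")
```

On these invented inputs, treating the restart date as uncertain lowers the mean outcome; modeled ranges for environmental compliance costs or new capacity could be layered in the same way.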
TVA’s key business assumptions, while reasonable, limit the range of outcomes from the model by making certain events appear more fixed or settled than they are. Allowing the range of possible outcomes attached to some of the fixed assumptions to be modeled as variables may better reflect the uncertainty attached to TVA’s TFO reduction estimates. For example, TVA could determine a range of likely dates for the completion of Brown’s Ferry Nuclear Unit 1, and use these dates as part of the Monte Carlo simulation. Another example might be to use the range of possible costs from potential environmental legislation as inputs to the model. Modeling these and other fixed assumptions as variables might better illustrate the range of outcomes for TVA to evaluate in setting and refining its TFO reduction goals. We identified several key factors that could impact TVA’s ability to successfully carry out its plan. Some factors are more difficult for TVA to control than others. The timing of electricity industry restructuring, potential increases in interest rates, and costs associated with meeting potential new environmental regulations are factors outside TVA’s control. Future rate increases and a fuel-cost adjustment clause are factors that will help TVA cover unforeseen costs, which will help TVA meet its TFO reduction goal. TVA’s planned reduction in interest expense could be affected by increases in interest rates. Although the TFO reduction plan includes the capital expenditures TVA estimates it will need to comply with all existing environmental regulations, the plan does not include potential capital expenditures needed to comply with any changes to the current environmental regulations. Building new generating capacity could require capital expenditures not included in the plan. Restructuring is the major reason TVA has undertaken TFO reduction, and its timing and the organizational and structural changes it may impose are key variables in TVA’s plans. 
TVA’s management and industry experts believe TVA may eventually lose its legislative protections from competition and have to compete with other utilities. Even if TVA does not lose its legislative protections, its management has recognized the need to take action to better position the agency to be competitive in an era of increasing competition and customer choice. TVA management undertook both the 1997 business plan and the 2004 strategic plan to position TVA to meet the challenges it would likely face in the coming restructured marketplace. The extent to which TVA would be affected by loss of its legislative protections from competition would be influenced by (1) when TVA loses its protections, which would affect how much time it has to continue to improve its competitive position; (2) how TVA would be structured to operate in a competitive environment, including whether it would be given the ability to compete for customers outside its service area; and (3) how TVA’s financial condition compares to its competitors at the time it loses its protections from competition. Loss of its protections from competition could affect TVA’s ability to set rates at levels sufficient to recover all costs, which could negatively impact the amount of cash available to reduce TFOs. According to a TVA official, one option TVA could pursue to help meet its goal for reducing TFOs is to negotiate long-term contracts with its customers. Long-term contracts would help reduce TVA’s risk by providing a steady revenue stream for a certain period of time. If TVA’s distributors were to gain the rights to purchase a portion of their electric power requirements from other utilities, it could have a negative material effect on TVA’s ability to meet its TFO reduction goal. For example, excluding the Kentucky portion of TVA’s service area from the anti- cherrypicking provision of the EPAct is currently under consideration. 
In the event this legislation is enacted, TVA officials believe other distributors would seek similar treatment. Future rate increases and a fuel-cost adjustment clause allowing TVA to adjust rates for the rise and fall in the prices of fuel and purchased power that result from changes in market conditions will help TVA meet its TFO reduction goal. TVA’s TFO reduction goal reflects the October 2005 rate increase and the FCA that TVA plans to implement in fiscal year 2007. The plan does not reflect any additional rate increases through 2015. TVA estimates that the FCA will cover net increases in the cost of fuel and purchased power of $1.4 billion from fiscal year 2007 through 2015, which will free this amount of cash to apply toward TFO reduction. In addition, TVA’s management told us that they would consider additional rate increases if necessary to cover increases in operating costs other than fuel and purchased power. In determining whether to raise rates, TVA’s management recognizes that they would need to consider current markets and any potential negative consequences, such as the impact on power sales and the regional economy. The April 2006 rate increase and any future increases will help TVA cover any unforeseen increases in projected operating costs or capital expenditures, as well as shortfalls in projected revenue. TVA will be challenged to meet its goal of reducing projected O&M expenses by $1.1 billion from fiscal year 2007 through 2015. TVA has been focusing on reducing O&M expenses since it issued its 1997 business plan, and has already taken many steps to trim these expenses. TVA officials have said that the $1.1 billion savings will come from baseline O&M expenses, which TVA defines as the ongoing costs of operating and maintaining its internal business units that are routine and recurring. 
In fiscal year 2005, these expenses represented about $1.3 billion, or about 54 percent of the $2.4 billion reported for O&M expenses, and about 20 percent of TVA's total operating expenses. According to a TVA official, the growth limit for the baseline O&M expenses will be applied to the total for all business units, and any excess increases in these expenses by one unit will have to be absorbed by the other business units. For example, the amount budgeted for one of TVA's business units in fiscal year 2007 was $30.7 million over what it would have been if it had been limited to projected inflation less one-half of a percentage point, and according to a TVA official, this excess will have to be absorbed by the other business units in order for TVA to meet its overall growth limit.

TVA projects that it will continue to reduce annual interest expense as it reduces the balance of outstanding debt and, if the situation presents itself, refinances debt at lower interest rates. Like any borrower with debt approaching maturity, TVA is subject to interest rate risk. As TVA's outstanding debt matures, the portion that is not repaid will need to be refinanced at current rates, exposing TVA to the risk of rising interest rates and higher interest costs. TVA has reduced its annual interest expense from more than $2 billion in fiscal year 1997 to about $1.3 billion in fiscal year 2005, a 35 percent reduction. TVA was able to lower its interest expense by refinancing debt at lower interest rates, reducing the outstanding balance of debt, and entering into alternative financing arrangements. Alternative financing arrangements help reduce reported interest expense because they are classified as liabilities, rather than debt, in TVA's financial statements. As a result, the costs of these arrangements are recorded as increases in operating expenses or reductions in revenue rather than as interest on debt.
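The interest-rate exposure on maturing debt reduces to simple arithmetic. A sketch using the report's figures of roughly $8.3 billion of TVA debt maturing over the next 5 years (the function name and framing are ours):

```python
def interest_expense_change(maturing_debt: float, rate_change_points: float) -> float:
    """Change in annual interest expense if maturing debt is refinanced at
    average borrowing costs that differ by rate_change_points percentage
    points from today's rates."""
    return maturing_debt * rate_change_points / 100

# Each 1-percentage-point move in average borrowing costs on $8.3 billion
# of maturing debt is worth about $83 million a year, in either direction.
print(interest_expense_change(8.3e9, 1.0))   # 83000000.0
print(interest_expense_change(8.3e9, -0.5))  # -41500000.0
```

The symmetry is the point: lower refinancing rates would ease TVA's interest burden just as higher rates would raise it.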
TVA attributes approximately 80 percent of the reduction of interest expense from fiscal year 1997 to 2005 to refinancing debt at lower interest rates. As of September 30, 2005, TVA had about $8.3 billion in outstanding debt that will mature and either need to be repaid or refinanced over the next 5 years ($3.1 billion in long-term debt and about $5.2 billion in short-term debt). By the end of this 5-year period, for every 1 percentage point change in TVA’s average borrowing costs for the $8.3 billion, its annual interest expense would increase or decrease by about $83 million. If future interest rates are higher than the rates used in TVA’s projections, TVA may have difficulty meeting its targets for reducing interest expense. Although TVA’s TFO reduction plan includes all of the capital expenditures it projects will be needed to comply with existing environmental regulations, the plan does not include potential capital expenditures needed to comply with any changes to the current environmental regulations. According to TVA’s 2005 Information Statement, several existing regulatory programs are being made more stringent in their application to fossil-fuel units and additional regulatory programs affecting fossil-fuel units have been announced. According to TVA, its TFO reduction plan does not include the estimated future costs to comply with more stringent regulations because it is difficult to predict how these regulations would affect TVA. However, TVA officials estimate that the cost to comply with future regulations could run between $3.0 billion and $3.5 billion through 2020. TVA officials said they would include an estimate of these costs in the plan if their level of certainty ever increases. The plan also does not include the potential cost of complying with legislation that has been introduced, but not yet passed, in the Congress to require reductions in carbon dioxide. 
If this legislation is enacted, TVA estimates that the cost of complying with it could be substantial. The extent to which new environmental regulations affect any utility depends on several factors, including the type and condition of its generating equipment, the portion of its power generated by fossil fuels, the types of controls it chooses to meet the new environmental regulations, and the availability of excess generating capacity. Compared to surrounding regions, TVA has roughly the same amount of coal-fired capacity, nearly twice as much nuclear, nearly four times as much hydro, and less than half as much natural gas-fired capacity. Figure 2 shows TVA's generation mix compared to the surrounding North American Electric Reliability Council (NERC) regions. The extent to which different producers will be affected by new environmental regulations, and the resultant impact on their power prices, is unknown at this time. Although new environmental regulations would likely present challenges to TVA in meeting its TFO reduction goal, they may not necessarily affect its competitive position relative to its neighboring utilities.

Building new generating capacity during the current TFO reduction period to meet the projected demand for power beginning in 2015 would likely cause TVA to incur new debt and use cash that is currently projected to be available to reduce TFOs. TVA officials told us they plan to meet load growth in the TVA service area through 2015 by completing BFN 1, increasing the capacity of existing generating units, and purchasing power from the marketplace. TVA's current projections include the capital expenditures it projects will be needed to meet this plan. TVA also projects that it will need additional generating capacity beginning in 2015. TVA plans to satisfy this need by partnering with other power providers. Its current goal assumes that it will not finance any new baseload plants, other than BFN 1, through 2015.
If growth in demand or market changes force TVA to build new generation, as happened after its 1997 plan, TVA's ability to reduce TFOs could be affected. TVA officials told us they recognize that in order to improve TVA's financial situation, it will need to operate within its means and reduce TFOs. Sustained management commitment will be needed to continue reducing financing obligations. According to officials, TVA did not meet the debt reduction goal in the 1997 business plan because the amount of cash left over after meeting its other business needs was not sufficient to meet the goal. Since issuing its 2004 strategic plan, TVA's management has demonstrated its commitment by exceeding the planned targets for the first 2 years of the TFO reduction plan. In addition, management's actions have included adding annual TFO reduction targets as revenue requirements in the budgets used for its annual rate reviews, tying portions of its overall incentive payroll compensation to accomplishing the TFO reduction goal, and demonstrating a willingness to raise rates to meet the goal. Although TVA has a new board structure as of March 31, 2006, the continued commitment of the board toward TFO reduction will be needed to meet the current goal.

The growing demand for power could affect TVA's ability to meet its goal since TVA's current projections assume that it will not invest in any new generation through 2015, other than restarting BFN 1. TVA's plan includes the capital expenditures needed to expand generating capacity in existing generating facilities to meet projected increases in demand for power through 2015. By 2015, however, TVA estimates that it will need more baseload generation to meet growth in demand. As a result, it will need to take action to meet that need during the current TFO reduction period. TVA officials are considering a number of options to meet this projected increase in demand for power, including partnering with outside parties.
TVA’s current plan assumes that one option for meeting the growth in demand for electricity is by uprating, which is the process of increasing the capacity of existing generating assets. To its 30,644 megawatts of generating capacity, TVA currently plans to add: 1,280 total megawatts of capacity a year by restarting Browns Ferry Unit 1 in fiscal year 2007; 125 megawatts each, for a total of 250 megawatts a year, by uprating or adding capacity to Browns Ferry Units 2 and 3; approximately 15-30 megawatts of capacity a year through 2015, or a total of approximately 150 to 300 megawatts of annual capacity by the end of the TFO reduction period, by continuing to modernize its hydropower facilities; 36 total megawatts a year by uprating the Raccoon Mountain Pumped 16 total megawatts a year by uprating the Cumberland Fossil Plant through 2010. TVA also plans to meet future needs by continuing to purchase low-cost power from the Southeastern Power Administration and through other long-term contracts. In addition, TVA plans to purchase power from the market when it is cheaper than generating its own power. Even with these plans in place, TVA expects that it will still need new baseload capacity beginning in 2015. TVA officials told us they will consider partnering with others to help finance the acquisition of new assets or they will consider building new assets themselves if they cannot find a suitable partner. TVA expects a partner would help share risk. Although the benefits, costs, and risks would vary depending on the type of partnership it eventually enters into, according to TVA officials, forming a partnership would help meet new demand for electricity while reducing the cash requirements for building new generating assets. As of April 2006, TVA management did not have any firm plans for a partnership, but were discussing potential partnerships with several interested parties. 
One partnering option TVA is considering involves working with the NuStart Consortium, which selected TVA's Bellefonte site as one of the two potential sites in the country for a new advanced-design nuclear plant. In the late 1980s, TVA stopped construction on Bellefonte, a nuclear plant that has never been operated. NuStart plans to use the Bellefonte site, as well as one other potential site, in applications for licenses it plans to submit for new nuclear plants, but to date no decisions have been made to construct a plant. Another option being considered by TVA is entering into a partnership with another industry consortium to build an Advanced Boiling Water Reactor on the Bellefonte site.

TVA and TVPPA also indicated that TVA's customers are interested in partnering with TVA. Partnering with a customer would allow TVA to earn fee income for operating a new generating asset, while its customer would finance and own all or a share of the asset. TVA officials also noted that TVA's customers have not owned generating assets before and, as a result, may not have the needed in-house expertise or be familiar with the risks involved. Despite ongoing conversations between TVA and potential partners, however, there are no current firm plans to partner with another party, and TVA could not provide us with the criteria it would use in selecting partners. As a result, it is difficult to determine TVA's likelihood of finding suitable partners to help meet the growth in demand projected in its service territory.

One of TVA's largest distributors noted that TVA could also pursue other options to reduce the demand for power. These include giving customers access to a portion of their power needs from other suppliers or changing the rate structure to provide incentives to reduce the peak demand for electricity.
In 2002, we reported that TVA’s demand-side management programs, which are designed to reduce the amount of energy consumed or to change the time of day when it is consumed, were limited in scope and impact when compared to similar programs managed by other utilities and recommended that, as appropriate, TVA expand its demand-side management programs. TVA officials told us they have continued to expand the use of demand-side management programs, which will reduce the amount of power TVA would need to generate or purchase from the market. TVA’s decision to complete Browns Ferry Nuclear Unit 1 reversed a policy dating from the late 1990s to rely primarily on purchasing power from other power suppliers when its own power system cannot meet demand. Building new capacity itself provides two potential key benefits for TVA. First, TVA would likely be able to generate power at a lower cost than purchasing a like amount of power from other utilities, thereby reducing its cost of power. Second, a decision to build new generating capacity would give TVA control over its source of power and remove the uncertainty of having to rely on other utilities for power. It would reduce the chances that TVA would need to purchase power from the market when there may be limited excess capacity and high prices, but would increase the risk that its generating costs could be higher than market prices. According to TVA, if it can recover the cost of building new generating assets through rates, increased demand would have no effect on its ability to meet its TFO reduction goal. However, TVA officials acknowledged the need to be sensitive to rate increases, stating that raising rates too quickly could trigger action that would jeopardize its relationship with customers and ultimately threaten its current monopoly status. TVA’s $7.1 billion goal for reducing TFOs through 2015 assumes that any demand for power not met by its generating capacity will be purchased from the marketplace. 
TVA’s 1997 business plan also assumed that it would not invest in any new generating capacity. Ultimately, the need to build its own additional generating capacity in lieu of purchasing power from the market in the late 1990s meant that TVA increased its capital expenditures and reduced the amount of cash available for debt reduction, which contributed to its failure to meet the debt reduction goal in its 1997 business plan. Although TVA currently has no specific plans to build new generation, any decision to build new generating assets would likely affect its ability to fully meet its TFO reduction goal. TVA’s TFO reduction goal was based on a strategic planning approach and an assessment of market risks and projected cash flow. As with any effort that incorporates economic models, there are some limitations and areas where the models could be improved. While TVA’s key business assumptions are reasonable, holding them fixed, rather than modeling them as variable assumptions, limits the range of outcomes from the model. Modeling different scenarios under which TVA may need to meet new environmental regulations or pay for new capacity, for instance, would allow TVA to better illustrate the possible range of outcomes, and thus the uncertainties of many factors in its plan. In addition, while TVA uses a variety of sources to estimate key variables, expanding those sources in the case of commodity prices would provide a more comprehensive characterization of the range of possible TFO reduction levels in situations where markets are volatile. Finally, given the numerous factors that could affect TVA’s ability to meet its goal, management’s continued commitment to reducing TFOs will be necessary to keep TVA on course. 
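The distinction drawn above between holding an assumption fixed and modeling it as a variable can be illustrated with a minimal Monte Carlo sketch. All figures, probabilities, and distributions below are hypothetical placeholders chosen for illustration only; they are not TVA's actual Enterprise Risk Model inputs.

```python
import random

def simulate_tfo_reduction(n_trials=10_000, seed=42):
    """Illustrative Monte Carlo: treat two assumptions that a plan might hold
    fixed (a plant restart date; new environmental legislation) as random
    variables, and report a 5th-95th percentile band of TFO-reduction
    outcomes in billions of dollars. All numbers are hypothetical."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        base = 7.1  # planned TFO reduction goal, $ billions (from the plan)
        # Assumption made variable: restart slips 0-2 years; each year of
        # delay is assumed (hypothetically) to forgo ~$0.3B of reduction.
        delay_years = rng.uniform(0.0, 2.0)
        # Assumption made variable: new environmental legislation occurs with
        # an assumed 30% chance, imposing $0.5B-$1.5B in compliance costs.
        env_cost = rng.uniform(0.5, 1.5) if rng.random() < 0.30 else 0.0
        outcomes.append(base - 0.3 * delay_years - env_cost)
    outcomes.sort()
    return outcomes[int(0.05 * n_trials)], outcomes[int(0.95 * n_trials)]
```

Reporting the percentile band rather than a single point estimate is what lets a plan "better illustrate the possible range of outcomes": a fixed-assumption run would return only the base figure, hiding the downside scenarios.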
We are making two recommendations to the Chairman of the Board of Directors of the Tennessee Valley Authority to (1) explore additional data sources for estimates of key input variables in the Enterprise Risk Model, and (2) better illustrate the range of outcomes in the Enterprise Risk Model used for planning purposes. Specifically, we are recommending that: TVA consider incorporating the variability surrounding certain assumptions that are now held fixed, such as the starting date for Browns Ferry Nuclear Unit 1 or possible new environmental legislation. In cases where professional judgment is used to quantify the uncertainty, the effects of incorporating that judgment should be documented. TVA augment its sources for projections of key model inputs, such as commodity prices and the volatility of those prices. Market prices for commodities with active futures and options markets can be used to determine the expectations of market participants concerning prices and their volatility. In written comments on a draft of this report, TVA’s Acting Chief Executive Officer, President, and Chief Operating Officer agreed with our report and recommendations. We also discussed technical comments with TVA officials, which we have incorporated into the final report as appropriate. TVA’s written comments are reproduced in appendix I. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to appropriate House and Senate committees, interested members of the Congress, TVA’s board of directors, and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-6131, or martinr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Donald Neff (Assistant Director), Lisa Crye, Austin Kelly, Mary Mohiyuddin, and Brooke Whittaker made key contributions to this report.
Competition in the electricity industry is expected to intensify, and restructuring legislation may dramatically change the way electric utilities do business in the future. To be competitive, the Tennessee Valley Authority (TVA) needs to reduce fixed costs and increase its flexibility in order to meet market prices for power. TVA plans to reduce its financing obligations, which include statutory debt and other financing arrangements, by $7.1 billion by the end of fiscal year 2015. GAO was asked to (1) describe how TVA plans to meet its goal for reducing financing obligations, (2) assess the reasonableness of TVA's approach in developing its plan, (3) identify key factors that could impact TVA's ability to successfully carry out its plan, and (4) identify how TVA's plans for meeting the growing demand for power in the Tennessee Valley may impact its ability to reduce financing obligations. To fulfill these objectives, GAO interviewed TVA officials and others, and reviewed budget submissions, financial projections, and other documentation supporting the plan. TVA plans to reduce its financing obligations by about $7.1 billion from fiscal years 2004 through 2015 by increasing revenue, controlling the growth of its operating expenses, and limiting capital expenditures. TVA's financing obligations include statutory debt, which it plans to reduce by $6.7 billion, and alternative financing obligations such as energy prepayments, which it plans to reduce by $0.4 billion. Overall, GAO's review found TVA's approach to developing its plan to reduce financing obligations reasonable. TVA performed detailed competitive analyses and modeled different market scenarios to estimate its future competitive environment, then used its internal budget process to project annual cash flows and refine its goal with a cash-based accounting model. Many of the variables used in the models were based on recognized data sources. 
Augmenting these sources with prices from options markets could provide more accurate estimates in volatile markets. TVA also made fixed assumptions about actions it would take, such as building new power generation, and events, such as the advent of new environmental regulations. While these assumptions are reasonable, they carry uncertainty that is not reflected in the model. Modeling them as variables might better reflect that uncertainty and provide broader information for planning purposes. GAO identified several key factors that could impact TVA's ability to successfully carry out its plan. Factors such as the timing of electricity industry restructuring, potential increases in interest rates, and the costs of meeting potential new environmental requirements are difficult for TVA to control. TVA has more control over other key factors, such as its decisions on whether or not to construct new power generating facilities before 2015 and to limit operating and maintenance expenses, but these are also affected by outside forces and contain an element of uncertainty. Future rate increases and a fuel-cost adjustment clause are factors that should help cover any unforeseen costs, capital expenditures, or revenue shortfalls. TVA's plan includes the capital expenditures it believes will be needed to expand capacity of existing generating facilities to meet the growing demand for power in its service area through 2015; however, any new or unplanned expenditures prior to 2015 could lessen TVA's ability to achieve the $7.1 billion goal. By 2015, TVA has estimated that it will need more baseload generation to meet growth in demand. TVA officials are considering a number of options to meet this projected increase in demand for power, including partnering with outside parties to build new generation. 
TVA's current projections assume that it will not invest in any new generation through 2015 other than restarting Browns Ferry Nuclear Plant Unit 1; however, any new or unplanned capital expenditures could use cash otherwise intended to be used to reduce financing obligations.
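On the recommendation above that TVA draw on futures and options markets to gauge market participants' expectations of prices and their volatility: one standard way to extract a market-implied volatility from a quoted option price is to invert the Black-Scholes formula numerically. The sketch below is generic and illustrative (a European call priced and then inverted by bisection); it is not part of TVA's actual Enterprise Risk Model, and the inputs are hypothetical.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call: spot S, strike K,
    # time to expiry T (years), risk-free rate r, volatility sigma
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=3.0, tol=1e-8):
    # The call price is monotonically increasing in sigma, so bisection
    # finds the volatility the market "implies" by the quoted price.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

A volatility recovered this way reflects the consensus expectation embedded in traded prices, which is the kind of forward-looking input the recommendation suggests adding alongside historical data sources.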
China became the 143rd member of the WTO on December 11, 2001, after almost 15 years of negotiations. These negotiations resulted in commitments to open and liberalize its economy and offer a more predictable environment for trade and foreign investment in accordance with WTO rules. The United States and other WTO members have stated that China’s membership in the WTO provides increased opportunities for foreign companies seeking access to China’s vast market. China is already a major destination of U.S. investment overseas, and China and the United States are already major trading partners. The results of China’s negotiations to join the WTO are described and documented in China’s final accession agreement, the Protocol on the Accession of the People’s Republic of China, which includes the accompanying Report of the Working Party on the Accession of China, the consolidated market access schedules for goods and services, and other annexes. China’s WTO commitments are complex and broad in scope. They range from the rules for how China’s trade regime will be reformed in accordance with WTO principles to specific market access commitments for goods and services. A number of commitments are to be phased in over 10 years. Commitments related to reforming China’s trade regime require a specific action from China, such as reporting particular information to the WTO, while others are more general in nature, such as those that affirm China’s adherence to WTO principles. Many commitments seek to improve the rule of law. Generally, China agreed to ensure that its legal measures would be consistent with its WTO obligations. These rule of law-related commitments include broad reforms to publish and translate trade-related laws and regulations and apply them uniformly at all levels of government and throughout China. 
China will have to adhere to internationally accepted norms to protect intellectual property rights and enforce relevant laws and regulations relating to patents, trademarks, copyrights, trade secrets, and integrated circuits. China also made a substantial number of other rule of law-related commitments regarding transparency, judicial review, and nondiscriminatory treatment of businesses. The accession agreement also includes market access commitments for goods, including commitments that will reduce tariffs on agricultural and industrial products from about 14 percent in 2001 to less than 10 percent in 2010, as well as commitments to reduce or eliminate many other trade barriers such as quotas or licensing requirements on some of these products. Further, China made commitments to allow greater market access in 9 of 12 general service sectors, including sectors that are important to U.S. companies such as banking, insurance, and telecommunications. However, some limitations, including those that require joint ventures with Chinese partners in some sectors or restrict the amount of foreign investment, will continue. It is important to note that, in addition to the commitments set forth in the accession agreement, WTO membership confers obligations and rights on China. For example, membership obligates China to adhere to more than 20 existing multilateral WTO agreements that cover various areas of international trade. China, like all other WTO members, must adhere to the WTO’s three main agreements governing key areas of international trade: (1) the General Agreement on Tariffs and Trade, (2) the General Agreement on Trade in Services, and (3) the Agreement on Trade-Related Aspects of Intellectual Property Rights. 
Other specialized multilateral WTO agreements that apply to China include the Agreement on Trade-Related Investment Measures, the Agreement on Agriculture, the Agreement on Technical Barriers to Trade, and the Agreement on Subsidies and Countervailing Measures. Numerous sections of China’s protocol and working party report refer to or reiterate specific provisions of a number of these underlying WTO agreements. Membership also gives China various rights under WTO rules. For example, the Understanding on the Rules and Procedures Governing the Settlement of Disputes gives China access to a formal mechanism for resolving disputes over WTO trade-related issues. Eventual implementation of China’s WTO commitments should result in greater freedom for American firms to invest and trade in China, according to the U.S. Department of Commerce. The United States was the second largest foreign investor in China in 2000. China was the ninth largest destination for U.S. exports in 2001. In 2001, U.S. companies exported about $18 billion of merchandise to China. The major exports included transport equipment, electrical machinery, office machines, general industrial machinery, oilseeds, and fruits. Figure 1 shows the increasing level of U.S. foreign direct investment in China. Figure 2 shows the increasing levels of U.S. trade with China. See appendix V for additional information regarding U.S.-China investment and trade. U.S. companies reported that many WTO-related commitment areas were important to them, according to our survey of U.S. companies with business activities in China. The 30 commitment areas included market access, investment measures, fundamental market reforms, and rule of law-related reforms. Thirteen of the 30 commitment areas were important to a majority of the responding companies. Commitment areas related to enhancing the rule of law emerged as most important both for those who responded to our survey and for those who took part in our structured interviews. 
The relative importance of some other commitments varied among manufacturing and services firms but was consistent with the nature of their operations. In interviews, company representatives emphasized and explained the importance of key commitments. We found that survey respondents were particularly focused on reforms related to enhancing the rule of law. At least three quarters of respondents selected intellectual property rights; consistent application of laws, regulations, and practices; and transparency most frequently when asked to rate the importance of individual commitment areas to their companies (based on a list of 30 WTO-related commitment areas that we specified in the survey). These results did not materially change when we compared the survey results for small- and medium-sized companies and large companies. Other than those related to rule of law, respondents most frequently selected trading rights (the right to import or export); tariffs, fees, and charges; and scope of business restrictions as the commitment areas important to their companies. All 30 commitment areas were important to at least one quarter of our respondents. Table 1 shows the number of survey respondents that said each WTO-related commitment area was important to their company. The relative importance that companies assigned to rule of law-related commitment areas (compared to those related to other reforms) generally remained consistent for agricultural, manufacturing, and services firms. These three types of firms most frequently identified rule of law-related commitments as important. However, given the nature of their businesses, these firms differed in the areas that they identified next most frequently. Manufacturers assigned greater importance to tariffs, fees, and charges. Manufacturers as well as agricultural firms also assigned greater importance to trading rights (the ability to import and export) and customs procedures and inspection practices. 
This is consistent with the needs of manufacturers to move goods into and out of China. Services firms assigned greater importance to commitment areas related to the scope of business restrictions and market access for services, consistent with the historical limitations on their ability to operate in China and their need for approval from the Chinese government in order to do business there. Structured interviews with representatives of U.S. companies in China generally corroborated the survey results regarding important WTO-related commitment areas. Many survey respondents and company representatives we interviewed emphasized the importance of China’s rule of law-related reforms to the functioning of their operations in China and pointed to specific problems with current laws, regulations, and practices. For example, in order to explain the importance assigned to certain commitment areas, one company representative noted that because licensing procedures in China are notoriously opaque, transparency is a priority for his company. One survey respondent’s written comments emphasized the importance of equal treatment under the law, saying that “getting a level playing field for U.S. business versus Chinese competitors is critical.” Another representative of a large multinational company told us that rule of law-related reforms are important, because China’s application of product standards and other measures varies greatly between localities. Furthermore, this company representative said that Chinese officials often promulgate regulations and procedures without sufficient comment periods, the regulations themselves are often deliberately vague, and intellectual property rights (IPR) violations are still rampant. Another individual told us about the biggest problem his company had encountered in China, which involved the rule of law. Several years ago, his company won a court judgment against a client who had failed to pay on a contract. 
Despite winning the judgment, this U.S. company had been unable to collect the debt, and the court had failed to take steps to enforce the judgment. In addition to rule of law-related reforms, several company representatives also explained the importance to their companies of a number of other reforms, including (1) reductions in tariffs and nontariff barriers (market access) and (2) the liberalization of investment-related measures. First, several manufacturers discussed the importance of obtaining increased market access through tariff reductions. One representative of a food manufacturer explained that China’s implementation of its WTO commitments would make it easier and cheaper to import the ingredients used in his company’s products. Another representative of a manufacturing firm reported that high tariffs were one of the most important issues for his company. He noted that although tariffs have already come down on several imported products, his company still pays high tariffs on certain other products. Concerning nontariff barriers and reforms to China’s quota system, one company representative explained that his company sometimes encounters problems in selling products in China because quotas reserve a portion of the market for domestic manufacturers. This has caused problems because it led to inventory overruns for his company. Second, several representatives of various U.S. companies in China also highlighted the importance of other reforms, such as commitments liberalizing investment-related measures. For example, one company explained that repatriating profits made in China still involves a lengthy process. Representatives of another U.S. multinational company that we interviewed complained about the difficulties that China’s restrictive foreign exchange regime creates. This company maintains a separate holding company in order to move renminbi (China’s currency) from one joint venture to another within China. 
Finally, several companies that provide services in China pointed out that numerous commitment areas are important to them (indirectly) because they are important to their clients’ ability to do business in China (directly). Respondents to our survey expected China’s WTO accession and implementation of its commitments to improve their ability to do business in China and to increase their business with China. More than three quarters of the companies generally expected a positive impact on their business from China’s WTO commitments, and most expected to begin to see this impact relatively soon—though some company representatives indicated that the full impact would take time. Companies identified their current business goals for China, and most companies we interviewed said that their company’s goals had already changed to reflect China’s WTO membership. Furthermore, companies generally expected that their business activities in China would increase as a result of China’s implementation of its WTO commitments, including their volume of exports to China, market share in China, and distribution of products there. A majority of the companies participating in our survey and interviews expected a positive impact to eventually result from implementation of China’s WTO commitments. Some companies had already experienced an impact, and most companies expected to begin to experience an impact within the next few years. Companies provided a variety of explanations for their expectations regarding the eventual impact. When asked what impact they expected China’s WTO commitments would have on their companies, most survey respondents reported that they expected a positive or very positive impact. These positive expectations were generally consistent for firms in all sectors, but services firms had a somewhat higher percentage (93 percent) of positive and very positive expectations compared to manufacturing firms and agricultural firms (79 percent and 62 percent, respectively). 
Some respondents expected little or no impact on their business in China. A smaller group of respondents reported that they thought China’s WTO membership would have a negative impact on their business. Figure 3 shows the expected impact of implementation of China’s WTO commitments for respondents to both our mail survey and to the structured interviews in China. Most companies expected to begin to experience an overall business impact from China’s WTO commitments in the next few years, but this view was not unanimous. About one-fifth of survey respondents reported that they expected an immediate impact or had already experienced an impact. More than half of the survey respondents expected to begin to experience an overall business impact from China’s WTO commitments sometime from less than 1 year to 4 years. In contrast, a small fraction of respondents expected to begin to experience an impact in 5 to 10 years or never expected to experience an impact from China’s WTO commitments. About one-tenth of respondents did not know or had no basis to judge how soon their companies might begin to experience an impact. Table 2 summarizes these company responses. Company representatives that we interviewed in China made comments that lend support to the general expectation that change will be positive, but many believed that China’s implementation would be incremental and would be tempered by China’s desire to protect its workers. One representative said that his company planned to boost investments and build a national business in China. He explained that his company expected to obtain trading rights and national treatment in taxes as a result of WTO. Others explained that demand for services had already increased, that the trajectory of the reforms resulted in immediate benefits, and that the lowering of trade barriers had helped a great deal. 
One representative noted that some impact began before accession, but that the full impact from WTO implementation will be incomplete for some time to come. Similarly, another individual stated that all the commitments would eventually be implemented but that the time needed for implementing different commitments would vary considerably. A representative who agreed that it will take time for WTO concepts to take hold also said that the impact is dependent on how forceful the central government continues to be about WTO reform. Our survey of U.S. companies with a presence in China, which was conducted soon after China’s accession to the WTO, asked respondents to identify their current goals for doing business in China. The company goals that survey respondents most frequently identified were “establish a presence for the future,” “increase exports to China,” and “benefit from lower labor costs,” as shown in table 3. Identification of these three company goals follows logically from survey respondents’ identification of the commitment areas that their companies considered important. First, because many companies may want to establish a presence for the future, this may explain why rule of law reform is important. Second, many companies may want to increase exports to China. This may drive their interest in lower market access restrictions resulting from China’s WTO commitments. Third, some companies may want to take further advantage of China’s low wages once China implements some fundamental market reforms and liberalizes some investment measures. Even firms with a history of more than 10 years in China identified these goals as priorities for their companies. Manufacturing, agricultural, and services firms reported slightly different priorities in their companies’ most frequently identified goals. 
A larger percentage of manufacturers identified benefiting from lower labor costs (49 percent) and the cost or quality of raw materials (27 percent) in China as company goals compared to agricultural or services firms (38 and 18 percent, and 0 and 12 percent, respectively). Similarly, a larger percentage of manufacturers identified increasing exports to China (50 percent) as a company goal than either agricultural or services firms (38 percent and 27 percent, respectively). More than half of the companies that we interviewed in China reported that China’s official membership in the WTO had changed their company’s goals and expectations for future business opportunities in China to a moderate or great extent. Specifically, 16 company representatives said their goals had changed to a great extent while 12 reported their goals had changed to a moderate extent. When asked to describe the extent to which their company’s goals had changed, responses varied widely and focused on their companies’ operations. On one end of the spectrum were several companies whose operations had already changed as a result of China’s recent WTO membership. For example, one corporate representative in China noted that China’s WTO membership had resulted in big increases in demand for his company’s services and that his firm will also be allowed to provide a wider range of services to its clients in the future. In another example, a service provider looking forward to broader distribution rights speculated that if China had not gained WTO membership, his company would likely have withdrawn completely from China. At the other end of the spectrum were companies whose goals remained relatively unchanged. These companies focused on their ability to operate in China regardless of China’s WTO commitments. 
One company representative said, “We’re here, and we know how to operate.” This respondent emphasized that he hoped that China’s WTO membership would open new opportunities, but his company was not relying on that possibility. Other representatives said that, “WTO does not change our operations at all,” and that “we’ve been here a long time without WTO.” Another company representative explained that the future will be more open but that his company had already been planning for expansion for 5 to 10 years because China is a major focus of its Asia operations. Similarly, another company representative noted that “WTO will improve the market down the road, but the market exists with or without WTO.” Finally, representatives of several companies that we interviewed expressed uncertainty regarding how China’s WTO membership would affect their companies’ goals and operations in China. Several firms with agricultural interests or dependent on agricultural inputs for their manufacturing operations discussed their early concerns about China’s implementing rules and regulations, which indicated problems that could affect their goals. One firm cited China’s promulgation of agricultural regulations as well as recent prohibitions on investment in specific agricultural sectors. A representative of another agricultural firm noted that as a result of agricultural compliance issues that Chinese officials had raised since China’s WTO accession, his business had experienced greater difficulties doing business in China. A representative of a U.S. 
company operating a joint venture in China summarized his company’s uncertain expectations as follows: “Future goals depend on the whole dynamic of forcing the Chinese government to be more liberal and systematic as well as more fair in terms of legal protection.” In addition to expressing views on their companies’ goals in doing business with China, survey respondents also indicated how the implementation of China’s WTO commitments would affect their companies’ activities. About 85 percent of the respondents that completed this survey question expected their companies’ overall activities in China to greatly or somewhat increase. Almost 15 percent of the respondents expected their overall activities to stay the same, while only one respondent expected overall business activities to greatly decrease. Expectations for how the implementation of China’s WTO commitments would affect specific company activities are shown in figure 4. Corporate representatives whom we interviewed reiterated these expectations for an overall increase in company activities. Some companies told us that they hoped to increase their manufacturing base, their number of offices in China, and/or their investment in China in the next few years. Other companies predicted that implementation of China’s WTO commitments could create many additional business opportunities. One company representative noted that there is currently a “boom mentality” in China. For a few business activities, survey respondents reported mixed expectations regarding how they would be affected by the implementation of China’s WTO commitments, as shown in figure 4. Specifically, an almost equal percentage (about 50 percent each) of the respondents reported that the geographic diversity of their companies’ investments in China would either increase or stay about the same. 
With regard to investment in existing facilities in China, 50 percent of the respondents expected their companies’ activities to increase, while 47 percent expected their companies’ activities to stay about the same. Fifty-four percent of the respondents indicated that their investment in new facilities in China would increase, while 44 percent indicated that their investments would stay about the same. At least 5 percent of the companies responding expected two activities to decrease. First, ventures with Chinese partners will reportedly decrease for 16 percent of the respondents. Implementation of China’s WTO commitments will allow foreign-invested enterprises in some sectors to operate wholly owned foreign enterprises rather than being restricted to joint ventures. For example, a services firm that we interviewed reported that it has been restricted by requirements that it have Chinese partners but hopes to transition to a wholly owned foreign enterprise in 3 to 5 years. Second, competition from foreign or Chinese firms located in China will reportedly decrease for 9 percent of the respondents. One respondent addressed his company’s hopes that WTO will “level the playing field” for U.S. companies in China. For example, leveling the playing field for U.S. companies could include encountering decreased competition from China’s state-owned enterprises as market forces lead to the further closure of inefficient and unprofitable companies and as foreign-invested (joint venture) enterprises lose the benefit of discriminatory practices once China’s WTO-related reforms are implemented. U.S. companies with business activities in China expected a number of impediments to implementation of China’s WTO reforms. Views varied concerning the expected ease or difficulty of implementing specific commitment areas, but we were able to identify some common themes from our survey and structured interviews. 
Companies generally expected many of China’s WTO commitment areas to be relatively difficult for Chinese officials to implement, especially many of those they considered important. Large numbers of respondents expected reforms to be generally difficult or did not know what to expect in terms of WTO implementation throughout specific locations in China and/or in terms of China’s various levels of government. Companies also described numerous challenges that China faces as it continues to reform its economy and implement its WTO commitments. Companies with a presence in China had different expectations regarding Chinese officials’ implementation of particular WTO commitments. Our analysis of survey responses showed that 7 of the 30 commitment areas listed in the survey ranked “High” in difficulty, 13 commitment areas ranked “Medium” in difficulty, and 10 of the commitment areas ranked “Low” in expected difficulty for the Chinese government to implement. Furthermore, when asked whether each WTO commitment area would be easy or difficult for the Chinese government to implement, respondents expected that two-thirds of the commitment areas listed in the survey would be relatively difficult to implement. However, large numbers of respondents (sometimes as many as 50 respondents) did not know what to expect concerning the ease or difficulty of specific commitment areas or said some were not applicable to their business. Companies most frequently identified five commitment areas related to rule of law reform as the WTO commitments that they expected to be relatively difficult to implement. Respondents also expected some non-rule-of-law-related commitment areas to be relatively difficult for Chinese officials to implement. These included reforms to the operation of state-owned enterprises, which are related to fundamental market reforms; and standards, certification, registration, and testing requirements, which are related to market access reforms. 
On the other hand, most of the commitment areas that are related to reforming China’s foreign investment measures were considered relatively less difficult for China to implement. Table 4 shows the company responses indicating the commitment areas deemed difficult for the Chinese to implement (in terms of the number of “difficult” responses each commitment area received). Our interviews with representatives of U.S. companies in China generally supported these findings. In general, the commitment areas expected to be the most difficult to implement were those that survey respondents also identified as most important. Specifically, our analysis identified rule of law-related commitment areas as the commitment areas of greatest importance to respondents’ companies and also showed that respondents expected these reforms to be the most difficult for China to implement, as shown in table 5. Reforms to China’s state-owned enterprises (SOE) were also expected to be difficult for China’s government officials to implement but were relatively less important to survey respondents. Companies we interviewed explained that reforming SOEs creates a huge challenge for the Chinese and will take a number of years, but that other reforms will have a more immediate impact on their companies’ ability to do business in China and are consequently more important to them. Our analysis identified another group of commitments respondents expected to be both moderately important and moderately difficult for the Chinese to implement. This group comprises 37 percent of the commitment areas and includes a range of fundamental market, market access, and investment measure-related reforms. Some commitment areas related to these three types of reforms may appear easier for Chinese officials to implement because, in some cases, the reforms require specific regulatory or legal changes rather than more systemic legal changes associated with rule of law. 
Finally, our analysis showed that various other commitment areas were expected to be relatively less difficult for the Chinese to implement. However, while tariffs, fees, and charges and trading rights were expected to be less difficult for Chinese officials to implement, these commitment areas were still rated “High” in importance. It is important to note that all the commitment areas that emerged as low in both importance and difficulty were still important to at least 20 percent of survey respondents. Companies we surveyed also expressed mixed views regarding the ease or difficulty expected for implementation in various locations and for different levels of government in China. For example, almost as many respondents expected reforms to be very or somewhat easy for the national/central level of government as those who expected reforms to be very or somewhat difficult. Similarly, an almost equal number of respondents expected officials in China’s major cities to have an easy or difficult time making reforms. Nevertheless, survey respondents expected China’s local and provincial governments and autonomous regions to have the most difficulties implementing reforms. Table 6 provides a summary of these responses. Company representatives whom we interviewed provided a number of explanations for these expectations by level of government. For example, one company representative compared China to Europe, in that each Chinese region is like a different country and they each have different rules and regulations. Another company official explained how differences within China play out at the local level. He believed that the local officials have a different mind-set, because existing approval processes are their livelihood (he cited import inspections as an example). In his view, local officials want to protect the domestic market and local suppliers from losing jobs. 
A specific example provides further context for understanding how these differences relate to specific business operations. As one company representative explained, he believed that there is no unified customs system and therefore, customs procedures will be slow to change. In his view, state directives often do not get to local customs authorities and authorities may interpret the directives differently. Another reason that might help explain the varied expectations by level of government is that many respondents’ knowledge about different parts of China is limited. Specifically, more respondents selected “Don’t know” than “Yes” or “No” for 24 of the 26 locations listed in the survey when asked whether they expected that reforms would be relatively difficult for the Chinese to implement. Beijing and Shanghai were the only locations where more respondents expected reforms to be relatively easy to implement (compared to the number of respondents who did not know whether reforms would be difficult and the number of respondents who expected reforms to be difficult to implement). With regard to Beijing, 43 percent of the respondents expected reforms would be relatively easy to implement, while 22 percent of the respondents expected reforms to be difficult to implement. In Shanghai, 54 percent of the respondents expected that reforms would be relatively easy to implement, while 15 percent of the respondents expected reforms to be difficult to implement. Respondents also expected reforms to be more difficult to implement in locations where fewer U.S. companies have a presence in China. These locations included the western provinces (Shaanxi, Gansu, Qinghai, Ningxia, Xinjiang, and Tibet), Guizhou and Yunnan, Sichuan, Heilongjiang, and Inner Mongolia. Companies’ overall positive expectations for their future in China were tempered by their views regarding the domestic challenges China faces as it implements WTO-related reforms. 
Company representatives described the numerous challenges China faces during implementation of its WTO commitments. Their comments covered a spectrum of opinions ranging from the broad view that, overall, China’s accession would be great for China and the United States, to less enthusiastic expectations based on their companies’ experiences in China. One representative said that WTO has huge potential for his company and implementation of China’s commitments could radically improve his company’s ability to operate. Another representative explained how expectations were somewhat mixed within his own company: “It should be positive, but our immediate expectations are not very high. The intentions are good, and the Chinese have made radical reforms. The desire is there on the part of the Chinese.” Other U.S. companies noted a somewhat neutral reaction to China’s early months of WTO membership. A representative of one U.S. multinational that has invested in China since the 1980s said that WTO has not improved matters much for his company. A service provider said his company has held off on optimistic expectations, because the results of implementation could be positive or negative depending on available opportunities to provide services in China and regulations forthcoming from the Chinese government. Another respondent expressed his company’s skepticism as follows: “… very concerned that China will do as they see fit, without regard to the WTO agreement.” Other respondents referred to the uncertainty of doing business in China through observations that implementation would extend only as far as necessary, that policies on paper often differ from actual practice, that particular government agencies will make implementation difficult despite seeming to comply, and that the pace of change and degree of compliance will likely be irregular. 
Company representatives’ open-ended comments regarding the challenges ahead generally focused on three common explanations for these anticipated challenges: China’s ability to implement rule of law-related reforms, China’s need to protect its domestic interests, and China’s culture and its emphasis on relationships. Several company representatives explained why they believe that China will experience the greatest challenges in implementing its rule of law-related WTO commitments. One respondent to our survey characterized China’s current “standard” approach to legislation as follows: “Pass a gray law, see how it goes, modify interpretation accordingly.” Another respondent noted that “for rule of law issues, there isn’t a structure in place and a structure won’t be in place for a long time.” Several other company representatives whom we interviewed provided numerous examples to illustrate the difficulties anticipated with other rule of law-related reforms. Other explanations focused on the inherent barriers to transparency, such as China’s history and culture; the evolutionary nature of the issue (for example, several respondents said that transparency is not a common concept in China); and a tendency to draft regulations and procedures that are deliberately vague. One company representative reported that Chinese agencies are likely to provide conflicting answers to business inquiries. In addition, several companies also cited consistent application of laws as a potential difficulty. At the central government level, China is committed to implementing WTO, but all cities and provinces understand WTO differently, in the view of one U.S. corporate representative. According to another interview respondent, China is so big that even if there is a central government policy, the policy may still vary throughout the country at different levels of government. 
Other company representatives cited reforms to enforcement of laws and regulations relating to intellectual property rights (IPR) as another area fraught with difficulty. Respondents noted that Chinese officials have enacted reforms resulting in overall improvements to IPR protection, but that challenges continue. One company provided a specific example of a current problem that is expected to continue: “Local companies copy our product and packaging, even though these companies are operated by the local governments. Trademark registration takes at least 6 to 8 months. Even if a company wins a copyright case, you can’t get the government to enforce the copyright violation, so it’s not worth the time and money expended on filing a case.” IPR violations were also described by one company representative as a game embedded in China’s culture. Several respondents predicted that IPR would remain a problem in the years to come, because the government will not risk destabilizing the labor force by enforcing IPR laws and regulations. Company representatives also described the domestic challenges China faces in protecting its domestic interests while implementing WTO commitments. Company representatives told us that they expected WTO reforms to be part of a long-term process, but they believed that the Chinese leadership is dedicated to living up to their WTO commitments. One individual noted that any change must be considered within the broader context of China’s political and economic environment. In his words, “The final test will be how such change affects or impacts on domestic interests including the domestic economy and political interests and pressures.” Several respondents referred to the balancing act that China faces as it implements reform and dismantles state-owned enterprises while still needing to protect the labor force and maintain economic stability. 
One company assessed the likely result by saying that while China may comply with the letter of its commitments, the process of reforming sectors populated by state-owned enterprises will be slow in order to protect those industries and to avoid displacing large numbers of workers. One individual said that China will not forgo domestic stability to meet WTO requirements and therefore, domestic firms will continue to be favored. A number of companies noted the need to remain attentive to cultural differences in doing business with China and their expectations that implementation will be a lengthy and gradual undertaking. Several corporate representatives mentioned the continued importance of personal relationships, known as “guanxi” in China. For example, one respondent noted that in China relationships are more important than policy and policy is more important than the rule of law. Another respondent echoed this theme and encouraged patience with the Chinese and their culture. Specifically, with reference to China’s WTO commitments, he said, “Patience and understanding will reap rewards. China bashing will not.” Another company representative advised that although it might be necessary to apply pressure to ensure that Chinese officials adhere to WTO rules, it is important to do so in a manner that does not cause officials to lose face. If implemented, China’s commitments will open China’s economy and reform its trading activities, thereby expanding U.S. companies’ opportunities for investing in China and for exporting goods, agricultural products, and services to China. Understanding U.S. companies’ expectations is fundamental for policymakers to judge the degree to which the benefits of China’s WTO membership are being realized. As you have requested, over the next several years we will continue to gather the views of the business community regarding China’s implementation of its WTO commitments and report the results to you. 
We are sending copies of this report to interested congressional committees. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VII. The Chairman and the Ranking Minority Member of the Senate Finance Committee and the Chairman and the Ranking Minority Member of the House Committee on Ways and Means asked us to undertake a long-term body of work relating to China’s membership in the World Trade Organization (WTO) and the monitoring and enforcement of China’s commitments. This work includes examining, through annual surveys, the experience of U.S. firms doing business in China. The 2002 GAO Survey of U.S. Companies on China-WTO Issues and related interviews discussed in this report provided an opportunity for us to assess business views and expectations for their work in China while also exploring methods for soliciting business views on an annual basis in the future. Our objectives for this initial preparatory survey were to assess U.S. businesses’ (1) views about the importance of WTO-related commitments to their business operations in China, (2) views about the anticipated effects of China’s WTO-related reforms on their businesses, and (3) opinions regarding China’s prospects for implementing these reforms. To respond to our objectives, we mailed surveys to 551 selected chief executive officers (CEO) or presidents of U.S. companies with a presence in China; conducted structured interviews in China with representatives of 48 U.S. companies, two foreign-owned companies with operations in the United States and China, and representatives of two U.S. trade associations with representative offices in China; met with representatives of other U.S. 
business associations in China and the United States; and considered other surveys of U.S. businesses in China. We reviewed relevant documents related to China’s WTO accession and implementation to understand the context for questions included in our survey. These documents included China’s accession agreement, referred to as the Protocol on the Accession of the People’s Republic of China. This is a set of legal documents totaling more than 800 pages that describes China’s WTO commitments. It includes the protocol itself and the accompanying Report of the Working Party on the Accession of China. These documents describe how China will adhere to WTO principles and technical guidelines. Additionally, the agreement includes schedules for how and when China will grant market access to foreign goods and services, and several other annexes. We also consulted with high-level officials from several public and private sector agencies, organizations, and firms to obtain input on question development, survey administration, and prior surveys on related issues. Specifically, we refined our survey based on consultations with U.S. government agencies and offices including the Office of the U.S. Trade Representative, the Department of State, the Department of Commerce (International Trade Administration and Bureau of Economic Analysis), the Department of Agriculture, and the Congressional Research Service to discuss draft survey questions and methods of survey administration. In addition, we circulated the draft survey instrument among and sought comments from the following organizations: the American Chambers of Commerce in the People’s Republic of China, the American Farm Bureau, the Coalition of Service Industries, the Emergency Committee for American Trade, the National Association of Manufacturers, the U.S. Chamber of Commerce, and the U.S.-China Business Council. 
Finally, we participated in a feedback session with representatives of three private sector firms with a presence in China prior to pretesting our draft survey instrument. We conducted telephone pretests with representatives of five U.S. firms with a presence in China, which resulted in additional refinements to the survey instrument. In the survey, we asked U.S. businesses to identify their business activities in China based on 25 agriculture/manufacturing categories and 18 services categories. For both agriculture/manufacturing and services, businesses could also choose an “other” category that allowed them to write in a description of their business activities in China. The agriculture/manufacturing categories were based on the Department of Commerce’s North American Industry Classification System. The services categories were based on the WTO’s services classifications and the U.S. Bureau of Economic Analysis’s services classifications used in its reports on services trade. The categories are shown in table 9 in appendix IV. Preparation of the sample for the survey on China-WTO issues required multiple steps. Our target population was the population of U.S. companies with a presence in China. However, consultation with the agencies, organizations, and firms previously mentioned confirmed that a comprehensive list of U.S. companies with investments in China did not exist. Consequently, we collected membership directories from the American Chambers of Commerce in the People’s Republic of China (Beijing; Chengdu, Sichuan; Guangdong; and Shanghai) and contact lists of American companies in China from the Department of Commerce’s U.S. & Foreign Commercial Service’s (FCS) offices in Beijing, Chengdu, Guangzhou, Shanghai, and Shenyang. We selected the names of all U.S.-incorporated companies from these lists and entered them into a database. 
In cases where the nationality of incorporation was not identified, we obtained this information to the extent possible through searches of publicly available information and contacts with individual companies in the United States and China. We combined company names from these directories into a single list. A total of 3,139 company names was included in the database after we combined the names of U.S.-incorporated companies from these sources. We excluded companies located in the United States, but whose ultimate parents were incorporated outside the United States (for example, we excluded companies incorporated in the British Virgin Islands). A total of 1,945 company names remained in the database after completion of various automated steps to identify duplicate listings such as listings of multiple subsidiaries of the same parent company. Additional manual searching of the company names in the database and various business directories identified subsequent duplicate entries and further reduced the list to 1,695 company names. This list of 1,695 company names represented known U.S. companies registered with one of the five FCS offices and/or a member of one of the four American Chambers of Commerce in China with publicly available membership lists. We then selected a random sample of 1,000 company names from the combined list of 1,695 companies. We chose to select a sample of 1,000 companies based on the expectation that an unknown number of the 1,000 companies would be identified as subsidiaries of other parent companies in the sample, or subsidiaries of companies not incorporated in the United States, and/or would not have contact information that could be located using publicly available information and individual contacts with the companies. 
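The list-combining, duplicate-removal, and random-selection steps described above can be sketched in a few lines of Python. This is a minimal illustration, not the procedure GAO actually ran: the `key` normalization rule and the tiny company list are hypothetical stand-ins for the directory data and for the automated and manual duplicate checks the team performed.

```python
import random

def build_sample(raw_names, sample_size=1000, seed=2002):
    """Sketch of the sample-preparation steps: combine directory
    listings, drop duplicate company names, then draw a simple
    random sample of up to `sample_size` names."""

    def key(name):
        # Collapse trivial variants ("Acme Corp." vs "ACME Corp") to
        # one entry -- a stand-in for GAO's duplicate-identification
        # steps, which also caught subsidiaries of the same parent.
        return " ".join(name.lower().replace(",", " ").replace(".", " ").split())

    deduped = {}
    for name in raw_names:
        deduped.setdefault(key(name), name)  # keep first spelling seen
    combined = sorted(deduped.values())

    rng = random.Random(seed)  # seeded for a reproducible draw
    n = min(sample_size, len(combined))
    return rng.sample(combined, n)

# Hypothetical illustration with a tiny list: the two "Acme" variants
# collapse to one entry, leaving three unique companies to sample from.
names = ["Acme Corp.", "ACME Corp", "Beta Industries", "Gamma LLC"]
sample = build_sample(names)
```

In the actual study, the analogous draw selected 1,000 names from the deduplicated list of 1,695, anticipating that some sampled names would later prove ineligible or unreachable.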
We searched a variety of sources, including the Leadership Directories’ Yellow Book, Nexis, and Internet search engines, in order to locate each company’s parent company name in the United States and/or to confirm the nationality of incorporation, locate corporate headquarters’ mailing addresses in the United States, identify the CEO or other most senior company officer’s name, and locate telephone numbers for reminder telephone calls. We also searched corporate Web sites, called companies, and looked at other business directories to locate mailing addresses, contact names, and telephone numbers. This process resulted in a final mailing list of 551 active U.S. companies with a presence in China and parent companies incorporated in the United States. The disposition of the random sample of 1,000 U.S. company names (after completion of our search for corporate contact information) is outlined in table 7. In 2002, we mailed a second copy of the survey to all companies from which we had not received a survey response. Telephone reminder calls to nonrespondents began on April 17, 2002, and continued through May 24, 2002. We mailed questionnaires to company CEOs and presidents at their headquarters offices in the United States, but a range of company officials in the United States and China, including managing directors, directors of international trade, and vice presidents, among others, completed the questionnaires. Of the sample of 551 U.S. companies with a presence in China that we surveyed, responses indicated that 505 were eligible for inclusion in the sample. We received 191 usable questionnaires from eligible companies, for an overall response rate of 38 percent (191 usable responses / 505 eligible sampled elements = 38 percent). Because of this low response rate, we restricted our analysis to the subset of firms that participated in our survey, and we did not make estimates about the larger population of all U.S. businesses with a presence in China. 
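The response-rate arithmetic above is a simple ratio of usable questionnaires to eligible sampled companies; a minimal sketch using the disposition figures reported above:

```python
def response_rate(usable, eligible):
    """Overall response rate: usable questionnaires divided by
    eligible sampled companies, rounded to a whole percentage."""
    return round(100 * usable / eligible)

# Disposition figures reported above: 191 usable responses
# from the 505 companies that were eligible for inclusion.
rate = response_rate(191, 505)  # 38
```

Note that a rate this low is why the results are reported unweighted rather than projected to the full population of U.S. companies in China.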
The low response rate to the survey would threaten the validity of estimates made using these data, particularly if those not providing data were materially different from those who did provide data. As a result, the representativeness of weighted estimates for the population might be subject to significant bias. Because the survey results represent only responses received and are not projected to the population of U.S. companies with a presence in China, sampling errors have not been calculated. Therefore, we present survey results in this report in unweighted form, representing only those firms that participated in our study and that provided answers to the individual questions analyzed. Other potential sources of errors associated with the questionnaires, such as question misinterpretation and question nonresponse, may be present. We included steps in the development of the questionnaire, the data collection, and data analysis to reduce possible nonsampling errors. Specifically, as previously discussed, we solicited feedback on a draft of the survey from numerous internal and external parties. We pretested the questionnaire with eligible representatives of U.S. companies with a presence in China to help ensure that our questions were interpreted correctly and that the respondents were willing to provide the information required. All nonrespondents received a follow-up copy of the survey and a follow-up telephone call. All data were double-keyed during entry. We performed computer analyses to identify inconsistencies or other indications of errors and had all computer analyses reviewed by a second independent analyst. The final disposition of the 551 surveys is presented in table 8. We conducted structured interviews with representatives of 48 U.S. firms in Beijing, Guangzhou, Shanghai, and Shenzhen, China. 
These structured interviews gave us an opportunity to discuss survey responses in greater detail as well as to gain an understanding for the context of these responses, to determine whether responses to these questions varied between interview respondents in China and survey respondents based in the United States, to discuss the questions and survey administration issues with survey nonrespondents, and to obtain information from firms not included in the survey sample (including firms that did not appear in our random sample). Consequently, the firms that we interviewed included survey respondents and nonrespondents as well as companies not included in the mail survey sample. In addition to these criteria, our invitation list for the interview sessions also represented a broad cross section of the business sectors invested in China. We also interviewed representatives from two U.S. trade associations with offices in China in order to gain further insight into the possible range of business views. We discussed topics during the interviews that included the anticipated effects of China’s WTO membership, WTO compliance issues, and background issues to help us with our future work. Tabulations of the interview responses were independently verified. All firms that we interviewed or surveyed were assured that their responses would remain confidential. In spite of this, due to the sensitive and/or proprietary nature of the topics discussed, it is possible that the data presented in this report reflect the views of respondents only to the extent to which they felt comfortable sharing them with an independent agency of the U.S. Congress. In addition, respondents had varied knowledge of China’s WTO commitments and their application to their line of business. We did our work in Washington, D.C., and in Beijing, Guangzhou, Shenzhen, and Shanghai, China. We performed our work from July 2001 to September 2002 in accordance with generally accepted government auditing standards. 
The United States General Accounting Office on your company’s experience in conducting (GAO), an independent agency of Congress, business with China. The questionnaire has been asked by Congress to study China’s should take about 20 minutes to complete. World Trade Organization (WTO) Please designate one person to have overall commitments. As part of this work, we are responsibility for completing and returning surveying U.S. companies with business this questionnaire for your company and interests in China. In particular, this survey provide the following information so we can seeks to obtain U.S. business views on (1) call or e-mail that person if additional reforms in China associated with information is necessary. implementing WTO commitments, (2) expectations for China’s implementation of WTO commitments, and (3) anticipated changes to business opportunities resulting from China’s WTO membership. Your responses will allow us to present meaningful information to Congress. This survey will serve as a foundation for future GAO efforts to advise Congress about the status of China’s compliance in implementing its WTO commitments. If you would like to return the questionnaire information that could identify individual by fax, please fax to: (202)-512-2550. If you respondents. Most of the questions in the survey can be chinasurvey@gao.gov. Thank you very much for your cooperation comments you would like to add to your and assistance. QUESTIONNAIRE TO GAO) 1. Less than 100 2. What types of business relationships does 2. From 100 to 249 your company currently have with China? Enterprise (WOFE) 8. Limited Liability Company 2. Less than 50 9. Company Limited by Shares 3. From 50 to 99 10. Other (Please describe.) 4. From 100 to 249 5. From 250 to 499 11. None of the aboveGo to Question 4 6. From 500 to 1,000 7. More than 1,000 3. How long has your company engaged in a business relationship with China? (Check 8. How soon does your company expect to make a profit in China? 
(Check one.) N=186
2. Within 3 years
3. Within 4 to 5 years
4. Within 6 to 10 years
5. Within 10 years
6. 10 years or more
8. Other (Please describe.)

[Question 3 response options:]
1. Less than 2 years
2. From 2 to 5 years
3. From 6 to 9 years
4. From 10 to 20 years
5. More than 20 years

4. Does your company import from China, export to China, both import from and export to China, or neither import from nor export to China? (Check one.) N=189
1. Only import from China
2. Only export to China
3. Both import from and export to China
4. Neither import from nor export to China → Go to Question 6

9. Which of the following categories describe your business activities with China? (Check all that apply.)
1. Agriculture, forestry, fish, & hunting
3. Beverages & tobacco
6. Petroleum & coal
7. Chemicals including pesticides, …
8. Plastic, rubber, clay, & related products
9. Wood & paper products
10. Furniture & related products
11. Textiles, apparel, accessories, & leather
12. Machinery (except electronic)
13. Computer & peripheral …
15. Audio & video equipment
16. Semiconductor & other electronic …
17. Navigational, measuring, medical, & …
18. Manufacturing & reproducing magnetic …
27. Banking & all other financial services
29. Professional services (legal, accounting, medical, etc.)
30. Computer, database, & related services
34. Retail / Franchises
35. Wholesale / Distribution
37. Construction & related services
40. Health-related & social services
41. Recreational, cultural, & sporting …
42. Tourism and travel-related services
43. Other (Please specify.)

… any? (Check one.) N=190
1. Very positive impact
…
3. Little or no impact
…
5. Very negative impact
6. Don't know/No basis to judge

… China's economic reforms during the past 5 years have improved the climate for U.S. businesses in China? (Check one.) N=191
1. To a very great extent
2. To a great extent
3. To a moderate extent
4. To some extent
5. To little or no extent
6. Don't know/No basis to judge

12.
How soon do you expect you will begin to experience an overall business impact on your company, if any, from China's WTO commitments? (Check one.) N=168
1. Immediately/Has already occurred
2. Less than one year
3. From 1 to 2 years
4. From 3 to 4 years
5. From 5 to 6 years
6. From 7 to 10 years
7. More than 10 years
8. Never, no effects expected
9. Don't know/No basis to judge

14. Currently, what are your company's goals in doing business with China? (Check all that apply.)
1. Establish a presence for the future
2. Benefit from lower labor costs in …
3. Benefit from foreign investment …
4. Benefit from the cost or quality of …
5. Establish a regional base in China
6. Expand a regional base in China
7. Establish a distribution network in …
8. Expand a distribution network in …
9. Increase exports to China
10. Other (Please specify.)

15. For each of the items listed below, how will your company's activities be affected by the implementation of China's WTO commitments (increase or decrease in investments, exports, etc.)? (Check one box for each item.)
…
10. Number of agents or …
11. Market share in China N=174
12. Competition from foreign or Chinese firms located in China
13. Other (Please describe and rate …
14. Overall business activities in China N=178

Please answer the following two questions for each of the Chinese locations listed in the grid below.

19. Does your company have a facility or other presence in this location? (Check one box for each item.)

20. Do you expect that reforms will be relatively difficult for the Chinese to implement in this location? (Check one box for each item.)

1. Beijing Q19, N=169; Q20, N=162
2. Tianjin Q19, N=136; Q20, N=120
3. Hebei Q19, N=115; Q20, N=102
4. Shanxi Q19, N=115; Q20, N=103
5. Inner Mongolia Q19, N=114; Q20, N=104
6.
Liaoning (except Shenyang) Q19, N=114; Q20, N=102
7. Jilin Q19, N=114; Q20, N=102
8. Heilongjiang Q19, N=112; Q20, N=100
9. Shenyang Q19, N=119; Q20, N=105
10. Shanghai Q19, N=169; Q20, N=157
11. Jiangsu Q19, N=118; Q20, N=108
12. Fujian Q19, N=120; Q20, N=107
13. Zhejiang Q19, N=113; Q20, N=103
14. Shandong Q19, N=120; Q20, N=107
15. Jiangxi Q19, N=113; Q20, N=101
16. Anhui Q19, N=113; Q20, N=101
17. Guangdong Q19, N=147; Q20, N=131
18. Guangxi Q19, N=116; Q20, N=101
19. Hainan Q19, N=112; Q20, N=100
20. Henan Q19, N=109; Q20, N=97
21. Hubei Q19, N=116; Q20, N=102
22. Hunan Q19, N=112; Q20, N=97
23. Sichuan Q19, N=119; Q20, N=104
24. Chongqing Q19, N=117; Q20, N=104
25. Guizhou & Yunnan Q19, N=115; Q20, N=102
26. Any Western province (Shaanxi, Gansu, Qinghai, Ningxia, Xinjiang, & Tibet) Q19, N=116; Q20, N=98

21. Some reforms will have to be made by different levels of government. Based on your company's experience, how difficult or easy do you believe it will be to make the reforms at each of the levels of government listed below? (Check one box for each item.)

22. How likely is your company to contact the following groups or individuals if your company encounters difficulties related to China's implementation of its WTO commitments? (Check one box for each item.)
… agencies or officials (Please specify.): N=165
4. Chinese consultants N=178
5. U.S. trade associations representing your company's …
6. U.S. Embassy or Consulate in …
7. U.S. Trade Representative N=178
8. U.S. Department of Agriculture …
9. U.S. Department of Commerce …
10. U.S. Department of State N=178
11. U.S. Congress N=175
12. Other (Please specify): N=23

23. Is your company providing or planning to provide assistance or training in the following areas to Chinese officials or businesses? (Check one box for each item.)
…
2. General training in China's WTO commitments N=178
3. Protection of Intellectual Property Rights (IPR) N=179
4.
Customs facilitation N=177
5. Revising laws & regulations to comply with WTO …
6. Enforcement of contracts & judgments N=180
7. Ways to increase transparency N=182
8. Quota administration N=179
9. Testing certification N=180
10. Distribution rights N=180
11. Other (Please specify.) N=26

24. If your company has provided or is planning to provide assistance or training to Chinese officials or businesses, please describe the training in the space below and/or attach additional sheets describing the training.

25. Is there anything else you would like to tell us regarding China's accession to the WTO, including any problems or concerns that China's WTO commitments do not address? (Please attach additional sheets if necessary.)

Thank you for your participation!

U.S. General Accounting Office
Structured Interview of U.S. Companies in China about China-WTO Issues

Q1) Currently, what are your company's operations in China? (Skip if company responded to survey.)
1. 2. 3. 4. 5. 6.
7. Wholly-Owned Foreign Enterprise (WOFE)
8. Limited Liability Company
9. Company Limited by Shares
10. Other (Please describe.)

The effects of China's WTO membership

Q2) To what extent, if any, has China's official membership in the WTO changed your company's goals and expectations for future business opportunities in China?
1. 2. 3. 4.
5. No basis to judge
Please describe:

Q3) From your company's perspective, what are China's most important WTO commitments? (Prompt from list.) Why?

Q4) From your company's perspective, are there particular WTO commitments that you expect to be relatively difficult to implement? (Prompt from list.) Why?

Q5) What impact do you expect that China's WTO commitments will have on your business, if any?
1. 2. 3. 4. 5.
6. Don't know/No basis to judge

Q6) How soon do you expect you will begin to experience an overall business impact on your company, if any, from China's WTO commitments?
1. 2.
3. From 1 to 2 years
4. From 3 to 4 years
5. From 5 to 6 years
6. From 7 to 10 years
7. 8.
Never, no effects expected
9. Don't know/No basis to judge

Q7) To what extent, if any, is your company's business strategy in China dependent on China's compliance with its WTO commitments?
1. 2. 3. 4. 5.

WTO implementation and compliance issues

Q8) From your company's perspective, in what areas have the Chinese successfully complied with their WTO commitments?

Q9) From your company's perspective, in what areas have the Chinese not complied with their WTO commitments?

Q10) Is your company involved in efforts to assist the Chinese in implementation of their WTO commitments (Prompt: formal and informal training by your company, U.S. or other governments, associations, non-profit organizations, etc.)?
1. 2.
If yes, please describe:

Q11) How, if at all, does your business track China's implementation of its WTO commitments (legislative and regulatory changes, practices, etc.)?
1. Day-to-day experience of doing business in China
2. 3.
4. Contacts with Chinese government officials
5. Contacts with U.S. government officials
6.

GAO's future compliance surveys

Q12) Do you feel that most U.S. businesses in China have a sufficient understanding of China's WTO commitments, as they relate to their own industries, to identify all potential compliance problems?

Q13) Who is the most knowledgeable respondent in your company for questions related to China's compliance with its WTO commitments?
1. Self or other corporate representative in China
2. Company's government affairs representative
3.
4. Respondent varies depending on question
5.
6. Other (please describe)

Q14) From your company's perspective, how concerned are you that reporting compliance problems with WTO commitments to the U.S. government might result in retaliatory action by Chinese government entities against your company?

Q15) Are there any tactics you would recommend to GAO to ensure that we receive candid responses from businesses in future surveys and interviews?
Q16) Do you have any suggestions to GAO for monitoring China's compliance with its WTO commitments?

Q17) For future compliance surveys, what is the most convenient format for you and others in your company?

Q18) Is there anything else you would like to tell us regarding China's accession to the WTO, including any problems or concerns that China's WTO commitments do not address?

Thank you very much for your cooperation and assistance.

Categorization of commitment areas for questions 3 & 4:

Tariff & nontariff trade restrictions (increased market access)
1. Tariffs, fees, & charges
3. Other quantitative import restrictions (licensing & tendering requirements)
4. Standards, certification, registration, & testing requirements (product safety, animal, plant, & health standards, etc.)
5. Customs procedures & inspection practices
7. Market access for services

Investment-related measures (liberalized foreign investment)
8. Government requirements stipulating minimum amount of production that must be exported
9. Foreign exchange restrictions (including balancing & repatriation of profits)
10. Technology transfer requirements
11. Local content requirements
12. Scope of business restrictions (types you can provide, customers you can do business with, number of transactions you can conduct, & where you can conduct business geographically)
13. Restrictions on partnerships & joint ventures (choice of partner & equity limits)
14. Establishment & employment requirements (capital, deposit, years in practice, threshold sales, forced investment, & nationality/residency requirements)
15. Trading rights (ability to import & export)
16. Number of products subject to state/designated trading
18. Subsidies for Chinese firms
20. Operation of state-owned enterprises
21. Price controls including dual and discriminatory pricing
22. Equal treatment in taxation
23. Equal treatment for access to funding (loans & equity issues), supplies, & human resources
24.
Consistent application of laws, regulations, & practices (within & among national, provincial & local levels)
25. Transparency of laws, regulations, & practices (publishing and making publicly available)
26. Enforcement of contracts & judgments/Settlement of disputes in Chinese court system
27. Independence of judicial bodies
28. Equal treatment between Chinese & foreign entities under Chinese laws, regulations, & practices
29. Intellectual Property Rights
30. China's application of safeguards against U.S. exports (antidumping duties and other legal actions against surges in imports)

This appendix summarizes key items of interest from the survey and structured interview results in order to present a profile of the companies providing the data discussed in this report. We asked survey and structured interview respondents a series of questions to describe their business operations in China. Detailed results are also included in the reprinted survey and structured interview guide found in appendixes II and III.

The 191 respondents to our mail survey included companies from a wide range of industries, locations, and types of operations in China. For example, respondents included companies with business activities in all of the agriculture, manufacturing, and services categories listed in our survey. About 69 percent of respondents identified manufacturing as their primary business activity with China, while 25 percent identified services as their primary business activity with China. Only eight respondents (about 6 percent) identified agriculture as their primary business activity with China. Almost 60 percent of respondents were relatively specialized and identified only one category to describe their business activities with China.
Conversely, about 20 percent of respondents selected two categories to describe their business activities in China, and another 18 percent of respondents selected three or more categories to describe their company’s activities there. Table 9 displays the number of respondents included in individual categories to describe their business activities with China. Respondents reported that they carry out these business activities in facilities and offices across all of China. Beijing, Shanghai, and Guangdong were the most frequent responses to the question of where companies had a facility or other presence among all of the Chinese locations listed in our survey (listed in order of frequency of responses). In fact, only a few respondents (less than five) did not have a facility or other presence in Beijing, Guangdong, or Shanghai. About 46 percent of respondents had a facility or other presence in one or more of these three locations, while another 45 percent were located in Beijing, Shanghai, and/or Guangdong plus one or more other locations. Figure 5 shows the number of companies that reported having a facility or other presence in each location in China listed in our survey. Survey and structured interview respondents reported that they engage in a range of business relationships in their many locations throughout China. More than 50 percent of the respondents had one type of business relationship, about 25 percent had two types of business relationships, and more than 10 percent of the respondents reported three or more types of business relationships there. Representative offices, joint ventures, wholly owned foreign enterprises, and agents/distributors were the most frequently reported types of business relationships, respectively. Table 10 provides a description of each type of business relationship. Figure 6 displays the number of survey and structured interview respondents that reported each type of business relationship. 
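Tallies like those above (how many relationship types each respondent checked, and how often each type was reported) come from a simple tabulation of multi-select survey responses. The sketch below is illustrative only: the relationship-type names and the five hypothetical respondents are invented for the example and are not GAO's actual survey data.

```python
from collections import Counter

# Hypothetical multi-select responses: each respondent checked one or
# more business-relationship types (illustrative data, not GAO's).
responses = [
    {"representative office"},
    {"joint venture", "WOFE"},
    {"agent/distributor"},
    {"representative office", "joint venture", "WOFE"},
    {"WOFE"},
]

# How many relationship types did each respondent select?
selections_per_respondent = Counter(len(r) for r in responses)

# How often was each relationship type reported overall?
type_frequency = Counter(t for r in responses for t in r)
```

With this hypothetical data, `selections_per_respondent` maps number-of-types-selected to respondent counts (three respondents chose one type, one chose two, one chose three), and `type_frequency.most_common()` ranks the relationship types by how many respondents reported them, which is the shape of the figures described in this appendix.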
More than 40 percent of the survey respondents reported that their companies both imported from and exported to China as part of these business relationships. More than one quarter of the respondents reported that they only exported to China. Almost one quarter of the respondents reported that they neither import from nor export to China. Companies that neither import from nor export to China include businesses operating representative offices and service providers, among others. Survey respondents also varied with respect to the number of employees in the United States, the number of employees in China, and the length of time that their companies had engaged in business relationships with China. Large companies, those with 500 or more employees in the United States, accounted for about 60 percent of the respondents. In contrast, only 20 percent of the respondents reported having more than 500 employees (including joint venture employees) in China. Respondents to our survey had a strong base of experience to draw on as they answered our questionnaire. Respondents reported business relationships with China ranging from less than 2 years to more than 20 years, with 6 to 9 years and 10 to 20 years as the most frequent responses. Most of the respondents had engaged in a business relationship with China for more than 5 years. In fact, more than half of large company respondents had been in China more than 10 years. Smaller companies that responded to our survey had generally maintained a presence in China for fewer years than large companies. Figure 7 shows the length of time that survey respondents had engaged in a business relationship with China. Companies with a longer history in China and agriculture/manufacturing firms reported that they were already profitable more frequently than companies with a shorter history in China or firms whose primary business activity in China focused on services. 
Specifically, for companies engaged in a business relationship with China for 6 to more than 20 years, about 72 percent of them reported that they were already profitable. Among companies engaged in a business relationship with China for 5 years or less, only about 43 percent of them reported that they were already profitable. Overall, about 66 percent of respondents engaged in agriculture/manufacturing reported that they were already profitable compared to less than 50 percent of the respondents only engaged in services activities. U.S. investment and trade with China have grown significantly over the past decade. U.S. companies have increased their presence in China through manufacturing and service operations, and both exports and imports of goods and services have risen dramatically, including those of small- and medium-sized companies. In addition, many U.S. companies have integrated operations in which trade occurs with their affiliates in China. Total U.S. direct investment in China reached nearly $10 billion in 2000, according to the U.S. Department of Commerce (on a historical-cost basis), making the United States the second largest source of foreign direct investment in China. This amount represents nearly 27 times the amount of U.S. foreign direct investment in China in 1990. U.S. investment in China followed a different pattern from overall U.S. investment worldwide. Figures 8 and 9 show the percentages of foreign investment worldwide and U.S. foreign direct investment by industry sector in China in 2000. The largest portion of worldwide U.S. direct investment abroad went to finance, insurance, and real estate, which accounted for 40 percent of total U.S. investment abroad. However, in China, finance, insurance, and real estate accounted for less than 10 percent of U.S. investment in 2000. In China, the largest portion of U.S. foreign direct investment went to manufacturing, which accounted for almost 60 percent of investment. 
Globally, manufacturing accounted for less than 30 percent of U.S. investment in 2000. This difference reflects China’s foreign direct investment policy, which aimed at guiding direct investment into targeted industries in accordance with China’s economic and industrial development strategy. The targeted economic sectors include infrastructure (such as roads) and high-technology industries. U.S. exports of goods and services have also grown significantly over the past decade. In terms of trade in goods, U.S. exports totaled almost $18 billion in 2001, making China the ninth largest market for U.S. goods. The United States was also the top export destination for China in 2001, as $102 billion in goods from China were imported here. As a result of this difference between exports and imports, the United States has had a trade deficit in goods with China since 1983. In terms of exports, U.S. small- and medium-sized companies accounted for a growing share of companies that export to China. In 1999, about 83 percent of companies that exported to China were small- and medium-sized firms compared to about 77 percent in 1992, according to the U.S. Department of Commerce. In addition, the value of these exports to China rose 85 percent from 1992 to 1999. Nonetheless, large firms still accounted for more than 70 percent of total U.S. exports to China in 1999. U.S. exports to China include products such as transport equipment, electrical machinery, office machines, oilseeds, and fruits. Figure 10 shows the distribution of U.S. exports to China in 2001 by broad industrial groupings. This distribution of U.S. exports to China is very similar to the distribution of U.S. exports to the world overall. For example, machinery, electronics, and high-tech apparatus; auto vehicles, other vehicles, and parts; and chemicals, plastics, and minerals were the three major exporting sectors, both worldwide and to China in 2001. U.S. 
private services exports to China have also grown over the past decade, rising from $1.6 billion in 1992 to $4.6 billion in 2000, according to the U.S. Department of Commerce. The United States maintains a services trade surplus with China, importing about $2.8 billion in services from China in 2000. China is currently a relatively small market for U.S. services exports, making up less than 2 percent of total U.S. services exports in 2000. For some multinational companies, investment and trade with China are integrated. Companies may establish a presence in China through foreign direct investment in order to supply goods and services to the Chinese market or to produce products in China for export. In 1999, the most recent year available for such data, U.S. businesses exported about $3 billion in goods to their affiliates in China, according to the U.S. Department of Commerce. This accounted for nearly one quarter of total U.S. exports to China in that year. In addition, U.S. parent companies sold nearly $500 million in services to their affiliated companies in China, accounting for about 10 percent of total U.S. cross-border sales of services to China in 2000. In order to supply services abroad, companies can either provide the service from the United States (cross-border trade) or establish affiliates in foreign countries to supply the foreign markets directly. In the case of China, U.S. businesses provided about $1.7 billion in services through local affiliates in 1999. This is more than double the amount provided in 1998 ($800 million) and has grown annually since 1993, the first year for which these figures were available. We identified and reviewed similar surveys that other organizations had conducted of U.S. companies in China to help us develop our own questionnaire and to increase our understanding of issues of interest to companies doing business in China. These surveys, administered between 1998 and 2001, were sponsored by U.S. 
government agencies, professional associations, and consulting firms and had response rates ranging from just under 5 percent to just under 30 percent. Table 12 summarizes the surveys we reviewed in terms of their sponsors, populations, response rates, key survey topics, WTO-related findings, and other relevant factors. These surveys cannot be directly compared to our survey, because they were administered at different points in time to different populations and asked different questions. All of the surveys discussed here addressed issues that overlapped with the items considered in our survey. However, some of them had very different purposes than our survey. In addition, all of the other surveys had low response rates, which raises major questions about generalizing their results to the full populations from which the samples were drawn. Nevertheless, we found that, at a very broad level -- such as basic expectations about WTO -- there were some similarities between the responses to the other surveys and the responses to our survey. Furthermore, the results of these surveys can sometimes provide further insights into the results that we obtained. For example, we asked a question about profitability, as did several of the other surveys. Our question asked how soon firms expected to be profitable. Other surveys asked about the profitability of investments and improvements in operating margins. While all of these questions probed the same issue, the way in which the questions were asked and the response options all differed; therefore, reporting on the other surveys’ results allows for some additional insights and perspectives. The sections that follow describe some of the main findings from these surveys in order to provide some context for our survey. Respondents to the other surveys were generally positive about the effects of China’s WTO entry. 
For example, more than three quarters of the respondents to the 2001 American Chamber of Commerce annual membership survey expected China’s WTO entry to have a positive impact on their companies. Respondents viewed the major positive impacts as increased transparency (85 percent), increased business scope (81 percent), and increased investment options (66 percent). Almost 90 percent of the China-based respondents to the 2001 Deloitte-Touche Tomatsu survey reported that the importance of the Chinese market will likely increase in the 3 years after China’s WTO entry. Almost two-thirds of these respondents expected their companies to expand existing product or enterprise lines in China. There are many difficulties with doing business in China, according to respondents to the other surveys. For example, almost two-thirds of the respondents to the 1998 U.S. Embassy Survey reported that problems with transparency were worse or much worse than they had expected. More than half of the respondents to this survey also reported that the cost of doing business, the problems with customs procedures, and the risks encountered in dealing with foreign exchange rates were worse than they had expected. Almost half of the respondents reported that protectionism, the enforcement of regulations, and intellectual property rights (IPR) protection were worse than expected. Similarly, at least two-thirds of the respondents to the 2001 American Chamber of Commerce annual membership survey indicated that transparency, bureaucracy, weak enforcement of laws, business scope restrictions, and protectionism negatively affected their companies. The other surveys also listed a number of potential compliance problems. More than two-thirds of the respondents to the May 2000 Department of Commerce survey foresaw difficulties with China’s ability to develop a WTO-compliant legal framework and enforce the obligations consistently throughout the country. 
About half of these respondents expected difficulties with or due to corruption, local/provincial implementation, IPR, transparency in practices, and/or transparency in regulations. More than three quarters of the respondents to the 2001 American Chamber of Commerce annual membership survey were concerned or very concerned that China’s WTO agreement will be ignored, that new regulations will be enacted to counter WTO commitments, and that there will be increased protectionism by the Chinese government. China’s implementation of its WTO commitments was a concern for most respondents (about 90 percent) to the 2001 Deloitte-Touche Tomatsu survey. The 2000 Department of Commerce survey asked a question on concerns about potential retaliation for reporting compliance problems to the U.S. government. About half of the survey respondents reported that they feared retaliation if they reported WTO compliance problems to U.S. government officials. Questions about profitability generally yielded results similar to those obtained in response to our survey. Almost 60 percent of the respondents to the 1998 U.S. Embassy Survey reported that they were already profitable. Slightly more than 50 percent indicated that they had attained a return on their investment. About 50 percent of the respondents to the 2001 American Chamber of Commerce annual membership survey reported that their operating margins had improved either “substantially” or “somewhat” from 2000 to 2001, while about 25 percent of respondents reported that their operating margins had deteriorated either “substantially” or “somewhat” over the same time period. About 50 percent of the respondents to the 2001 Deloitte-Touche Tomatsu survey expected their investments to be profitable in less than 3 years, while slightly less than 33 percent expected investments to be profitable in about 3 to 4 years. Two of the other surveys asked questions about respondents’ familiarity with China’s WTO commitments. 
Almost 66 percent of respondents to the 2000 Department of Commerce survey reported that at the time they did not understand the WTO obligations for their sector. Only about 10 percent of respondents to the 2001 Deloitte-Touche Tomatsu survey that were located in China reported that they were familiar with China’s commitments to a “great extent.” Sixty-four percent reported that they were familiar to a certain extent, while 25 percent reported that they had very little familiarity. The area in which respondents already in China most wanted to increase their familiarity was in taxation and customs commitments (74 percent), followed by marketing and distribution (59 percent), IPR and financial services (38 percent), technology transfer (33 percent), labor and benefits (31 percent), and venture capital investment (28 percent). In addition to those named above, Carolyn Black-Bagdoyan, Ming Chen, Martin De Alteriis, Matthew Helm, Simin Ho, Stanley Kostyla, Janeyu Li, Rona Mendelsohn, Suen-Yi Meng, Beverly Ross, Richard Seldin, and Timothy Wedding made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. 
The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
China's entry into the World Trade Organization (WTO) on December 11, 2001, brought the world's seventh largest economy under global trade liberalizing rules. If implemented, China's commitments will open China's economy and reform its trading activities, thereby expanding U.S. companies' opportunities for investing in China and for exporting goods, agricultural products, and services to China. Understanding U.S. companies' expectations is fundamental for policymakers to judge the degree to which the benefits of China's WTO membership are being realized. GAO analyzed U.S. companies' views about (1) the importance of, (2) the anticipated effects of, and (3) prospects for China implementing its WTO commitments. GAO surveyed a random sample of 551 U.S. companies and interviewed 48 judgmentally selected companies in four cities in China. Survey results reflect responses from 191 companies--a response rate of 38 percent--and may not reflect the views of all U.S. companies with activities in China. U.S. companies that responded to the GAO survey reported that most of China's WTO commitments are important to them. These companies, which already have a presence in China, identified rule of law--related reforms as more important than other reforms to increase market access, to liberalize foreign investment measures, and to make fundamental changes to continue China's transition to a market economy. Specifically, WTO commitments in the areas of intellectual property rights; consistent application of laws, regulations, and practices; and transparency of laws, regulations, and practices emerged as the most important areas in which China made commitments. 
Most companies responding to GAO's survey expected that China's WTO commitments would have a positive impact on their business operations, that the impact has already begun or would begin within 2 years, and that it would lead to an increase in their volume of exports to China, market share in China, and distribution of products there. However, some company representatives whom GAO interviewed in China believed that China's implementation would be incremental. Survey respondents expected that most of the WTO--related commitment areas listed in GAO's survey would be difficult for Chinese officials to implement. Companies expected the important rule of law--related commitment areas to be the most difficult commitments to carry out and had mixed expectations about implementation for different government levels and geographic areas across China. Besides rule of law--related reforms, company representatives described how they expect that China's need to protect its domestic interests and China's culture with regard to business relationships might create impediments to implementation.
According to agency officials and guidance posted on State’s public website, applicants can apply for a U.S. passport in one of three ways: in person at an acceptance facility, by mail (for renewal applications), or at a passport facility that offers acceptance services (typically expedited applications). Applicants submit documents, such as a birth certificate or driver’s license, to passport acceptance agents to provide evidence of citizenship, or noncitizen nationality, and proof of identity. The acceptance agents are to watch the applicant sign the application, review submitted documents for completeness, and check for application inconsistencies. For example, acceptance agents are to assess whether photographs and descriptions in the identification documents match the applicant. If an acceptance agent suspects that an applicant has submitted fraudulent information or exhibits nervous behavior, the acceptance agent is instructed to accept the application and complete a checklist indicating the reason for suspected fraud. The agents are to then send the application, checklist, and photocopy of the identification to State’s Fraud Prevention Manager (FPM). Acceptance agents are not State employees; however, State provides training, as well as detailed guidance that governs their work. State also conducts periodic inspections and audits of acceptance facilities to ensure compliance with regulations and policies. According to State officials, the most common way to renew a passport is by mail. An individual with a passport issued during the previous 15 years may renew it by submitting a mail-in application, along with the previously issued passport, a recent photograph, and documentation of a name change, if applicable. Applications submitted by mail or at an acceptance facility are sent to a Department of the Treasury contracted lockbox service provider for data entry and payment processing. 
The lockbox service provider converts handwritten or typed text into electronic data and deposits passport fees paid by the applicant. Once the lockbox data entry and payment are complete, the electronic data and paper passport application are sent to passport-issuing facilities around the United States for adjudication. Applicants who demonstrate a need for in-person expedited service for either a first-time issuance or a renewal may submit their applications directly to a passport-issuing facility. State employees at these facilities accept passport fees and enter application data directly into State’s electronic processing system, called the Travel Document Issuance System (TDIS), before forwarding the application for expedited adjudication. Figure 2 provides an overview of the passport application and adjudication process for applications received in person at an acceptance facility, by mail, or at a passport facility that offers acceptance services (see app. 2 for static version of this figure). As we have noted in previous products, each passport application is to be individually reviewed by a passport specialist during a process known as adjudication. State’s FAM specifies the steps passport specialists must take to address various fraud indicators. According to State documents, specialists are responsible for reviewing applications and documents establishing the applicants’ identity and citizenship, as well as conducting various checks, as described below. Depending on the results of the adjudication, passport specialists may approve or deny the passport issuance, conduct additional checks, request more information from the applicant, or forward the application for additional review by their supervisor, the FPM, or by offices in Passport Headquarters. Once a passport has been issued, the application is scanned and archived. Passports issued to individuals 16 years or older are generally valid for 10 years. 
Several federal statutes and regulations either require or permit State to withhold a passport from an applicant in certain situations. For example, State must withhold passports from individuals who are in default on certain U.S. loans, who are in arrears of child support in an amount determined by statute, or who are imprisoned, on parole, or on supervised release as a result of certain types of violations of the Controlled Substances Act, Bank Secrecy Act, and some state-level drug laws. Likewise, State may choose to refuse a passport to applicants who are the subject of an outstanding local, state, or federal warrant of arrest for a felony, or the subject of probation conditions or criminal court orders that forbid the applicant from leaving the country and the violation of which could result in the issuance of a federal arrest warrant. During the adjudication process, passport specialists are to review applications and results of checks against various databases to detect fraud and suspicious activity, and for other purposes. Application data are entered into TDIS, State’s electronic processing system. TDIS automatically checks applicants’ names against a number of sources, including SSA’s death records and a database of warrants. For example, TDIS automatically checks key identifying information of all passport applicants against SSA’s full death file, as well as a database of felony warrants for certain crimes. Passport specialists are to compare the application to the information in TDIS to make sure it was entered properly and to identify missing information. Passport specialists are also to review the results of automatic checks during a process State refers to as “the front-end” process of adjudication. For instance, during this process, passport specialists are to determine whether an applicant currently holds a passport, has a history of lost or stolen passports, or has already submitted a passport application. 
According to State officials, such checks are intended to facilitate the identification of suspicious activity and prevent multiple passport issuances to the same person. Passport specialists also are to consider the results from facial recognition technology, which is used to help prevent the issuance of passports to individuals using false identities and people who should be denied passports for other legal reasons, such as terrorists in the Federal Bureau of Investigation’s (FBI) terrorist database. In addition, passport specialists may employ commercial databases and other tools during the adjudication process to assist in confirming an applicant’s identity or citizenship. In April 2007, State and SSA signed an information-exchange agreement that allows State to query SSA’s records for verifying applicants’ identities and identifying deceased individuals. In accordance with this agreement, State’s TDIS automatically queries SSA’s Enumeration Verification System (EVS) to verify that a passport applicant’s SSN, name, and date of birth match the records at SSA. EVS includes a death indicator based on SSA’s full death file of approximately 98 million records, which aids State in identifying applicants using the identity of a deceased individual to apply for a passport. In most cases, State’s controls will not flag an applicant as deceased unless certain fields, such as the SSN, name, and date of birth, all match the identifying information of a deceased individual. According to State’s procedures, passport specialists must refer any applications with a positive death indicator to State’s FPM for additional review, since the match may indicate a case of stolen identity. The FPM reviews all applications referred to it by passport specialists to determine whether the identifying information on the passport application is in fact associated with a deceased individual.
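The logic of the death-indicator check described above can be illustrated with a minimal sketch. The record layout, field names, and the requirement that all three fields match are simplified assumptions for illustration; they do not represent State's or SSA's actual systems.

```python
# Illustrative sketch of a death-indicator check: an applicant is referred
# for fraud review only when SSN, name, and date of birth ALL match a
# deceased individual's record. Record structures are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    ssn: str
    name: str
    dob: str  # ISO date string, e.g. "1950-04-12"

def death_indicator(applicant: Record, death_file: dict) -> bool:
    """Return True (refer to Fraud Prevention Manager) only when the SSN,
    name, and date of birth all match a death-file record."""
    deceased = death_file.get(applicant.ssn)
    if deceased is None:
        return False
    return (applicant.name.lower() == deceased.name.lower()
            and applicant.dob == deceased.dob)

death_file = {"123-45-6789": Record("123-45-6789", "Jane Doe", "1950-04-12")}
# A full match on all three fields sets the indicator.
assert death_indicator(Record("123-45-6789", "Jane Doe", "1950-04-12"), death_file)
# A matching SSN with a different name does not, mirroring the report's point
# that most checks require all key fields to match.
assert not death_indicator(Record("123-45-6789", "John Roe", "1950-04-12"), death_file)
```

This all-fields-must-match behavior also explains the report's later finding: applications where only the SSN matched a deceased person were not flagged as deceased, and were analyzed separately as likely data-entry errors.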
The FPM can approve the passport application once it has reviewed and resolved any indicators of potential fraud. See appendix III for additional details on State’s use of SSA’s records for death checks. In fiscal years 2009 and 2010, the years of passport issuances we reviewed, State did not have access to federal and state prisoner databases in order to check whether applicants’ identities matched those of incarcerated individuals. Since then, State has taken steps to explore access to such databases. For example, in June 2013, State entered into a data-sharing agreement with the BOP in order to access federal prisoner data. In addition, officials told us that in December 2013, State completed the first phase of a pilot project using prisoner data from two states, Florida and Rhode Island, to identify whether applicants are fraudulently using identities of state prisoners. We provide additional details on State’s initiatives to improve data checks for incarcerated individuals in a subsequent section. In 2002, the Marshals Service began transmitting certain warrant data to State for use during the passport adjudication process. Since then, the information State receives has changed to include additional warrants from the FBI, as described in detail below. To help State determine whether an applicant may have an active warrant for a felony charge, TDIS automatically checks applicants’ identifying information in State’s Consular Lookout and Support System (CLASS), a database that maintains warrant data. TDIS indicates a possible match if certain data elements from the passport application, such as the name, SSN, date of birth, place of birth, or gender, match information in CLASS within certain parameters. State’s policies require that passport specialists refer likely matches in CLASS to State’s passport legal office.
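A multi-field "possible match" test of the kind described for CLASS can be sketched as follows. The specific fields compared and the match threshold are assumptions made for illustration; State's actual matching parameters are not public.

```python
# Illustrative sketch of a multi-field "possible match" test: flag a warrant
# record as a possible match when enough identifying fields agree with the
# application. Field names and the threshold of 3 are assumptions.

def possible_match(application: dict, warrant: dict, threshold: int = 3) -> bool:
    """Return True when at least `threshold` identifying fields agree,
    ignoring fields that are missing from the application."""
    fields = ("name", "ssn", "dob", "place_of_birth", "gender")
    hits = sum(
        1 for f in fields
        if application.get(f) and application.get(f) == warrant.get(f)
    )
    return hits >= threshold

app = {"name": "JOHN SMITH", "ssn": "123-45-6789", "dob": "1970-01-01",
       "place_of_birth": "TX", "gender": "M"}
warrant = {"name": "JOHN SMITH", "ssn": "123-45-6789", "dob": "1970-01-01",
           "place_of_birth": "CA", "gender": "M"}
# Four of five fields agree, so this record would be referred for review.
assert possible_match(app, warrant)
```

A threshold-based test like this trades false positives for coverage, which is consistent with the report's account of specialists referring "likely matches" to the passport legal office for human confirmation rather than acting on them automatically.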
Officials said paralegals in the passport legal office are to review the information and contact the warrant issuer to confirm the identity of the subject in the warrant against the passport applicant, verify that the warrant is active and related to a felony charge, and further coordinate, as necessary. The passport legal office may also use commercial databases, or photographs obtained from the warrant issuer, to confirm applicants’ identities. In technical comments, State officials clarified that the passport legal office is authorized to deny the passport issuance when it determines, or is informed by a competent authority, that the applicant is the subject of an outstanding federal, state, or local warrant of arrest for a felony crime. In addition, the passport legal office can authorize the passport issuance if it determines, upon additional review, that there was not in fact a legitimate match in CLASS. The legal office may approve an issuance in cases where the warrant was closed, associated with a misdemeanor charge, or for other reasons, such as a request by law enforcement agencies. Of the combined total of approximately 28 million passport issuances we reviewed from fiscal years 2009 and 2010, we found instances of issuances to individuals who applied for passports using identifying information of deceased or incarcerated individuals, as well as applicants with active felony warrants. The total number of cases we identified represented a small percentage of all issuances during the two fiscal years, indicating that fraudulent or high-risk issuances were not pervasive. We also determined that State’s data contained inaccurate SSN information for thousands of passport recipients. Most of the instances in which there was inaccurate SSN information appeared to be applicant or State data-entry errors, rather than fraud. 
Since fiscal years 2009 and 2010, State has taken steps to improve its detection of passport applicants using the identifying information of deceased or incarcerated individuals. In addition, State modified its process for identifying applicants with active warrants, and has expanded measures to verify SSNs in real time. Out of a combined total of approximately 28 million passport issuances we reviewed from fiscal years 2009 and 2010, we identified 181 passports issued to individuals whose name and SSN both appeared in SSA’s full death file, suggesting that the applicant may have inappropriately used the identity of a deceased person. To ensure that our matches did not contain legitimate applicants who died shortly after submitting their applications, we included only individuals who had died more than 120 days before the passport issuance. Figure 3 summarizes our matching analysis and sample results. It is not possible to determine from data matching alone whether the passport issuance was appropriate or fraudulent without reviewing the facts and circumstances for each individual case from the 181 passport issuances. Thus, we randomly selected a nongeneralizable sample of 15 cases for additional analysis. For each case, we attempted to verify death information from SSA’s full death file by obtaining a copy of the death certificate and confirming that SSA’s most-current records listed the individual as deceased. We also requested TECS travel data from FinCEN and reviewed open-source information to search for additional fraud indicators. The following information provides additional details on the 15 cases. In one case, the applicant applied for and received an expedited passport by mail in January 2009 using the SSN, name, and date of birth of a deceased individual. The SSA’s full death file and the death certificate indicated that the purported applicant had died in May 2008.
According to TECS travel data, the passport was used in June 2009 to fly to the United States from Mexico and had not been used again as of June 2013. As a result of information we provided, State reviewed this case in 2013 and determined that the applicant appeared to be an imposter. State officials noted that the application should have been referred to the FPM during adjudication, because it contained multiple fraud indicators. State officials said this case should be referred to the Bureau of Diplomatic Security (DS) for further investigation. In another case, the applicant’s passport issuance was delayed by more than a year because her name mistakenly appeared in SSA’s full death file. In our May 2013 testimony, we found that SSA’s data contained a small number of inaccurate records, and SSA has stated that, in rare instances, it is possible for the records of a person who is not deceased to be included erroneously in the death file. Cases where a living individual is inappropriately listed as deceased in SSA’s records can create a hardship for the person who has been falsely identified as deceased. This case highlights one of the challenges State encounters when querying SSA’s full death file, and illustrates why State reviews applicants with death indicators on a case-by-case basis. In 4 of the 15 cases, the applicant used a similar name to, as well as the same SSN as, a deceased individual. For each of the four cases, we verified the death information in SSA’s full death file by obtaining a copy of the deceased person’s death certificate. However, State officials said fraud could likely be ruled out in all four cases for various reasons, such as the inadvertent use of an incorrect SSN.
In 9 of the 15 cases, we could not verify the death of the applicants because we were unable to identify the state in which the individual’s death was recorded (possibly because the applicant was not deceased) or because state officials would not or could not provide the death certificate to us. State’s subsequent review of these cases indicated that fraud could likely be ruled out in four cases, and that five of the cases should be referred to DS for further investigation. As of May 2014, we have referred all 181 passport issuances we identified from our matching analysis using SSA’s full death file, including the 15 cases we examined in more detail, to State for further review and investigation. Out of the combined total of approximately 28 million passport issuances we reviewed from fiscal years 2009 and 2010, we identified 68 issuances to individuals who used an SSN, name, and in some cases, date of birth of a state prisoner on their passport application. Without reviewing the facts and circumstances for each case, it is not possible on the basis of data matching alone to determine the extent to which these instances represent fraudulent issuances. Thus, from the group of individuals related to the 68 issuances, we selected 14 cases for further review. For each sample case, we obtained additional documentation from state departments of corrections to verify key data fields for these passport recipients. Figure 4 summarizes our matching analysis and sample results. From our nongeneralizable sample of 14 cases, we identified seven passport applicants who may have fraudulently used the identities of state prisoners, since the incarcerated individuals could not have physically appeared at a passport facility to submit their applications.
The seven remaining cases in our state prisoner sample of 14 individuals were either not incarcerated at the time of application submission, applied for passports using mail-in applications, or represented possible identity theft by the prisoner prior to incarceration. Federal regulations do not prohibit State from issuing passports to prisoners; however, according to officials, State’s policy is to deny passport issuances to individuals who are incarcerated at the time of application submission. We could not conclusively determine that all our sample cases or matches represented passport fraud because, for instance, it is possible that the state prisoner may have stolen the identity of the applicant prior to incarceration. For example, we identified two cases involving data from the same prison facility in which the prisoner had an alias name, in addition to an SSN and date of birth, that matched the information of the passport applicant. We provided information on all our matches, including the 14 state prisoner cases in our sample, to State for review. According to officials, State’s review of these cases included, but was not limited to, an assessment of fraud indicators in the passport applications, and review of the applicants’ information in commercial and internal databases. State determined that fraud could likely be ruled out in eight cases. Officials initially said they should refer the remaining six cases to DS for further investigation. In their technical comments on a draft of this report, State officials said they conducted a second review of the six remaining cases and determined that two individuals used their true identities on their passport applications, and they ultimately referred four cases to DS for investigation. Of the four cases in our state prisoner sample that State officials referred to DS, we identified three instances where the passport was used to cross an international border during the prisoners’ periods of incarceration.
These cases highlighted the active use of passports obtained by potentially fraudulent means. We verified this travel activity by comparing the names, dates of birth, and passport numbers in State’s passport data for these cases with TECS travel data provided by FinCEN. The TECS travel log for the three cases showed that the individuals used the passports for international travel at least once during the prisoners’ periods of incarceration. In one case, an individual used the passport obtained by potentially fraudulent means to cross the U.S.-Mexico border more than 300 times. In addition to our analysis of state prisoners, we also identified 206 passport issuances to individuals who used an SSN, name, and date of birth in their applications that matched identifying information in the BOP’s federal prisoner data. However, the data we received included individuals residing in halfway houses. Unless otherwise stated in the conditions of release for parole, passport issuances to individuals living in halfway houses are legally permissible. Since we focused our in-depth analysis on a nongeneralizable sample of 15 cases, we did not determine the extent to which the 206 cases represented individuals in federal prison facilities as opposed to halfway houses. Figure 5 summarizes our matching analysis and sample of 15 cases. From our sample of 15 cases, we did not identify any individuals who applied for a passport using the identity of a federal prisoner in their passport application. We determined that at least 9 of the 15 individuals were living in BOP halfway houses when the passport application was submitted. Moreover, we did not find any indications of identity theft. The other six individuals were either not in a halfway house at the time of application submission, or we were unable to determine, on the basis of the information provided, their location after they were released from a federal prison facility. 
However, the documentation for these six individuals indicated that they were not incarcerated when the application was submitted. In fiscal years 2009 and 2010, officials said State did not have access to federal and state prisoner databases in order to check whether applicants’ identities matched those of incarcerated individuals. In June 2013, State entered into a data-sharing agreement with the BOP that will allow it to access federal prisoner data, including information about individuals incarcerated in federal facilities or halfway houses. In addition, State obtained data-sharing agreements with two individual state departments of corrections, Florida and Rhode Island, as part of a pilot project to identify whether applicants fraudulently used the identities of state prisoners. State officials said these states represent different geographic regions as well as both large and small inmate populations, and both had the technical capabilities to transfer data efficiently and securely to State for adjudication purposes. Officials said in their technical comments that State completed the first phase of the pilot project in December 2013. This phase included the development of search criteria for detecting the fraudulent use of prisoners’ identities. According to officials, State referred three potential fraud cases to DS for further investigation as a result of this effort. Officials also reported in their technical comments that State plans to acquire prisoner data from other states, and that it is developing best practices for obtaining such data. In addition, officials noted that State is in the early stages of planning a second phase of the pilot project. State officials highlighted various challenges with respect to using prisoner data during adjudication, including technical requirements and issues related to data transmission, as well as potential legal limitations.
For example, according to BOP officials, State and the BOP will have to develop a technical infrastructure to facilitate sharing of federal prisoner data, which officials expected to occur no later than the end of fiscal year 2014. Similarly, with respect to state prisoner data, State officials noted that data from state departments of corrections would need to be automatically transmitted to allow for the updating of information on a consistent basis. In technical comments, State officials clarified that they would prefer to receive data from individual state departments of corrections on a real-time basis; however, the frequency with which State receives these data is not a factor in determining whether State enters into a data-sharing agreement with a department of corrections. State officials also highlighted issues regarding the compatibility of systems from various states, as well as concerns about poor data that could lead to false matches and delays in processing passport applications. Moreover, State officials told us legal limitations may prevent the transfer of state-level inmate data; however, State did not report having such challenges working with Florida and Rhode Island during its pilot project. Out of a combined total of approximately 28 million passport issuances we reviewed from fiscal years 2009 and 2010, we identified 486 issuances to individuals using the SSN, name or alias, and date of birth of people with active warrants on their passport applications. We could not determine from matching analysis alone whether all warrants were associated with felonies, but our analysis excluded warrants with a description of either a “traffic crime” or “misdemeanor” in data provided by the Marshals Service (see app. IV for additional details). The type of warrant data State has received for detecting active felony warrants through CLASS has changed over time. In 2002, the Marshals Service began providing State with certain warrant information in CLASS. 
These included only Class 1 warrants, which is a designation for warrants the Marshals Service enters and maintains in the National Crime Information Center (NCIC) database, a criminal database that provides the warrant data to CLASS. According to State officials, the FBI began providing State with federal felony warrants in 2005 for use during the adjudication process. In late July 2009, officials said State began receiving state and local warrants from the FBI for crimes of varying degrees of severity, including misdemeanors, serious felonies, and nonserious felonies. According to State officials, the high volume of warrant cases was unmanageable and State had no authority to take action on misdemeanor warrants. Thus, in November 2010, State officials said they updated CLASS so that it included only state or local warrants connected to more-serious felonies they selected. Officials also said CLASS is updated daily with information provided by the Marshals Service and the FBI, and currently contains information for federal, state, and local felony warrants related to State’s selected felony charges. Figure 6 illustrates the evolution in State’s data checks for warrants. From the population of 486 issuances with active warrants that we identified through matching, we randomly selected a nongeneralizable sample of 15 individuals for additional analysis. Figure 7 summarizes our matching analysis and sample results. According to the Marshals Service, all 15 of the individuals in our nongeneralizable sample had warrants related to felony charges, 3 of which the Marshals Service was responsible for executing. Fugitives with felony warrants may pose a risk to public safety, and passports could help them evade capture by law enforcement agencies. State may choose to refuse a passport to applicants who are the subject of an outstanding felony warrant. In our analysis of the 15 cases, we took into account the evolution in State’s controls.
Figure 8 summarizes our review of the cases in our sample. Among the 13 applicants we reviewed in detail, we found five cases with warrants that State identified during the adjudication process. For instance, after detecting the warrant in one case, State’s passport legal office ultimately determined the passport applicant was a victim of identity theft and was not the subject of the warrant. In another case, State identified a warrant at the state or local level for an applicant who applied for a passport at a time when State’s controls had begun checking for such warrants. In this case, State’s passport legal office authorized the passport issuance to the individual after concluding the associated charge was for a misdemeanor crime, as opposed to a felony offense. We also identified five cases where the applicants had outstanding state or local felony warrants on the application date, and State’s CLASS did not have data for such warrants when the individuals applied. In the other 3 of the 13 sample cases we reviewed, we found no indications that State was aware of or alerted to the individuals’ warrants at the time they applied for passports, even though it appeared that CLASS should have included the warrant data. We referred all passport issuances we identified from our matching analysis, including our sample cases, to State for further review and investigation. Out of the combined total of approximately 28 million passport issuances we reviewed from fiscal years 2009 and 2010, we found 13,470 passport issuances to individuals who submitted an SSN associated with a deceased individual, but where the name used in the passport application did not match the name of the deceased individual. As we previously noted, we analyzed these cases to determine whether the applicant provided the correct SSN and State recorded it incorrectly, or whether the applicant provided the wrong SSN and State recorded the incorrect SSN in its system.
Specifically, from this population, we selected a stratified random sample that consisted of 140 passport issuances, evenly divided between fiscal years 2009 and 2010 (see fig. 9). We refer to these cases below as deceased-SSN errors. We estimated that approximately 50 percent of the 13,470 cases with deceased-SSN errors were instances where the applicant provided an SSN that did not belong to him or her. We did not identify any other evidence of potential fraud in this group of cases, which suggests that applicants may have made mistakes in filling out the passport application. We estimated that in approximately 44 percent of the 13,470 cases with deceased-SSN errors, applicants provided the correct SSNs, but State entered them incorrectly into TDIS. State officials could not provide an explanation for the errors related to these cases, which included applications with handwritten SSNs that were difficult to read, as well as typed SSNs. The remaining 6 percent of cases did not fall into either of these categories. Such cases included instances where the applicant did not provide an SSN, or where we were unable to ascertain the applicant’s actual SSN. Figure 10 summarizes the issuances we reviewed involving deceased-SSN errors. State may issue multiple passports to the same individual, such as when the applicant applies for both a passport book and a passport card. We also identified 24,278 passport issuances to individuals who provided likely invalid SSNs, that is, SSNs that had not been assigned at the time of the passport application or that had a high risk of misuse. These 24,278 passport issuances were associated with 22,543 unique SSNs. In some cases, more than one individual used the same likely invalid SSN to apply for a passport. We randomly selected a nongeneralizable sample of 15 cases from the population of unique, likely invalid SSNs for additional review (see fig. 11). In seven cases, State improperly recorded the applicants’ SSNs in TDIS.
In six cases, the applicant provided an incorrect SSN, five of which were close to the applicant’s actual SSN. We could not determine the cause of the invalid SSNs in the remaining two cases, because they involved minors for whom we could not ascertain the passport recipients’ actual SSNs. We provided a draft of this report to State and the Department of Justice for comment. State provided technical comments, which we incorporated into the report, as appropriate. The Department of Justice did not have any comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of State, and the Attorney General. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. You asked that we assess potential fraud in the Department of State’s (State) passport program. This report examines potentially fraudulent or high-risk issuances among passports issued during fiscal years 2009 and 2010. To examine potentially fraudulent and high-risk passport issuances in fiscal years 2009 and 2010, we matched State’s passport-issuance data for approximately 28 million passport issuances to databases containing information about individuals who were (1) deceased, (2) incarcerated in a state prison facility, (3) in the custody of the federal Bureau of Prisons (BOP), or (4) the subject of an active warrant at the time of the passport issuance. We conducted this matching on the basis of common data elements, including Social Security number (SSN), name, and date of birth.
We also analyzed the passport data to identify issuances to applicants who provided an invalid SSN, which was defined as an SSN that had not been assigned at the time of the passport application, or had a high risk of misuse. Because our review focused on passport issuances with certain fraud indicators, we did not review other types of passport fraud, such as identity theft involving individuals who were not deceased or imprisoned. Similarly, since we focused on high-risk issuances involving applicants with active warrants, we did not match the passport data against all databases with individuals at risk of misusing a passport, such as the Federal Bureau of Investigation’s Terrorist Screening Center data. We reviewed only data on passport issuances; therefore, we did not examine passport applications that were rejected or abandoned by the applicant. In addition, our unit of analysis was passport issuances, rather than passport holders. Some individuals in our sample may have been issued multiple passports. In addition, we examined policies and guidance, including the Foreign Affairs Manual, and other materials provided to passport specialists. We reviewed changes to State’s controls since fiscal years 2009 and 2010 with respect to preventing certain fraudulent or high-risk passport issuances. We also assessed the reliability of State’s passport data, TECS travel activity data provided by the Financial Crimes Enforcement Network (FinCEN), the Social Security Administration’s (SSA) full death file, prisoner databases provided by the BOP and by departments of corrections in 15 selected states, as well as data on individuals with open warrants provided by the Marshals Service, by reviewing relevant documentation, interviewing knowledgeable agency officials, and examining the data for obvious errors and inconsistencies. We concluded that all but four of these databases were sufficiently reliable for the purposes of this report.
Through data tests and interviews, we concluded that state prisoner data from Illinois, Louisiana, Michigan, and Pennsylvania were not sufficiently reliable for our purposes. We did not assess the reliability of state prisoner data from North Carolina because they were not provided in time to be included in our analysis. To identify individuals using the SSN of a deceased individual, we matched State passport data to SSA’s full death file as of September 2011. The full death file contains all of SSA’s death records, including state-reported death information. We included only those individuals who died more than 120 days before the passport was issued to ensure that our matches did not include legitimate applicants who died shortly after submitting an application. We further divided these passport applications into two groups on the basis of whether the name in the passport file matched the name in the corresponding death file record. From 13,470 passport records with names that did not match the corresponding record in the full death file, we selected a stratified random sample that consisted of 140 passport issuances, evenly divided between fiscal years 2009 and 2010. For each case, we examined a copy of the original passport application and submitted the SSN from State’s passport data to SSA for verification. We analyzed these cases to determine whether the applicant provided the correct SSN and State recorded it incorrectly, or whether the applicant provided the wrong SSN and State recorded the incorrect SSN in its system. Our estimates had a margin of error of at most +/-9 percentage points for the entire population, at the 95 percent confidence level. From 181 passport records with names that matched the corresponding record in the death file, we randomly selected a nongeneralizable sample of 15 records for additional analysis. 
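The death-file screen just described — matching on SSN, excluding deaths recorded within 120 days of issuance, and then splitting matches by whether the names agree — can be sketched as follows. This is a minimal illustration under stated assumptions, not State's or GAO's actual analysis tooling; the field names and sample records are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical records; the actual analysis used State's passport data and
# SSA's full death file (as of September 2011), keyed on SSN.
passports = [
    {"ssn": "123-45-6789", "name": "JANE DOE", "issued": date(2009, 6, 1)},
    {"ssn": "987-65-4321", "name": "JOHN ROE", "issued": date(2010, 3, 15)},
    {"ssn": "123-45-6789", "name": "JANE DOE", "issued": date(2000, 2, 1)},
]
death_file = {
    "123-45-6789": {"name": "JANE DOE", "died": date(2000, 1, 1)},
    "987-65-4321": {"name": "MARY MAJOR", "died": date(2005, 5, 5)},
}

def classify(passport, deaths, window_days=120):
    """Return 'name_match', 'name_mismatch', or None (no qualifying match)."""
    rec = deaths.get(passport["ssn"])
    if rec is None:
        return None
    # Exclude deaths within 120 days of issuance, so legitimate applicants
    # who died shortly after applying are not counted as matches.
    if passport["issued"] - rec["died"] <= timedelta(days=window_days):
        return None
    return "name_match" if rec["name"] == passport["name"] else "name_mismatch"
```

In the review described above, records in the name-mismatch group formed the population of 13,470 issuances from which the generalizable 140-case sample was drawn, while the 181 name-match records yielded the nongeneralizable 15-record sample.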
To verify that these individuals were deceased at the time their identity was used to apply for a passport, we attempted to obtain a death certificate for each applicant. In some cases, we were unable to obtain a death certificate because we could not identify the state in which the individual’s death was recorded or because state officials could not or would not provide the death certificate to us. The results of this sample are not generalizable to the entire population of applicants using the SSN and name of a deceased person. To identify individuals incarcerated at the time of passport issuance, we matched State passport data to a database of federal prisoners provided by the BOP and prisoner databases from Alabama, Arizona, California, Florida, Georgia, Indiana, Missouri, New York, Ohio, Texas, and Virginia. Federal prisoner data included individuals incarcerated during fiscal years 2009 and 2010. State prisoner data included individuals incarcerated as of the date the state provided data to us, which ranged from May to November 2011. We identified records for which the passport applicant’s SSN, name, and date of birth matched that of a person who was incarcerated on the date of passport issuance. State prisoner data from Florida, New York, and Texas did not contain dates of birth. For these states, we matched passport data to state prison data by SSN and name only. From our matches, we randomly selected 15 federal prisoners and up to 2 prisoners incarcerated in each of the states for additional analysis. If a state had two or fewer valid matches, we selected all matches from that state, for a total of 14 cases from eight different states. Three states did not have any matches. We obtained documentation from BOP and prison officials from the eight states to confirm that the selected individuals were incarcerated on the dates of passport application and issuance. 
The results of these samples are not generalizable to the entire population of applicants using the name, SSN, or date of birth of an incarcerated person. However, the cases offered insights on applicants who potentially used the identity of prisoners to apply for passports, and related efforts by State to identify such individuals. To identify individuals with active warrants at the time they applied for a passport, we matched State passport data to warrant data provided by the Marshals Service. We identified records for which the passport applicant’s SSN, name (or alias), and date of birth matched that of an individual with an open warrant on the date of passport issuance. From this population, we randomly selected 15 warrants for crimes other than misdemeanors and traffic violations for additional analysis. We confirmed with the Marshals Service that all 15 individuals had warrants related to felony charges. In addition, we referred to documentation provided by the Marshals Service that had warrant information for each case. We compared the warrant dates, the SSNs, names, and dates of birth, if available, in the warrant data we received from the Marshals Service for our matching analysis to the hard-copy documentation. For all 15 cases, the warrant issuance dates, as well as the names and fugitive unique identifiers, in the hard-copy documentation matched the information in the warrant data we used to match with the passport database. In addition, for 12 cases, at least one date of birth and SSN noted in the hard-copy documentation matched the warrant data used for our matching analysis. The dates of birth and SSNs in the documentation for 3 of the 15 individuals were redacted, and therefore the match was based on name and unique identifier only. 
The results of this sample are not generalizable to the entire population of applicants using the name (or alias), SSN, and date of birth of a fugitive, but provided insights about State’s efforts to identify such individuals during the adjudication process. Because we matched passport data to databases of deceased individuals, prisoners, and fugitives using two or more identifiers—SSN, name, date of birth—we are generally confident in the accuracy of our results. However, in some cases, our matches may include applicants who were not deceased, incarcerated, or the subject of an active warrant. This can occur when a passport applicant has an SSN, name, and date of birth that are similar to an individual listed in one of the other databases or when the applicant is listed in the other database erroneously. In addition, our matches may be understated because we may not have detected applicants whose identifying information in the passport data differed slightly from their identifying information in other databases. Moreover, federal warrant data do not contain information on all individuals with an open warrant issued by a state court. We analyzed State passport data to identify issuances to individuals using an invalid SSN. We defined invalid SSNs as SSNs that had not been issued as of fiscal year 2010 or commonly misused SSNs. SSNs with certain digit combinations, such as those starting with 000 or 666, had never been issued as of fiscal year 2010. Commonly misused SSNs include single-character SSNs, or SSNs that have been publicly disclosed in advertisements. From this population, we selected 15 cases for additional analysis. The results of this sample are not generalizable to the entire population of applicants using an invalid SSN, but the cases provided insights on State’s controls related to identifying inaccurate SSNs on passport applications. In total, we selected 214 cases for additional analysis, as shown in figure 12.
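The invalid-SSN screen described above can be sketched as a simple predicate. This is an illustrative sketch, not GAO's actual analysis code: "single-character SSNs" is interpreted here as an SSN consisting of one repeated digit (an assumption), and the advertised-SSN list is a placeholder, with 078-05-1120 shown as one widely publicized example of a commonly misused SSN.

```python
# Placeholder list of publicly disclosed SSNs; the real review would use a
# fuller list of commonly misused numbers.
ADVERTISED_SSNS = {"078-05-1120"}

def is_likely_invalid(ssn: str) -> bool:
    """Flag SSNs that were never issued as of FY 2010 or are commonly misused."""
    digits = ssn.replace("-", "")
    if len(set(digits)) == 1:              # one repeated character, e.g., 111-11-1111
        return True
    if digits.startswith(("000", "666")):  # area numbers never issued as of FY 2010
        return True
    if ssn in ADVERTISED_SSNS:             # publicly disclosed SSNs
        return True
    return False
```

Applying such a predicate across the roughly 28 million issuances is what produced the population of 24,278 likely-invalid-SSN issuances discussed earlier.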
In all, we selected a total of 214 passport issuances for additional review for our five nongeneralizable and one generalizable samples. For each of the 214 passport issuances selected, we reviewed a copy of the original passport application, verified the SSN in State’s passport data using SSA’s database, and obtained records of the passport holder’s travel activity from FinCEN. We also reviewed State documentation of additional investigative activities taken in any of our cases. Where applicable, we obtained additional documentation about the death, incarceration, or fugitive status of applicants from federal and state agencies, and follow- up actions planned or taken by State. We could not determine whether the passport issuance was inappropriate or fraudulent without additional investigation of the facts and circumstances for each individual case. We were not able to perform these investigations due to restrictions on the use of State’s passport data. The period required for our review was a result of various factors, including a data-sharing negotiation with State, time required to receive requested data and documentation, extensive data preparation and analysis involving multiple databases, and the necessity for resource-intensive reviews of information on-site, given the sensitivity of certain information. We performed this audit from March 2010 through May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This figure is a printable version of the interactive graphic presented above, which provides an overview of the passport application and adjudication process. 
In April 2007, State and SSA signed an information-exchange agreement that allows State to automatically query SSA’s Enumeration Verification System records for verifying applicants’ identities and identifying deceased individuals. SSA provides State the death status of an applicant when all identifying fields—the Social Security number (SSN), name, and date of birth on the passport application—match an SSA record. However, if one of these identifying elements does not match, SSA will not provide a response with respect to death status. For example, SSA would not provide State a death status if an applicant submitted the SSN of a deceased individual but used a different name. Figure 14 summarizes the potential responses State receives from SSA. Out of the approximately 28 million combined passport issuances we reviewed from fiscal years 2009 and 2010, we identified 1,096 individuals whose Social Security number (SSN), name, and date of birth were associated with 1,309 warrants in data provided by the Marshals Service from the Justice Detainee Information Center database. Of these 1,309 warrants, we identified 486 passport issuances to individuals who may have had active felony warrants at the time of application submission. Over half of the 1,309 warrants were for offenses such as misdemeanors and traffic violations. According to officials at the Department of State (State), the department has no regulatory authority to take action on misdemeanor warrants. Officials from the Marshals Service said the agency tracks other warrants for federal agencies, including those related to misdemeanors and traffic offenses, as well as cases at the state level that require the assistance of the Marshals Service. Officials of the Marshals Service also said they do not have responsibility for entering information about such warrants into the National Crime Information Center (NCIC) database, and therefore such warrants are classified as non–Class 1 warrants. 
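The response rule summarized in figure 14 — SSA returns a death status only when the SSN, name, and date of birth on the application all match an SSA record — can be sketched as follows. The field names and return strings are hypothetical and do not reflect SSA's actual Enumeration Verification System interface.

```python
def evs_response(application, ssa_record):
    """Sketch of the all-fields-must-match rule: any mismatched identifying
    element suppresses the death-status response entirely."""
    for field in ("ssn", "name", "dob"):
        if application[field] != ssa_record[field]:
            return "no death status provided"
    return "death indicated" if ssa_record["deceased"] else "no death record"
```

For example, an applicant who submits a deceased individual's SSN under a different name receives no death status, which is why issuances in the deceased-SSN-error population discussed earlier would not be flagged by this check during adjudication.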
On the other hand, Class 1 warrants are those that the Marshals Service is responsible for entering into NCIC and that it transmits to State’s Consular Lookout and Support System (CLASS) for adjudication. Marshals Service officials told us that of the 1,309 warrants we identified in our matching analysis, 111 (9 percent) were Class 1 warrants. Because the Marshals Service only transmits Class 1 warrants into CLASS, State would have identified the non–Class 1 warrants only if it received them from another source, such as warrants provided by the Federal Bureau of Investigation. Table 1 lists the types of open warrants we identified from our matching analysis. Some of the 1,309 warrants we identified were associated with felony offenses, including violent crimes ranging from assault to homicide. Ten of the warrants were associated with border and immigration crimes such as smuggling aliens and immigration violations. State may have identified and reviewed these warrants during the adjudication process, and decided to issue the passport after reviewing and resolving the circumstances of the case. We referred all our matches to State for further review and investigation. In addition to the contact above, Heather Dunahoo (Assistant Director); Hiwotte Amare; James Ashley; Patricia Donahue; Richard Hillman; Leslie Kirsch; Maria McMullen; Sandra Moore; Anthony Moran; James Murphy; Rebecca Shea; and Gavin Ugale made key contributions to this report.
Fraudulent passports pose a significant risk because they can be used to conceal the true identity of the user and potentially facilitate other crimes, such as international terrorism and drug trafficking. State issued over 13.5 million passports during fiscal year 2013. GAO was asked to assess potential fraud in State's passport program. This report examines select cases of potentially fraudulent or high-risk issuances among passports issued during fiscal years 2009 and 2010—the most recently available data at the time GAO began its review. GAO matched State's passport data from fiscal years 2009 and 2010 for approximately 28 million issuances to databases with information about individuals who were deceased, incarcerated in state and federal prison facilities, or who had an active warrant at the time of issuance. GAO also analyzed the passport data to identify issuances to applicants who provided a likely invalid SSN, which had not been assigned at the time of the passport application, or had been publicly disclosed. From each of these five populations, GAO selected nongeneralizable samples for additional review. GAO also randomly selected a generalizable sample from a population of passport issuances to applicants who used only the SSN of a deceased individual. GAO reviewed State's adjudication policies, and examined passport applications for these populations to further assess whether there were potentially fraudulent or high-risk issuances. State provided technical comments and generally agreed with GAO's findings. This report contains no recommendations. Of the approximately 28 million passports issued in fiscal years 2009 and 2010 that GAO reviewed, it found issuances to applicants who used the identifying information of deceased or incarcerated individuals, had active felony warrants, or used an incorrect Social Security number (SSN); however, GAO did not identify pervasive fraud in these populations.
The Department of State (State) has taken steps to improve its detection of passport applicants using identifying information of deceased or incarcerated individuals. In addition, State modified its process for identifying applicants with active warrants, and has expanded measures to verify SSNs in real time. GAO referred, and State is reviewing, matches from this analysis. The following summarizes GAO's findings: Deceased individuals. As shown in the figure, GAO identified at least 1 case of potential fraud in the sample of 15 cases, as well as likely data errors. State reviewed the cases referred by GAO, and indicated fraud could likely be ruled out in 9 of the 15 cases; State plans to further review 6 cases. State prisoners. GAO found 7 cases of potential fraud among the sample of 14 state prisoner cases. State noted fraud could likely be ruled out in 10 of the 14 cases, and intends to conduct additional reviews of 4 cases. Federal prisoners. None of the 15 cases in this sample had fraud indicators, because none of the individuals was actually in prison when applying for a passport. Individuals with active warrants. GAO found five cases where State identified the warrant and resolved it prior to issuance. As the figure shows, GAO also identified three cases with warrants that State was not aware of or alerted to, but that should have been in State's system for detection during adjudication. In addition, GAO found 13,470 passport issuances to individuals who used the SSN, but not the name, of a deceased person, as well as 24,278 issuances to applicants who used a likely invalid SSN. GAO reviewed a 140-case generalizable sample and a 15-case nongeneralizable sample for these two populations, respectively, and determined the cases were likely data errors. State has taken steps to capture correct SSN information more consistently.
GPRA is intended to shift the focus of government decision-making, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the president’s budget, provide a direct linkage between an agency’s longer-term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. OPM’s mission is to support the federal government’s ability to have the best workforce possible to do the best job possible. OPM is to accomplish this mission by leading federal agencies in shaping human resources management systems to effectively recruit, develop, manage, and retain a high-quality and diverse workforce; protecting national values embodied in law, including merit principles and veterans’ preference; serving federal agencies, employees, retirees, their families, and the public through technical assistance, employment information, pay administration, and benefits delivery; and safeguarding employee benefit trust funds. 
The results of OPM’s efforts largely take place at federal agencies outside of the direct control of OPM. This section discusses our analysis of OPM’s performance in achieving its selected key outcomes and the strategies it has in place, particularly strategic human capital management and information technology, for accomplishing these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which the agency provided assurance that the performance information it is reporting is credible. We cannot assess progress made by OPM in contributing to the outcome that the federal government has an appropriately constituted workforce with the proper skills to carry out its missions. OPM has several goals that relate to this outcome, but none that focus squarely on the degree to which the federal workforce has the right skill mix. Specifically, OPM’s fiscal year 2000 performance report includes goals that “federal human resources management policies and programs are merit-based, mission- focused, and cost effective” and “a model for workforce planning . . . is in place” for use by agencies. OPM’s performance report states that both of these goals were met. OPM states that the first goal was met because it produced a few studies that contributed to human resource policy or program proposals. The second goal was met, according to the OPM fiscal year 2000 performance report, because OPM has provided the workforce planning model to several agencies. This model, and other important steps OPM has taken to support better workforce planning—including developing research tools and launching a website to facilitate information sharing about workforce planning issues—could prove helpful to agencies in addressing their individual strategic human capital challenges. 
As a next step, OPM needs to measure, for example, how the studies and workforce planning model actually contributed to improved strategic human capital management at the agencies. Specifically, as an intermediate outcome, OPM could measure the number of agencies that were able to identify skill shortages and solutions as a result of using workforce planning. Previously, we have reported that OPM should take a more proactive role in agency workforce planning efforts, and our April 2001 report on expected trends in federal employee retirements further highlights the need for improved workforce planning. Other information indicates that this outcome is not commonly being achieved. Our high-risk series gave many examples of agencies not having the appropriate workforce to carry out their missions. For example, the Department of Energy did not have employees with adequate contract management skills to oversee the cleanup of hazardous waste sites, and nursing shortages at Veterans Affairs facilities could put veterans at risk. OPM’s fiscal year 2002 plan contains several strategies that, for the most part, appear to be reasonable. For example, OPM plans to obtain input from agencies on how workforce policies need to be changed and to explore policies on dual compensation and phased retirement to bolster retention of federal employees. Determining whether these strategies are successful will require OPM to develop indicators of whether federal agencies and departments have appropriately skilled workforces and how these strategies are being used to build workforce skills. As is the case with the first outcome, OPM’s performance report does not contain sufficient outcome measures to fully assess the extent to which federal employees are held accountable for their performance. OPM’s performance report contains several goals related to this outcome.
For example, OPM is to develop performance-oriented approaches to employee compensation and to provide assistance in developing performance management systems. OPM measures goal achievement by such indicators as the number of workshops offered, the number of performance studies available on the OPM Website, and whether performance management guidance is issued in a timely manner. Other information indicates that much more needs to be done to improve performance management at federal agencies. For example, in our October 2000 report, we noted that surveys we had administered to managers showed that only 26 percent in 1997 and 31 percent in 2000 reported that employees in their agencies had received positive recognition to a great or very great extent for helping agencies accomplish their strategic goals. Also, the Merit Systems Protection Board (MSPB) and we have previously reported that holding employees accountable for their job performance continues to be perceived as a challenge because employees perceive the process as cumbersome. OPM’s plan identifies a variety of strategies for achieving goals that relate to the outcome of evaluating, rewarding, and otherwise holding federal employees accountable for their performance. The strategies call for providing guidance and information to agencies, including information on best practices, as well as working with internal and external stakeholders to identify needed changes in compensation and performance policies and programs. Although the strategies appear reasonable, how they will help to achieve the outcome of holding employees accountable for performance is not always clear. For example, a strategy OPM cited to help achieve its goal of identifying options for performance-oriented approaches to compensation was to maintain comprehensive research on best practices in private and public sector compensation systems and tools that the federal government can use. 
But OPM offers no explanation of how the use of such systems and tools will aid the federal government in holding employees accountable for their performance. OPM has made mixed progress on the outcome that federal agencies adhere to merit system principles. On the one hand, OPM’s fiscal year 2000 performance report states that OPM’s periodic reviews of agencies have identified no systemic merit principle weaknesses. On the other hand, the results of OPM’s government-wide survey of federal employees conducted in fiscal years 1999 and 2000 indicate that a sizable percentage of employees think that certain merit principles are not being followed. The fiscal year 2000 performance report includes goals related to the overall adherence to merit principles by agencies, including agencies with delegated examining authority. OPM uses a variety of measures to determine if this outcome is being achieved, including (1) the results of merit system reviews of federal agencies, (2) agencies’ satisfaction with the reviews, and (3) the views of federal employees regarding adherence to merit principles. OPM’s report states that the reviews indicated that agencies, including those with delegated examining authority, were adhering to merit principles. According to the performance report, the problems found in OPM’s reviews were not systemic. Once problems were identified in the review, OPM worked with the agency to resolve the problems. In its reviews, OPM also identified best practices and shared them with other agencies. The views of federal employees on adherence to the nine merit system principles, as provided in an OPM survey, indicated that there was no significant change from the fiscal year 1999 survey. OPM’s goal was to increase by two percentage points the percentage of federal employees who believed each of the merit principles was being adhered to by their agencies.
There are a variety of factors that influence employees’ responses to this question, including governmentwide economic, cultural, and social conditions. For this reason, OPM expects substantive change in the perceptions of these principles to take place over several years. This year’s survey indicates that a relatively large portion of federal employees believed that employees maintain a high standard of integrity and concern for the public and that employees are protected from improper political influence. On the other hand, less than half believed that employees are protected against reprisal for the lawful disclosure of information or are provided equal pay for equal work and rewarded for excellent performance, and only a little more than half thought that employees are managed efficiently and effectively. Similar results were reported in the merit system principles survey conducted by MSPB in 2000. OPM’s fiscal year 2002 performance plan identifies a variety of strategies that are consistent with its current efforts. The current strategies should help OPM achieve its goals as well as contribute to the outcome of ensuring that agencies adhere to the merit system principles. For example, to help ensure that personnel practices are carried out in accordance with these principles, OPM’s strategies include conducting nationwide agency merit system oversight reviews, auditing agencies with delegated examining authority and reviewing reports filed by these agencies to identify any training needs, and reviewing all agency selections for initial career Senior Executive Service appointments for compliance with merit system principles. OPM does not include coordination with MSPB as a strategy for achieving performance goals within its Office of Merit Systems Oversight and Effectiveness—the program office that is responsible for leading the federal government’s efforts in overseeing the merit system.
MSPB’s mission, in part, is to ensure that agencies make employment decisions in accordance with the merit system principles. In support of its mission, MSPB hears and decides cases involving abuses of the merit system. It also administers the merit principles survey to gather data on the “health” of the federal civil service. OPM’s strategy should also consider MSPB’s decisions and merit principles survey in helping to achieve this outcome. However, even though both agencies administer programs and conduct similar activities that share a common purpose, OPM’s strategic plan for fiscal years 2000 through 2005 states that coordination with MSPB is limited to adjudicatory issues. We could not fully assess the progress OPM is making to reduce fraud and error in the Federal Employees Health Benefits Program. The OPM OIG has identified health care fraud in the Federal Employees Health Benefits Program as one of the most serious management challenges facing OPM. The fiscal year 2000 performance report contains an OIG goal to have fraud against OPM programs detected and prevented. This goal has several measures, including the number of convictions for health benefit program fraud (51 in fiscal year 2000) and the number of health benefit providers who are debarred and not allowed to participate in the Federal Employees Health Benefits Program (2,706 in fiscal year 2000). Although these are measures for the OIG, there were no goals or strategies related to the detection and prevention of fraud at the programmatic level for the Federal Employees Health Benefits Program in the Office of Retirement and Insurance Service (RIS), whose mission, in part, is to provide accurate and cost-efficient benefit services. For example, there were no goals or strategies to decrease the number of errors or fraud cases to a minimum. 
In addition, there were no baseline indicators of the dollar amount of fraud or errors found in the health benefits program or quantitative targets against which to measure progress. OPM believes that measures identified by the OIG are consistent with RIS’ expectations and says that RIS has worked in unison with the OIG to minimize fraud and abuse in the Federal Employees Health Benefits Program. While we recognize this, we believe that OPM needs to develop goals and measures within RIS for detecting and preventing fraud and errors in the health benefits program. The fiscal year 2002 performance plan contains a strategy to have the employee benefit trust funds be models of excellence and integrity in financial stewardship. The OIG includes strategies related to reducing fraud and errors, such as pursuing debarment of untrustworthy health care providers and conducting aggressive investigations where fraud and abuse are suspected, which seem reasonable. The RIS had no goals or performance indicators related to fraud and errors in the health benefits program, but included strategies such as (1) working with carriers participating in health benefits to ensure that audits are performed and (2) conducting financial statement audits to reduce the incidence of payment errors, which in part will help detect fraud and errors in the health benefits program. Although these strategies generally will help detect and reduce fraud and errors, it is unclear how they will be carried out or how OPM plans to measure progress, because the plan contains no baseline indicators of the dollar amount of fraud or errors in the health benefits program and no performance targets against which to gauge progress. In addition, performance indicators for the OIG measured progress in processing cases (debarments, indictments, and convictions) once the fraud is discovered but did not address measures for preventing it.
The fiscal year 2000 performance report describes mixed progress in the provision of timely and accurate retirement and insurance services. OPM’s retirement and insurance program continues to receive high satisfaction ratings from its customers, but timeliness of retirement claims processing has declined. The fiscal year 2000 performance report outlines several outcome-oriented goals that include increasing customer satisfaction with services and reducing processing times. Customer satisfaction with OPM’s retirement and insurance programs remained high during fiscal year 2000. For example, more than 90 percent of new retirees said they were very or generally satisfied with how their claims were handled. Claims processing times, however, did not meet target levels, particularly the time to process Federal Employees Retirement System (FERS) claims. Specifically, Civil Service Retirement System (CSRS) claims processing time increased to 44 days from 32 days in fiscal year 1999 and FERS processing time increased to more than 6 months from 3 months in fiscal year 1999. The goal for CSRS processing time is 25 days and for FERS processing time is 60 days. OPM recognizes that it needs to address lagging times in retirement claims processing, and the fiscal year 2002 performance plan contains strategies that could improve claims processing timeliness. The plan states that the current processing is based on “aging technology, paper-based business processes, and a heavy reliance on human resources.” One of the strategies cited in OPM’s plan for reducing claims processing times is to add more resources to the processing of FERS claims. OPM’s measure of its success in achieving this goal is to gradually reduce processing times for these claims from a fiscal year 2000 level of 6 months to 5 months in fiscal year 2001 and 3 months in fiscal year 2002. The number of FERS claims is expected to increase by nearly 40 percent between fiscal year 2000 and fiscal year 2002. 
The number of employees seeking retirement services is expected to dramatically increase beyond 2002. To address long-term needs, OPM is implementing a Retirement Systems Modernization project that is expected to significantly reengineer and automate retirement claims processing. OPM is implementing the modernization of the retirement systems in phases and expects to realize significant business benefits each year. OPM says it has already seen improvements. For example, OPM says it implemented a prototype FERS Benefit Calculator that has helped to reduce processing times and the balance of aged cases. However, the modernized retirement systems will not be fully operational until 2009. Reducing claims processing times with an increasing workload will be a significant challenge for OPM. OPM has made some improvements to its fiscal year 2000 performance report and fiscal year 2002 performance plan over previous years. However, a number of weaknesses that OPM recognizes remain. This section describes improvements and remaining weaknesses in OPM’s (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report, and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. OPM’s fiscal year 2000 performance report represents an improvement over the fiscal year 1999 report, but opportunities remain for additional improvements. In our June 2000 report we indicated that OPM’s fiscal year 1999 performance report did not identify the performance measures most critical to goal achievement. The fiscal year 2000 report clearly identifies the most critical of the several measures included under most goals. If the critical indicator was met, then OPM considered the goal met. Last year we noted a number of performance report weaknesses, including the use of activity-based indicators instead of outcome-based indicators and the lack of specific target measures for goals. 
Again for fiscal year 2000, many indicators are activity based or do not contain specific targets. The following are examples of activity-based measures or those without targets: To determine whether delegated examining is conducted in accordance with merit system laws, OPM measured the number of reviews conducted of agency-delegated examining activities. An OPM measure is to ensure that the OPM workforce is well trained for current and future needs; however, there is no target identified to determine when this has been achieved. In measuring customer or employee satisfaction, OPM uses terms such as “high degree” of satisfaction without defining what constitutes a “high degree” of satisfaction. OPM does not identify which positive feedback rate (e.g., 80 percent or 90 percent) is judged to be “high satisfaction” for the particular indicator. A performance management goal is for OPM to formulate performance- oriented approaches to compensation. OPM considered this goal met because it disseminated information on state-of-the-art compensation practices. Even though these measures indicate positive activity, they are not measures of actual goal achievement, and without specific targets it is not possible to determine whether the goal was met. The fiscal year 2000 performance report also contains some measures that were not considered reliable by the OPM OIG. The performance report states that the Inspector General reviewed 116 of the 458 performance measures and found that 59 percent were based on reliable information and 17 percent were based on unreliable information; for 24 percent of the measures, the OIG could not determine their reliability. Further, concerns exist about the reliability of key surveys that are used by OPM as measures for goal achievement throughout the performance report. 
The low participation rates for the current Human Resources Directors’ Survey and the earlier Human Resources Specialists Survey (which was not conducted in fiscal year 2000) pose a material risk that the respondents may not be representative of the overall survey population. In addition, in the case of the Human Resources Directors’ Survey, the low participation rate caused a margin of error of 9.9 percent, limiting the usefulness of the results. The fiscal year 2002 performance plan continues with several of the strengths of the 2001 plan. The plan is directly linked to the OPM strategic plan and is integrated with the OPM Congressional Budget Justification. The plan also includes a resource summary by major OPM strategic goal, including the dollars and full-time equivalents requested by goal. The fiscal year 2002 plan, like the fiscal year 2000 report, contains a number of OPM activity-based, rather than outcome-based, goals and measures or indicators. The fiscal year 2002 plan continues to rely on the results of some governmentwide surveys and secondary anecdotal information to measure whether target levels established in the previous years’ plans have been met. The 2002 plan discusses steps that OPM will take to address some of the weaknesses with these surveys and anecdotal information. For example, OPM states in its 2002 plan that it discontinued several indicators that were based on unreliable data sources. The reliability and validity of informal feedback has inherent limitations that cannot be made more reliable and statistically valid by the planned enhancement of procedures to collect and track the information. OPM is proceeding with the implementation of a new measurement framework. It plans to conduct more formal evaluations of the outcomes of specific policies and programs, identify agency-level performance measures, use agency-level measures as the primary basis of performance reporting, and tie the measures more closely to the strategic goals. 
This section discusses the extent to which OPM is internally addressing strategic human capital management and information security. Both OPM’s performance report and its plan address these challenges within the agency. Regarding OPM’s internal strategic human capital management, we found that the fiscal year 2002 performance plan contains several goals and measures related to OPM’s internal strategic human capital management, and OPM’s fiscal year 2000 performance report describes progress in resolving some of these OPM-level strategic human capital management challenges. For example, both the report and the plan contained goals related to recruiting, retaining, and managing a workforce to meet OPM program needs. Of particular note was the requirement that all employee performance plans be linked to agency strategic goals and the establishment of baseline data to measure the rate of retention among employees who complete career development programs. To further improve its strategic human capital management goals and strategies, OPM needs to link these strategies to specific OPM programs. For example, the OPM performance plan states that OPM has significantly changed its ratio of employees to supervisors, which now exceeds the governmentwide average. OPM’s performance plan also states that it wants to further increase this ratio. The plan also needs to discuss what impact this ratio change will have on program outcomes and what additional human capital strategies might be needed to address the reduction in the number of supervisors. OPM also could establish the relationship of impending retirements to OPM succession planning processes to ensure that critical competencies and leadership are available for mission-critical activities. 
With respect to information security, we found that OPM’s fiscal year 2002 performance plan contains a goal and measures related to information security, and the agency’s fiscal year 2000 performance report explains its progress in resolving its information security challenges. OPM reports that in fiscal year 2000, it met its goal of ensuring that the information security program provided adequate computer security commensurate with the risk and magnitude of harm that could result from loss or compromise of mission-critical information technology systems. However, the results of the independent public accountant’s audit of OPM’s fiscal year 2000 consolidated financial statements show that a reportable condition continues to exist in the electronic data processing general control environment. The audit noted weaknesses in (1) entity-wide security, (2) access control, (3) control over application changes and software development, and (4) service continuity planning. The target date for describing the corrective action taken to resolve these deficiencies is fiscal year 2001. The fiscal year 2002 performance plan includes a goal to enhance information security. The plan states that the absence of critical security problems is the critical indicator for achieving this goal. OPM’s mission, in part, is to provide strategic human capital management leadership and services to federal agencies. OPM’s fiscal year 2000 performance report and fiscal year 2002 performance plan contain many goals that measure the extent of its activities, but there are few goals and measures that assess the actual state of strategic human capital management in the federal government or the specific contributions that OPM’s programs and initiatives make. Even though OPM does not directly control these outcomes in federal agencies, it needs to measure the results to assess how well its leadership and services are working. 
OPM has recognized this weakness and is working with federal department and agency human resource directors to develop a series of human capital measures. OPM also needs to make other improvements to its report and plan, including strengthening goals and measures to improve their reliability, linking its internal human capital goals to OPM programs, and establishing a program management performance goal to assess fraud and error in the Federal Employees Health Benefits Program. To better assess OPM programs, we recommend that as a part of OPM’s continued strengthening of its efforts to clearly define goals, measure its performance, and provide leadership over strategic human capital management, the Director of OPM develop goals and measures that assess the state of human capital at federal departments and agencies, replace informal feedback measures with indicators that are more reliable, and better link internal strategic human capital management goals to specific OPM programs and outcomes. In addition, we also recommend that the Director of OPM develop goals and measures that assess the prevention and detection of fraud and errors in the Federal Employees Health Benefits Program, perform a risk assessment to identify areas most vulnerable to fraud and errors, and institute internal controls to prevent and detect occurrences. Our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from the Office of Management and Budget (OMB) for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of OPM’s operations and programs, our identification of best practices concerning performance planning and reporting, and our observations on OPM’s other GPRA-related efforts. We also discussed our review with OPM officials and with OPM’s OIG. 
The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member, Senate Governmental Affairs Committee, as important mission areas for the agency and generally reflect the outcomes for all of OPM’s programs or activities. The major management challenges confronting OPM—including the governmentwide high-risk areas of strategic human capital management and information security that we identified in our January 2001 performance and accountability series and high-risk update—were identified by us and by OPM’s OIG in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of OPM’s performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. We provided a draft of this report to OPM for its review and comment. OPM’s Acting Director provided written comments, which are reprinted in appendix II. Overall, he agreed with the results of our review, including our recommendations, and appreciated our recognizing the strategic challenges OPM faces with regard to human capital management as well as its efforts to improve its measurement framework, including the shift to measuring governmentwide outcomes. OPM also provided specific comments to clarify information we presented on four of the five selected key outcomes. We made several changes to this report in response to these comments. Our responses are given in appendix II as well as in various sections of this report. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees, the Acting Director of OPM, and the Director of OMB. 
Copies will also be made available to others on request. If you or your staff have any questions, please call me at (202) 512-6806. Key contributors to this report were Bill Doherty, Danielle Holloman, Linda Lambert, Mary Martin, Elizabeth Martinez, Ben Ritt, Ed Stephenson, and Scott Zuchorski. Table 1 identifies the major management challenges confronting OPM, which include the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the challenges that we and/or OPM’s OIG have identified. The second column discusses what progress, as discussed in its fiscal year 2000 performance report, OPM has made in resolving its challenges. The third column discusses the extent to which OPM’s fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and/or OPM’s OIG identified. We found that OPM’s performance report discussed the agency’s progress in resolving all challenges. Of the agency’s seven major management challenges, its performance plan had goals and measures that were directly related to all seven of the challenges. OPM can build upon its efforts by more clearly identifying the specific strategies that it is using to address its challenges. Such information is important to help OPM, the Congress, and other decisionmakers determine whether the best mix of strategies is in place and to help pinpoint improvement opportunities. The following are GAO’s comments on the specific comments contained in the enclosure to OPM’s letter dated June 26, 2001. 1. We revised this report to show that the retirement systems modernization project is being implemented in phases and that it will not be fully operational until 2009. 2. In the draft of this report provided to OPM for comment, we stated that OPM’s goal for fiscal year 2000 was to make the workforce planning model available to agencies for their use and that this goal was met. 
Although making the model available to agencies is a useful activity, OPM’s performance report and plan do not make clear what the outcome or result was from this activity. For example, neither document provides information on how agencies have used this model to help ensure that they have an appropriately constituted, properly skilled workforce to carry out their missions. OPM comments that “there is already real evidence that the model has been of assistance, as all Federal agencies are meeting a deadline of June 29, 2001, to provide the Office of Management and Budget with individual workforce analyses as a first step in meeting the President’s initiative to use human capital planning to streamline Government.” The basis upon which OPM makes this statement is unclear, because although OPM says the workforce planning model has been of assistance to agencies, it does not say how many agencies actually used it in responding to OMB’s directive. 3. We recognize that OPM’s goal for fiscal year 2000 was to complete its research on private and public sector best practices in compensation systems and tools. However, OPM does not describe in its fiscal year 2000 performance report or 2002 performance plan what outcome or result was expected for this activity-based goal in terms of ensuring employee performance accountability. Thus, we continue to believe that OPM needs to have goals, measures, and strategies in place that will show how the use of the compensation systems and tools identified as a result of its research will aid the federal government in holding employees accountable for their performance. 4. We changed the report to recognize OPM’s belief that the measures the OIG identified for detecting fraud and error in the Federal Employees Health Benefits Program are consistent with RIS’ expectations. However, as we have stated in this report, the OIG’s measures relate to detecting fraud after it has occurred rather than preventing it. 
Accordingly, we continue to believe that OPM needs to develop goals and measures within the program office, RIS, for detecting and preventing fraud and errors in the health benefits program. OPM commented that it will look for ways to develop additional program-wide goals and measures relating to fraud and errors in the federal health benefits program. 5. In addition to the changes described in comment 1, we changed the report to reflect that the retirement systems modernization project has not been delayed. We also cited an example of improvements OPM has seen thus far in implementing the modernized retirement systems.
This report reviews the Office of Personnel Management's (OPM) fiscal year 2000 performance report and fiscal year 2002 performance plan. OPM's mission, in part, is to provide strategic human capital management leadership and services to federal agencies. OPM's fiscal year 2000 performance report and fiscal year 2002 performance plan contain many goals that measure the extent of its activities, but there are few goals and measures that assess the actual state of strategic human capital management in the federal government or the specific contributions that OPM's programs and initiatives make. Although OPM does not directly control these outcomes in federal agencies, it needs to measure the results to assess how well its leadership and services are working. OPM recognizes this weakness and is working with human resource directors at federal agencies to develop a series of human capital measures. In its report and plan, OPM also needs to strengthen goals and measures to improve their reliability, link its internal human capital goals to OPM programs, and establish a program management performance goal to assess fraud and error in the Federal Employees Health Benefits Program.
The success of crosscutting, multi-organizational efforts depends on certain key concepts to meld organizational efforts. These include central leadership, an overarching strategy, effective partnerships, and common definitions. These are critical elements that underpin the Government Performance and Results Act of 1993 or were shown as critical in our related work on combating terrorism efforts and the successful resolution of Y2K computer problems. In March 2002, we testified about these elements in terms of promoting partnerships in the development of a national strategy for homeland security. We have previously reported that the general tenets embraced by the Results Act provide agencies with a systematic approach for managing programs. The Results Act principles include clarifying missions, developing a strategy, identifying goals and objectives, and establishing performance measures. When participants in a crosscutting program understand how their missions contribute to a common strategy, they can develop goals and objectives and implementation plans to reinforce each other’s efforts and avoid duplicating or inadvertently obstructing them. Moreover, a uniformly rigorous approach to assessing performance can enable the Executive Branch and the Congress to identify programs that are not operating as intended and target corrections as needed. Our work on combating terrorism indicated that without central leadership and an overarching strategy that identifies goals and objectives, priorities, measurable outcomes, and state and local government roles, the efforts of the more than 40 federal entities and numerous state and local governments were fragmented. Specifically, we found that key interagency functions in combating terrorism resided in several different organizations and that this redundancy led to duplication of effort. 
We reported that state and local officials have expressed concerns about duplication and overlap among federal programs for training about weapons of mass destruction and related matters. Some officials said that the number of federal organizations involved created confusion concerning who was in charge. As we noted in our September 2001 report on combating terrorism, a representative of the International Association of Fire Chiefs testified similarly that efforts would benefit greatly from an increased level of coordination and accountability. Our work also showed that common definitions promote effective agency and intergovernmental operations and permit more accurate monitoring of expenditures at all levels of government. Effective partnerships are also key in crosscutting efforts. In the Y2K effort, for example, the issues involved went beyond the federal government to state and local governments and to key economic sectors, such as financial services, power distribution, and telecommunications. A failure in any one area could have affected others, or critical services could have been disrupted. Thus, the President’s Council on Year 2000 Conversion established more than 25 working groups drawn from different economic sectors and initiated numerous outreach activities to obtain the perspectives of those involved on crosscutting issues, information sharing, and the appropriate federal response. Lastly, in March 2002, we testified on the need for a national strategy to improve national preparedness and enhance partnerships among federal, state, and local governments to guard against terrorist attacks. This strategy should clarify the appropriate roles and responsibilities of federal, state, and local entities and establish goals and performance measures to guide the nation’s preparedness efforts. Homeland security is a priority among public and private sector entities, but their efforts are not fully unified. 
Federal agencies are undertaking homeland security initiatives, but without the national strategy they cannot know how their initiatives will support overarching goals and the efforts of other agencies. Some state and local governments and private sector entities are waiting for further guidance on national priorities, roles and responsibilities, and funding before they take certain additional actions. A key step toward a more unified approach was achieved in October 2001 with Executive Order 13228, when the President established a single focal point to coordinate efforts against terrorism in the United States—the Office of Homeland Security. The national strategy is under development, and partnerships among federal, state, and local governments and the private sector are evolving. However, the federal government does not yet have commonly accepted and authoritative definitions for key terms, such as homeland security. Public and private sector entities have been either pursuing their own homeland security initiatives without assurance that these actions will support the overall effort, or they have been waiting for further guidance before undertaking certain new initiatives. For example, the U.S. Coast Guard has realigned some resources to enhance port security, drawing them from maritime safety, drug interdiction, and fisheries law enforcement. Similarly, the Customs Service has used approximately 1,500 personnel since September 11 in support of the Federal Aviation Administration’s Air Marshal program and the Federal Bureau of Investigation’s Joint Terrorism Task Forces; Customs Service aircraft and crews were assigned to assist the North American Aerospace Defense Command; and the Customs Service also undertook other initiatives to bolster homeland security. The Department of Defense has initiated two major operations. 
Operation Enduring Freedom is a combat mission conducted overseas in direct pursuit of terrorists and their supporters, while Operation Noble Eagle concerns increased security required for the nation’s homeland. To help accomplish these new efforts, the department has recommended and been authorized to create a new unified command—the Northern Command—to lead all of the department’s military homeland security missions and has activated almost 82,000 Reserve and National Guard service members for participation in these operations. The Department of Transportation, in response to legislation, established the Transportation Security Administration and is in the process of hiring over 30,000 baggage screeners at airports across the United States. In addition, the Department of Health and Human Services, including the Centers for Disease Control and Prevention, has received significant new funding to support its homeland security programs. At the same time, officials from these agencies as well as associations of state officials stated that they were waiting for the Office of Homeland Security to provide a vision and strategy for homeland security and to clarify additional organizational responsibilities. Certain state officials said that they are uncertain about additional roles for state and local governments as well as how they can proceed beyond their traditional mission of managing the consequences of an incident or providing for public health and safety. Uncertainty about funding may also impede a unified approach to homeland security. At the time of our report, officials representing state and local governments as well as the private sector believed they were unable to absorb new homeland security costs. The National Governors’ Association estimated fiscal year 2002 state budget shortfalls of between $40 billion and $50 billion, making it difficult for the states to take on new initiatives without federal assistance. 
Similarly, representatives from associations representing the banking, electrical energy, and transportation sectors told us that member companies were concerned about the cost of additional layers of security. For example, according to National Industrial Transportation League officials, transport companies and their customers are willing to adopt prudent security measures (such as increased security checks in loading areas and security checks for carrier drivers), but are concerned about the cost of new security regulations and their impact on the companies’ ability to conduct business. At the same time, North American Electric Reliability Council officials told us that utility companies need a way to recoup expenses incurred in protecting facilities the federal government deems critical to homeland security. As we have testified, our previous work on federal programs suggests that the choice and design of policy tools have important consequences for performance and accountability. Governments have a variety of policy tools, including grants, regulations, tax incentives, and regional coordination and partnerships, to motivate or mandate other levels of government or the private sector to address security concerns. Key to the national effort will be determining the appropriate level of funding so that policies and tools can be designed and targeted to elicit a prompt, adequate, and sustainable response while protecting against federal funding being used as a substitute for state, local, or private sector funding that would have occurred without federal assistance. Inadequate intelligence and sensitive information sharing have also been cited as impediments to participation in homeland security efforts. Currently, no standard protocol exists for sharing intelligence and other sensitive information among federal, state, and local officials. 
Associations of state officials believe that intelligence sharing has been insufficient to allow them to effectively meet their responsibilities. According to a National Emergency Management Association official, both state and local emergency management personnel have not received intelligence information, hampering their ability to interdict terrorists before they strike. According to this official, certain state and local emergency management personnel, emergency management directors, and fire and police chiefs hold security clearances granted by the Federal Emergency Management Agency; however, these clearances are not recognized by other federal agencies, such as the Federal Bureau of Investigation. The National Governors’ Association agreed that inadequate intelligence sharing is a problem between federal agencies and the states. The association explained that most governors do not have security clearances and, therefore, do not receive classified threat information, potentially undermining their ability to use the National Guard to prevent an incident and hampering their emergency preparedness capabilities to respond to an incident. On the other hand, the Federal Bureau of Investigation believes that it has shared information with state or local officials when appropriate. For example, field offices in most states have a good relationship with the emergency management community and have shared information under certain conditions. At the same time, bureau officials acknowledged that the perception that a problem exists could ultimately undermine the desired unity of efforts among all levels of government. Even federal agencies perceived that intelligence sharing was a problem. For example, Department of Agriculture officials told us that they believe they have not been receiving complete threat information, consequently hampering their ability to manage associated risks. Some homeland security initiatives to unify efforts are in place or under development. 
At the same time, we could not confirm that another key element, a definition of homeland security, was being addressed at the time we collected data for our report. The President established the Office of Homeland Security to serve as the focal point to coordinate the nation’s efforts in combating terrorism within the United States. The office is developing a national strategy and has begun to forge partnerships within the interagency system, with state and local governments, and with the private sector by establishing advisory councils comprised of government and nongovernment representatives. However, implementing the national strategy will be a challenge. The partnerships are not fully developed, and an authoritative definition of homeland security does not exist. In October 2001, the President established a single focal point to coordinate efforts to combat terrorism in the United States—the Office of Homeland Security. This action is generally consistent with prior recommendations, including our own, to establish a single point in the federal government with responsibility and authority for all critical leadership and coordination functions to combat terrorism. We had also recommended that the office be institutionalized in law and that the head of the office be appointed by the President and confirmed by the Senate. As constituted, the office has broad responsibilities, including (1) working with federal, state, and local governments as well as private entities to develop a national strategy and to coordinate implementation of the strategy; (2) overseeing prevention, crisis management, and consequence management activities; (3) coordinating threat and intelligence information; (4) reviewing governmentwide budgets for homeland security and advising agencies and the Office of Management and Budget on appropriate funding levels; and (5) coordinating critical infrastructure protection. 
The Office of Homeland Security is collaborating with federal, state, and local governments and private entities to develop a national strategy and coordinate its implementation. The strategy is to be “national” in scope, including states, localities, and private-sector entities in addition to federal agencies. It is to set overall priorities and goals for homeland security and to establish performance measures to gauge progress. At the federal level, the strategy is to be supported by a crosscutting federal budget plan. The national strategy is to assist in integrating all elements of the national effort by ensuring that missions, strategic goals, priorities, roles, responsibilities, and tasks are understood and reinforced across the public and private sectors. The office plans to deliver the national strategy to the President in June 2002. Officials at key federal agencies indicate that they expect the national strategy to provide a vision for homeland security and prioritize and validate organizational missions for homeland security. However, achieving the support of all of the organizations involved in devising and implementing the strategy is a daunting challenge because of their specialized, sometimes multiple missions; distinctive organizational cultures; and concerns about how forthcoming initiatives might affect traditional roles and missions. Partnerships are being established among federal, state, and local governments, and private sector entities to promote a unified homeland security approach. 
First, Executive Order 13228, which established the Office of Homeland Security, also established a Homeland Security Council made up of the President, Vice President, the Secretaries of the Treasury, Defense, Health and Human Services, and Transportation, the Attorney General, and the Directors of the Federal Emergency Management Agency, Federal Bureau of Investigation, Central Intelligence, the Assistant to the President for Homeland Security, and other officers designated by the President. Second, the President also established interagency forums to consider policy issues affecting homeland security at the senior cabinet level and sub-cabinet levels. Third, to coordinate the development and implementation of homeland security policies, the Executive Order created policy coordination committees for several functional areas of security, such as medical/public health preparedness and domestic threat response and incident management. These committees provide policy analysis in homeland security and represent the day-to-day mechanism for the coordination of homeland security policy among departments and agencies throughout the federal government and with state and local governments. In addition, the President established a Homeland Security Advisory Council with members selected from the private sector, academia, professional service associations, federally funded research and development centers, nongovernmental organizations, and state and local governments. The council is advised by four committees representing (1) state and local officials; (2) academia and policy research; (3) the private sector; and (4) local emergency services, law enforcement, and public health/hospitals. 
The functions of the Advisory Council include advising the President through the Assistant for Homeland Security on developing and implementing a national strategy; improving coordination, cooperation, and communication among federal, state, and local officials and private sector entities; and advising on the feasibility and effectiveness of measures to detect, prepare for, prevent, protect against, respond to, and recover from terrorist threats or attacks within the United States. In terms of interagency partnerships, federal agencies in some program areas have formal mechanisms to support collaboration, and other agencies report improvement in communication and cooperation. For example, the Federal Emergency Management Agency has coordinated the emergency response capabilities of 26 federal agencies and the American Red Cross by developing a comprehensive plan that establishes their primary and secondary disaster relief responsibilities, known as the Federal Response Plan. The plan establishes a process and structure for the systematic and coordinated delivery of federal assistance to state and local governments overwhelmed by a major disaster or emergency. As another example, the Department of Justice, as directed by Congress, developed the Five-Year Interagency Counterterrorism and Technology Crime Plan. The plan, issued in 1998, represents a substantial interagency effort. After the events of September 11, officials from the Federal Emergency Management Agency, the Environmental Protection Agency, and the Departments of Agriculture, Energy, Transportation, and the Treasury told us that their relationships with other federal agencies have improved. For example, some agencies reported increased contact with the intelligence community and regular contact with the Office of Homeland Security. Some agencies have indicated that they also provided a new or expanded level of assistance to other agencies. 
For example, the Department of Agriculture used its mobile testing labs to help test mail samples for anthrax; the Department of Defense provided security to the National Aeronautics and Space Administration prior to and during the launch of the space shuttle and to the Secret Service at such major sporting events as the Winter Olympics in Utah and the Super Bowl in New Orleans, Louisiana, in 2002; and the National Guard assisted with the security of commercial airports throughout the United States. Although the federal government can assign roles to federal agencies under a national strategy, it may need to seek consensus on these roles with other levels of government and the private sector. The President’s Homeland Security Advisory Council is a step toward achieving that consensus. However, state and local governments are seeking greater input in policymaking. Although state and local governments seek direction from the federal government, according to the National Governors’ Association, they oppose mandated participation and prefer broad guidelines or benchmarks. Mandated approaches could stifle state- level innovation and prevent states from serving as testing grounds for new approaches to homeland security. In terms of the private sector, partnerships between it and the public sector are forming, but they are not yet developed to the level of those in Y2K efforts, generally due to the emerging nature of homeland security. Nonetheless, some progress has been made. For example, the North American Electric Reliability Council has partnered with the Federal Bureau of Investigation and the Department of Energy to establish threat levels that they share with utility companies as threats change. Similarly, a Department of Commerce task force is to identify opportunities to partner with private sector entities to enhance security of critical infrastructure. 
Commonly accepted definitions help provide assurance that organizational, management, and budgetary decisions are made consistently across the organizations involved in a crosscutting effort. For example, they help guide agencies in organizing and allocating resources and can help promote more effective agency and intergovernmental operations by facilitating communication. A definition of homeland security can also help to enforce budget discipline and support more accurate monitoring of homeland security expenditures. The lack of a common definition has hampered the monitoring of expenditures for other crosscutting programs. In our prior work, we reported that the amounts of governmentwide terrorism-related funding and spending were uncertain because, among other reasons, definitions of antiterrorism and counterterrorism varied from agency to agency. In the absence of a governmentwide definition, the Department of Defense has drafted a definition of its own to identify departmental homeland security roles and missions and to support organizational realignments, such as the April 2002 announcement of the establishment of the Northern Command. The department has also required that the services and other organizations use standard terminology when communicating with each other and with other federal agencies to ensure a common understanding. However, when the department commented on a draft of this report, it stated that it continues to refine its definition. The department’s comments are reprinted in their entirety in appendix III. Office of Management and Budget officials stated that they also crafted a definition of homeland security to report how much money would be spent for homeland security as shown in the President’s fiscal year 2003 budget. These officials acknowledge that their definition is not authoritative and expect the Office of Homeland Security to create a definition before the fiscal year 2004 budget process begins. 
Officials at other key federal agencies also expect the Office of Homeland Security to craft such a definition. In the interim, the potential exists for an uncoordinated approach to homeland security caused by duplication of efforts or gaps in coverage, misallocation of resources, and inadequate monitoring of expenditures. The Office of Homeland Security faces a task of daunting complexity in unifying the capabilities of a multitude of federal, state, and local governments and private organizations. As shown in our previous reports on combating terrorism, duplication and gaps in coverage can occur when the nation’s capabilities are not effectively integrated. Homeland security efforts are not yet focused and coordinated. Some organizations are forging ahead and creating homeland security programs without knowing how these programs will integrate into a national plan, while other organizations are waiting for direction from the Office of Homeland Security. Since the Office of Homeland Security plans to address the key issues needing immediate attention (preparing a national strategy, clarifying roles and missions, establishing performance measures, and setting priorities and goals), we are making no recommendations concerning these issues at this time. However, commonly accepted or authoritative definitions of fundamental concepts, such as homeland security, will also be essential to integrate homeland security efforts effectively. Without this degree of definition, communication between participants will lack clarity, coordination of implementation plans will be more difficult, and targeting of resources will be more uncertain. We recommend that the President direct the Office of Homeland Security to develop a comprehensive, governmentwide definition of homeland security, and include the definition in the forthcoming national strategy. 
We presented a draft of this report to the Office of Homeland Security; the Environmental Protection Agency; the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Justice, Transportation, and Treasury; the Customs Service; and the Federal Emergency Management Agency. Only the Departments of Justice, Defense, and Health and Human Services, and the Customs Service provided written comments on a draft of this report. The Department of Justice was concerned that the draft report did not discuss several key aspects of its efforts related to ensuring homeland security, noting in particular that we did not acknowledge the department’s role in the development of the Five-Year Interagency Counterterrorism and Technology Crime Plan. We agree that this plan is an important contribution to homeland security, and we revised our text to recognize the department’s efforts in developing the plan. The department’s comments and our evaluation of the comments are reprinted in their entirety in appendix II. The Department of Defense stated that the draft portrayed the many challenges facing the departments and agencies as they address homeland security efforts. However, the department pointed out that its definition of homeland security, developed for its own use, was still in draft at the time of our report. We were aware of that and revised our report language to clarify this point. We also incorporated technical corrections as appropriate. The Department of Health and Human Services and the Customs Service provided no overall comments but did provide letters in response to our request for comments, which we have included in appendixes IV and V, respectively. The Department of Health and Human Services also provided technical comments, which have been incorporated in the report, as appropriate. We discuss our scope and methodology in detail in appendix I. 
As agreed with the offices of our congressional requesters, unless they announce the contents of the report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies of this report to appropriate congressional committees. We will also send a copy to the Assistant to the President for Homeland Security; the Secretaries of Defense, Agriculture, Commerce, Energy, Health and Human Services, Transportation, and the Treasury; the Attorney General; the Director, Federal Bureau of Investigation; the Administrators of the Federal Emergency Management Agency and Environmental Protection Agency; and the Director, Office of Management and Budget. We will make copies available to others upon request. If you or your staff have any questions regarding this report or wish to discuss this matter further, please contact me at (202) 512-6020. Key contributors to this report are listed in appendix VI. To determine the extent to which homeland security efforts represent a unified approach, we interviewed officials and obtained available documents from the Office of Homeland Security, Environmental Protection Agency, Federal Emergency Management Agency, the Federal Bureau of Investigation, the Central Intelligence Agency, and the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Transportation, and the Treasury. We selected these agencies based on their prominent role in the U.S. Government Interagency Domestic Terrorism Concept of Operations Plan and the Federal Response Plan. In addition, we talked to officials from the Office of Management and Budget to discuss budgeting for homeland security. 
We interviewed officials of the National Governors’ Association, the National League of Cities, the National Emergency Management Association, the American Red Cross, the Georgia Emergency Management Agency, the Gilmore Panel, the Hart-Rudman Commission, the RAND Corporation, the ANSER Institute for Homeland Security, the Center for Strategic and International Studies, the American Bankers Association, the North American Electric Reliability Council, the National Industrial Transportation League, and the Southern Company. We also reviewed year 2000 efforts, our related work on combating terrorism, and Government Performance and Results Act reports we previously issued to identify key elements that support a unified approach to addressing public problems. We did not evaluate the Office of Homeland Security leadership or its efforts to develop the national strategy because it was too early to judge adequately its performance in these areas. Our selection methodology does not permit projection nationwide. We conducted our review from August 2001 through April 2002 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Justice’s letter dated May 28, 2002. The Department of Justice was concerned that we did not discuss several key aspects of the department’s efforts related to homeland security. Specifically, the department mentioned several plans and roles that it believes should be mentioned in the report. We agree that the plans and roles the department outlines are important and that they play a vital role in homeland security. These plans and efforts, along with the many other plans and efforts of local, state, and federal governments as well as the private sector, will need to be integrated by the Office of Homeland Security in its effort to develop a national homeland security strategy. 
The department specifically mentions the Five-Year Interagency Counterterrorism and Technology Crime Plan and said that we failed to state that the plan represents a substantial interagency effort and is one document that could serve as a basis for a national strategy—a statement the department points out is contained in a prior GAO report, Combating Terrorism: Selected Challenges and Related Recommendations, GAO-01-822 (Washington, D.C.: September 2001). However, in the same report, we also stated that the plan lacks certain critical elements, including a focus on results-oriented outcomes. Moreover, because there is no national strategy that includes all the necessary elements, the Office of Homeland Security is developing an overarching national strategy, which will build on the planning efforts of all participants. The department also stated that we did not reference its role in domestic preparedness. Domestic preparedness and the roles that all participants play in it are important. However, domestic preparedness is only one element of homeland security. As our report points out, our objective was to evaluate the extent to which homeland security efforts to date represent a unified approach. In developing the national strategy, the Office of Homeland Security will address individual agency efforts, including those involved in domestic preparedness. The department also noted that we did not cite its efforts regarding the U.S. Government Interagency Domestic Terrorism Concept of Operations Plan. To the contrary, we are very aware of the overall importance of the plan and used it as a basis for selecting the federal agencies that we interviewed. This is discussed in appendix I—scope and methodology. The department further cites our failure to acknowledge efforts to improve intelligence sharing. 
Our objective was to evaluate the extent to which homeland security efforts were unified, and in our discussions, intelligence sharing was repeatedly mentioned as an obstacle to further integration. Despite the department’s efforts to improve intelligence sharing as cited in its letter, our work showed that there is a prevailing perception that it continues to be a problem. We do mention, in the section on evolving public and private sector relationships, the intelligence-sharing efforts led by the Office of Homeland Security, including the Homeland Security Council and the policy coordination committees. The following are GAO’s comments on the Department of Defense’s letter. The Department of Defense requested that we more clearly state that it continues to define homeland defense and homeland security and its role in support of homeland security. We agreed and incorporated this information in our report section on the nonexistence of an official governmentwide definition of homeland security. The following are GAO’s comments on the Department of Health and Human Services letter dated May 29, 2002. The Department of Health and Human Services had no specific comments on the draft report. However, the department did provide several technical comments that we incorporated as appropriate. The following are GAO’s comments on the Customs Service’s letter dated May 29, 2002. The Customs Service had no specific comments on the draft report. In addition to the contact named above, Lorelei St. James, Patricia Sari-Spear, Kimberly C. Seay, Matthew W. Ullengren, William J. Rigazio, and Susan Woodward made key contributions to this report. Homeland Security: Responsibility and Accountability for Achieving National Goals (GAO-02-627T, April 11, 2002). National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security (GAO-02-621T, April 11, 2002). 
Homeland Security: Progress Made, More Direction and Partnership Sought (GAO-02-490T, March 12, 2002). Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs (GAO-02-160T, November 7, 2001). Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts (GAO-02-208T, October 31, 2001). Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness (GAO-02-145T, October 15, 2001). Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, October 12, 2001). Homeland Security: A Framework for Addressing the Nation’s Issues (GAO-01-1158T, September 21, 2001). Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness (GAO-02-550T, April 2, 2002). Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy (GAO-02-549T, March 28, 2002). Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness (GAO-02-548T, March 25, 2002). Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness (GAO-02-547T, March 22, 2002). Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness (GAO-02-473T, March 1, 2002). Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness (GAO-02-162T, October 17, 2001). Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, September 20, 2001). Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management (GAO-01-909, September 19, 2001). Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness (GAO-01-555T, May 9, 2001). Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, April 24, 2001). 
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, March 27, 2001). Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, March 20, 2001). Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, November 30, 2000). Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, March 21, 2000). Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism (GAO/T-NSIAD-00-50, October 20, 1999). Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, September 7, 1999). Combating Terrorism: Observations on Growth in Federal Programs (GAO/T-NSIAD-99-181, June 9, 1999). Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs (GAO/NSIAD-99-151, June 9, 1999). Combating Terrorism: Use of National Guard Response Teams Is Unclear (GAO/NSIAD-99-110, May 21, 1999). Combating Terrorism: Observations on Federal Spending to Combat Terrorism (GAO/T-NSIAD/GGD-99-107, March 11, 1999). Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, November 12, 1998). Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program (GAO/T-NSIAD-99-16, October 2, 1998). Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments (GAO/NSIAD-98-74, April 9, 1998). Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination (GAO/NSIAD-98-39, December 1, 1997). Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection (GAO-02-235T, November 15, 2001). Bioterrorism: Review of Public Health and Medical Preparedness (GAO-02-149T, October 10, 2001). 
Bioterrorism: Public Health and Medical Preparedness (GAO-02-141T, October 10, 2001). Bioterrorism: Coordination and Preparedness (GAO-02-129T, October 5, 2001). Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, September 28, 2001). Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed (GAO-01-667, September 28, 2001). West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, September 11, 2000). Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks (GAO/NSIAD-99-163, September 7, 1999). Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework (GAO/NSIAD-99-159, August 16, 1999). Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives (GAO/T-NSIAD-99-112, March 16, 1999). Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures (GAO-01-837, August 31, 2001). FEMA and Army Must Be Proactive in Preparing States for Emergencies (GAO-01-850, August 13, 2001). Federal Emergency Management Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges (GAO-01-832, July 9, 2001). Results-Oriented Budget Practices in Federal Agencies (GAO-01-1084SP, August 2001). Managing for Results: Federal Managers’ Views on Key Management Issues Vary Widely Across Agencies (GAO-01-592, May 2001). Determining Performance and Accountability Challenges and High Risks (GAO-01-159SP, November 2000). Managing for Results: Using the Results Act to Address Mission Fragmentation and Program Overlap (GAO/AIMD-97-156, August 29, 1997). Government Restructuring: Identifying Potential Duplication in Federal Missions and Approaches (GAO/T-AIMD-95-161, June 7, 1995). Government Reorganization: Issues and Principles (GAO/T-GGD/AIMD-95-166, May 17, 1995). 
Grant Programs: Design Features Shape Flexibility, Accountability, and Performance Information (GAO/GGD-98-137, June 22, 1998). Federal Grants: Design Improvements Could Help Federal Resources Go Further (GAO/AIMD-97-7, December 18, 1996). Block Grants: Issues in Designing Accountability Provisions (GAO/AIMD-95-226, September 1, 1995).
The issue of homeland security cuts across numerous policy domains, drawing on the expertise and resources of every level of government, the private sector, and the international community. GAO previously found that although combating terrorism crossed organizational boundaries, the federal government did not sufficiently coordinate the activities of the 40 federal entities involved, resulting in duplication and gaps in coverage. The homeland security efforts of public and private entities do not yet represent a unified approach, although key supporting elements for such an approach are emerging. Progress has been made in developing a framework to support a more unified effort. Other key elements—a national strategy, the establishment of public and private sector partnerships, and the definition of key terms—are either not yet in place or are still evolving. In particular, key terms, such as "homeland security," have not been defined officially; consequently, certain organizational, management, and budgetary decisions cannot currently be made consistently across agencies. In the interim, the potential exists for an uncoordinated approach to homeland security that may lead to duplication of efforts or gaps in coverage, misallocation of resources, and inadequate monitoring of expenditures.
In 2002, the Navy’s Sea Power 21 vision stated that shore-based capabilities would be transformed to seabased capabilities whenever practical to improve the reach, persistence, and sustainability of systems that are already afloat. The objective for the United States to maintain global freedom of action is a consistent theme throughout the National Defense Strategy and National Military Strategy. DOD’s 2006 Quadrennial Defense Review Report further stated that the future joint force will exploit the operational flexibility of seabasing to counter political antiaccess and irregular warfare challenges. The joint seabasing concept is currently going through the Joint Capabilities Integration and Development System (JCIDS), a DOD decision support process for transforming military forces. Figure 1 shows the JCIDS process, including the major elements of a capabilities-based assessment. The purpose of JCIDS is to identify, assess, and prioritize joint military capability needs. Capabilities represent warfighting needs that are studied as part of the system’s capabilities-based assessment process. The process identifies warfighter skills and attributes for a desired capability (Functional Area Analysis), the gaps to achieving this capability (Functional Needs Analysis), and possible solutions for filling these gaps (Functional Solution Analysis). The results of this assessment are used as the basis for identifying approaches for delivering the desired capability. When identifying these approaches, cost is one factor that is considered. One way costs are used to evaluate potential approaches is by developing total ownership cost estimates. 
The Joint Requirements Oversight Council has overall responsibility for JCIDS and is supported by eight Functional Capabilities Boards (Command and Control, Battlespace Awareness, Focused Logistics, Force Management, Force Protection, Force Application, Net-Centric, and Joint Training), which lead the capabilities-based assessment process. DOD’s anticipated time frame for an operational joint seabasing capability as currently envisioned in the Joint Integrating Concept is 2015–2025. The services are either considering or actively pursuing materiel solutions to support seabasing. According to service officials and documentation, these solutions will play a critical role in enhancing current seabasing capabilities. For example, the Navy and Marine Corps plan to acquire the Maritime Prepositioning Force (Future) along with several supporting connectors needed for it to achieve its mission. As part of the seabase, the Maritime Prepositioning Force (Future) will be a squadron of ships to transport and deliver the personnel, combat power, and logistic support of the Marine Expeditionary Brigade. The connectors, which are envisioned to provide both intertheater lift to the seabase and intratheater lift within the seabase, include sealift, such as the Joint High Speed Vessel, Joint High Speed Sealift, and Joint Maritime Assault Connector (this vessel is intended to replace the Landing Craft, Air Cushion), and airlift, such as the V-22 Osprey and CH-53K heavy lift helicopter. Figure 2 illustrates and describes several sealift and airlift connectors. The Army is also exploring new capability initiatives for establishing a seabasing capability. In conjunction with the Navy and Marine Corps, the Army is developing the Joint High Speed Vessel and Joint High Speed Sealift ships. 
The Army is also in the early stages of developing its Afloat Forward Staging Base, a ship concept whose mission would be to provide aerial maneuver for Army forces from the sea. One option the Army is exploring for the Afloat Forward Staging Base is to add flight decks to a commercial container ship, along with other alterations, as a means to provide aerial maneuver to Army forces. Although DOD has taken action to begin the development of joint seabasing, DOD has not fully established a comprehensive management approach to effectively guide and assess joint seabasing as an option for projecting and sustaining forces in an antiaccess environment and to integrate service initiatives. Specifically, DOD has not fully incorporated sound management practices—such as providing leadership, dedicating an implementation team, and establishing a communications strategy—that our prior work has shown are at the center of successful transformations. DOD has taken action to develop joint seabasing by pursuing it within DOD's Joint Capabilities Integration and Development System (JCIDS). JCIDS is a key DOD decision support process that uses a capabilities-based approach to assess existing capabilities, identify capability gaps, and develop new warfighting capabilities. Within JCIDS, future capability needs are intended to be developed from top-level strategic guidance such as the National Military Strategy, a "top-down" approach. Under the former process, requirements grew out of the individual services' unique strategic visions, a "bottom-up" approach. In January 2006 we reported that JCIDS is not yet functioning as envisioned to define gaps and redundancies in existing and future military capabilities across the department and to identify solutions to improve joint capabilities. 
We reported that requirements continue to be defined largely from the "bottom up"—by the services—although DOD uses the JCIDS framework to assess the services' proposals and push a joint perspective. According to Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics officials, seabasing is going through the JCIDS process to become more of a joint concept that is developed through input from the services, combatant commands, and other DOD organizations. DOD has produced a Seabasing Joint Integrating Concept that outlines the concept for joint seabasing and identifies essential capabilities. Under JCIDS, the capabilities-based assessment follows a structured, four-step process. The first step in this process, the Functional Area Analysis, dated October 2005, identified the seabasing tasks, conditions, and standards needed to meet military objectives. The Functional Area Analysis identified such critical joint seabasing tasks as providing for maintenance of equipment in the joint operations area, attacking operational targets, and building and maintaining sustainment bases in the joint operations area. The second step of the capabilities-based assessment, the Functional Needs Analysis, dated November 2006, provided a prioritized list of joint seabasing capabilities and capability gaps and identified potential mitigation areas through which the capability gaps may be addressed. The 17 seabasing capability gaps include at-sea assembly, forcible entry, and conducting operational movement and maneuver. The analyses that are currently being developed are intended to further define and organize the capability gaps identified in the Functional Needs Analysis and recommend potential solutions for consideration in future analyses. 
Despite pursuing joint seabasing within JCIDS, DOD has not fully incorporated key sound management practices into its approach for managing the development of joint seabasing requirements and integrating service initiatives. In our prior work, we identified several key sound management practices at the center of successful mergers, acquisitions, and transformations. These key sound management practices include (1) ensuring top leadership drives the transformation, (2) dedicating an implementation team to manage the transformation process, and (3) establishing a communication strategy to create shared expectations and report related progress. Without a management approach that contains these elements, DOD may be unable to guide and assess joint seabasing in an efficient and cost-effective manner. Moreover, without central coordination, it is unclear whether DOD will be able to effectively manage billions of dollars of potential service investments in interdependent complex platforms, connectors, and logistics technologies that will need to be coordinated using a common set of standards, requirements, timeframes, and priorities. First, although joint seabasing capability development is underway, DOD has not provided sufficient leadership to integrate service initiatives and guide the development of joint seabasing. While the joint seabasing JCIDS process is still in the early stages of assessing needed capabilities, the services have developed their own concepts and approaches for seabasing, and in some cases systems that will support joint seabasing are further along in development than the joint seabasing concept itself. For example, the Maritime Prepositioning Force (Future) and the Joint High Speed Vessel are approaching their second major milestone, or decision point, within DOD's acquisition system, which will initiate systems-level development, whereas the joint seabasing concept is still being refined. 
Preliminary cost estimates for both these systems range from nearly $12 billion to over $15 billion. The 2005 National Research Council Committee's report, Sea Basing, concluded that developing a system of systems such as seabasing that is comprised of complex platforms, connectors, and logistics technologies will require a common set of standards, requirements, timeframes, and priorities. Various ship, airlift, and sealift connector components of the seabase will need to interface, and the capabilities of some of these components will be interdependent. In addition, joint operations from a seabase will require robust logistics technologies and command and control. Prematurely developing such systems to meet individual service requirements rather than joint requirements may result in initiatives that duplicate each other and systems that are not interoperable or compatible. Moreover, in addition to the billions of dollars being spent to procure these systems, it may be costly to realign or adjust the efforts of the services in the future if they do not meet the joint requirements of seabasing. In addition, DOD leadership has not provided an official, unified vision for joint seabasing to guide the transformation, ensure that focus is maintained on providing a capability that is the best option for projecting and sustaining forces in an antiaccess environment, and ensure that joint seabasing is evaluated against competing options. Joint Staff officials told us that the joint seabasing JCIDS process has been addressing how seabasing can be used to counter the problem of projecting and sustaining forces in an antiaccess environment, rather than examining specific solutions. We reported in 2003 that key practices and implementation steps for successful transformations include ensuring top leadership drives the transformation. We found that leadership must set the direction, pace, and tone for the transformation. 
Concerns have been raised by other organizations about the lack of leadership to guide the development of joint seabasing. For example, the National Research Council Committee's report, Sea Basing, stated that "given the complexity of and the long-term nature of the major capital investments by Services in new platforms, development of advanced technologies, and the introduction of appropriate joint doctrine, such a unifying vision will be essential in order to best leverage existing currently programmed and future Service capabilities." Also, in 2003 the Defense Science Board Task Force on Sea Basing found that developing the seabase requires persistent, top-down leadership to coordinate the numerous initiatives—including concepts of operations, ships, aircraft, weapons, and transportation systems—that support the seabase. Absent such leadership, DOD cannot be certain that joint seabasing has been evaluated against competing options for projecting and sustaining forces in an antiaccess environment. Moreover, without leadership that has the authority, responsibility, and accountability to guide joint seabasing and integrate service initiatives, DOD cannot be sure that ongoing or planned initiatives are cost-effective, fully leveraged, properly focused, and complement each other. Second, DOD has not established a dedicated implementation team to provide day-to-day management oversight. We reported in 2003 that a dedicated implementation team should be responsible for the day-to-day management of transformation to ensure various initiatives are integrated. Such a team would ensure that joint seabasing receives the focused, full-time attention necessary to be sustained and effective, by establishing clearly defined roles and responsibilities, helping to reach agreement on work priorities, and keeping efforts coordinated. 
There are several groups and DOD organizations tasked with specific responsibilities for developing joint seabasing within JCIDS; however, none of these organizations has the overall authority, responsibility, and accountability to coordinate initiatives and the acquisition of systems that may support joint seabasing. For example, the Navy was designated the sponsor of the Seabasing Joint Integrating Concept and is responsible for all common documentation, periodic reporting, and funding actions required to support the seabasing capabilities development and acquisition process. The Force Management Functional Capabilities Board is responsible for leading the seabasing capabilities-based assessment and oversees the sponsor (the Navy) in developing documents. The Seabasing Working Group was organized and tasked by the Joint Staff to assist the Force Management Functional Capabilities Board in completing the joint seabasing analyses. The Seabasing Working Group is comprised of members from the Joint Staff, combatant commands, the services, and other organizations, and serves as a source of expertise and as a joint sounding board for collaboration and focusing the direction of the analyses. According to Joint Staff officials, the working group can ask the services and combatant commands to participate and provide input to the analyses, but it has no authority to compel their participation in developing the analyses, nor does it have authority over service initiatives that may support joint seabasing. DOD officials, the Defense Science Board Task Force on Sea Basing, and the Naval Studies Board have recommended a joint office to manage and lead joint seabasing, but such a leadership body has not been established. In November 2003, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed that a terms of reference be developed for a Joint Expeditionary Force Projection/Seabasing Capabilities Office. 
According to the Terms of Reference, the office would organize all joint seabasing-related DOD activities—ranging from experimentation efforts to solutions development to training—into a coherent direction. In addition, the office would be comprised of members from each of the four services and the U.S. Joint Forces Command and would have limited contract authority. However, DOD officials decided to forgo the joint office and pursue joint seabasing within the JCIDS process. According to officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, one reason a joint office was not set up for joint seabasing was that no staff was available at the time. According to Joint Staff officials, one drawback of developing joint seabasing under the JCIDS process is that consensus is required on all decisions before moving forward, which may result in compromise solutions. Although use of the JCIDS process has encouraged the Army, Air Force, and Marine Corps to participate with the Navy in the development of the Joint Integrating Concept and JCIDS analyses, the services continue to pursue their own initiatives. As previously mentioned, some of these initiatives are still in the early stages of concept development, whereas other initiatives are further along in the acquisition process than the joint seabasing concept. A key official from the Defense Science Board Task Force on Sea Basing told us that the need for a joint office to coordinate efforts between the services still exists. According to the official, the lack of action in setting up a joint seabasing office makes achieving compatible systems to support joint seabasing more difficult because some supporting systems are ahead of joint seabasing in the development process. 
The Naval Studies Board also recommended that a joint planning office be set up to "correlate Service requirements and advise Service procurements" so that common capabilities among the services can be leveraged and incompatible acquisitions avoided. We and the DOD Office of the Inspector General have found similar management challenges in DOD's efforts to field other joint capabilities such as the Global Information Grid and network-centric warfare. Without formally designating a dedicated leadership body to provide day-to-day management oversight by providing a coherent direction for related activities, establishing clearly defined roles and responsibilities, helping to reach agreement on work priorities, and keeping efforts coordinated, DOD's ability to develop a joint seabasing capability in an efficient manner may be hindered. Furthermore, without a dedicated implementation team, it may be difficult for DOD to sustain joint seabasing development over a long period of time. Third, DOD has not fully developed a communications strategy that encourages communication, shares knowledge, and provides information to DOD organizations involved in joint seabasing initiatives. We previously reported that creating an effective, ongoing communication strategy is central to forming the partnerships that are needed to develop and implement the organization's strategies. As previously mentioned, there are numerous groups and DOD organizations involved in joint seabasing and various initiatives that may affect joint seabasing. The Seabasing Working Group hosts meetings that provide a forum for discussion on joint seabasing among members. In addition, it has established a Web site that posts meeting minutes and various joint seabasing JCIDS analysis documents. 
While this Web site provides some transparency into the analysis process, it does not serve as a central repository for communicating information on joint seabasing because it does not provide information on joint seabasing efforts conducted by the services and combatant commands outside of the JCIDS process. In addition, we found no evidence of a formal mechanism for communicating joint seabasing information. Officials from the Navy and Marine Corps told us they face challenges in determining which organizations are involved in joint seabasing and what those organizations are doing. According to Marine Corps officials, this impedes their ability to leverage activities and minimize redundancy. Furthermore, Joint Staff officials have acknowledged that the lack of a central, authoritative source of information significantly hindered timely completion of analyses. For example, the data management tool used to associate essential seabasing capabilities with the appropriate functional area did not provide a systematic method for identifying relevant information, and some data were missing. Moreover, these officials also recognized that a means for identifying DOD-wide initiatives that affect joint seabasing needs to be established. In the absence of clear communication of joint seabasing information throughout DOD via an overall communications strategy, joint seabasing participants may not be able to effectively leverage activities and minimize redundancy, and the overall development of joint seabasing may be impeded. DOD has not developed, implemented, or used an overarching joint experimentation campaign plan to inform decisions about joint seabasing. Experimentation campaign plans play an important role in developing transformational concepts by coordinating and guiding experimentation efforts using a series of related experiments that develop knowledge about a concept or capability. 
Many seabasing experimentation activities have taken place across DOD and the services; however, an overarching experimentation campaign plan to coordinate and guide joint seabasing experimentation does not exist because the U.S. Joint Forces Command— DOD’s leader of joint warfighting experimentation—has not taken the lead in coordinating joint seabasing experimentation efforts. Additionally, DOD lacks a systematic means to analyze, communicate, and disseminate information on joint seabasing experimentation. Moreover, DOD lacks a feedback mechanism to interpret and clarify results from joint seabasing experimental activities. According to military experimentation guides, experimentation campaign plans play an important role in developing transformational concepts by coordinating and guiding experimentation efforts using a series of related experiments that develop knowledge about a concept or capability. Taken together, the results of these experiments can inform decisions about future research and technology programs, acquisition efforts, risk, organizational changes, and changes in operational concepts. A well- planned experimentation campaign provides a framework for much of what needs to be known about a new concept or capability. According to defense best practices, key aspects of an experimentation campaign include: (1) designated campaign leaders; (2) clear campaign focus and objectives; (3) a spectrum of well-designed and sequenced experimental activities, including studies and analyses, seminars and conferences, war games, modeling and simulation, and live demonstrations; (4) data collection and analyses; (5) broad dissemination of results; and (6) a feedback mechanism to discuss and interpret results. 
Experimentation campaigns that include these aspects can reduce the risk in developing and fielding a new concept or capability by addressing a spectrum of possibilities and building upon experimentation activities systematically, with continual analyses and feedback to interpret the results into useful information. Single experiments alone are insufficient to develop transformational concepts because they can only explore a limited number of variables, and their contributions are limited unless their findings can be replicated in other experiments. Campaigns can provide conclusive and robust results through their ability to replicate findings and conduct experiments in a variety of scenarios and operating environments. A well-planned experimentation campaign can mitigate the limitations of a single experiment by synthesizing outputs from a series of activities into coherent advice to decision makers. Many experimentation activities involving seabasing have taken place; however, an overarching DOD experimentation campaign plan to guide and coordinate these activities does not exist. All of the services, combatant commands, and some defense entities have been involved with seabasing experimentation through war games, studies, workshops, modeling and simulation, and live demonstrations. For example, in 2004 the Joint Chiefs of Staff led a war game called Nimble Viking that brought the services together and addressed gaps in their understanding of the joint seabasing concept. The services conducted studies addressing gaps in the joint seabasing concept, such as the Navy's 40 Knot Marine Expeditionary Brigade study, which identified gaps in conducting forcible entry operations with Marine Corps forces using seaborne lift capable of speeds of 40 knots. Moreover, the Marine Corps modeled plans for landing seabased forces from amphibious ships, the results of which, according to the Marine Corps, shaved hours off those landings. 
In addition, the U.S. Joint Forces Command and services worked together in cosponsoring several war games involving joint seabasing, including Unified Course 2004, Joint Urban Warrior 2004, Pinnacle Impact 2003, and Sea Viking 2004. While many of the reports from these war games recognized joint seabasing as a potential concept for addressing antiaccess and force projection issues, they stated that further experimentation was needed before joint seabasing moved forward. Additionally, material solutions being developed to support joint seabasing have undergone planned experimentation and testing activities. For example, U.S. Transportation Command officials believe that DOD's Joint-Logistics-Over-the-Shore program could support joint seabasing logistical operations, such as heavy cargo transfer at sea. To that end, in June 2006 they sponsored a Joint-Logistics-Over-the-Shore exercise to transfer equipment and bulk materials from large ships to the beach using smaller landing craft. Figure 3 shows forces using a barge to move construction vehicles from ships to shore during a Joint-Logistics-Over-the-Shore exercise at Naval Magazine Indian Island, Washington. The Navy's Program Executive Office for Ships, which manages the Maritime Prepositioning Force (Future) and the Joint High Speed Vessel programs, reports that the Maritime Prepositioning Force (Future) program has planned and is executing a series of jointly coordinated tests involving modeling and simulation and live demonstrations. According to the Program Manager, demonstrations included at-sea evaluation of the Mobile Landing Platform concept and its ability to interface with other vessels supporting the joint seabase. Additionally, the Navy's Office of Naval Research is developing a number of technologies, such as internal ship cargo handling and ship-to-ship cargo transfers, to address capability gaps in joint seabasing operations. 
Although joint seabasing experimental activities have taken place, an overarching experimentation campaign plan to coordinate and guide these activities does not exist because the U.S. Joint Forces Command has not taken the lead in coordinating joint seabasing experimentation efforts. Moreover, involvement in these activities by the services, combatant commands, and defense entities has been inconsistent due to budget constraints, other competing priorities, and the lack of timely coordination and advance notice of events. In May 1998, the Secretary of Defense designated the U.S. Joint Forces Command as the DOD executive agent for joint warfighting experimentation. In this role the command is responsible for conducting joint experimentation on new warfighting concepts and disseminating the results of these activities to the joint concept community, which includes the Office of the Secretary of Defense, Joint Staff, combatant commands, services, and defense agencies. The U.S. Joint Forces Command is also responsible for coordinating joint experimentation efforts by developing a biennial joint concept development and experimentation campaign plan. In January 2006, a memo from the Chairman of the Joint Chiefs of Staff further underscored this responsibility by providing explicit direction to the U.S. Joint Forces Command on developing a campaign plan that provided guidance to the joint concept community on coordinating joint experimentation efforts, and capturing and disseminating the results of these efforts. While the U.S. Joint Forces Command said it is in the process of developing the plan, it is unclear to what extent this plan will address joint seabasing. According to the U.S. Joint Forces Command, other more near-term priorities, such as improvised explosive devices and urban warfare, have prevented it from focusing on joint seabasing during the past few years. Once the U.S. 
Joint Forces Command develops and implements the plan, which it intends to do by fiscal year 2008, it is also unclear to what extent this plan will be able to guide and coordinate joint seabasing experimentation efforts because the U.S. Joint Forces Command does not have the authority to direct service and other DOD organizations' experimentation plans. The services and combatant commands are responsible for working with the U.S. Joint Forces Command in executing the joint concept development and experimentation campaign plan, and for providing the command with observations, insights, results, and recommendations related to all joint experimentation efforts. However, the services and combatant commands are not required to go through the U.S. Joint Forces Command before executing their own experimentation activities. Moreover, the U.S. Joint Forces Command says it does not have authority to make the services and combatant commands take specific joint actions. Additionally, there are many entities within the services involved in joint seabasing experimentation, and there are no formalized leaders coordinating service efforts. As a result, these entities operate independently and do not coordinate their efforts with the U.S. Joint Forces Command. This lack of coordination poses risks of duplicating experimentation efforts and conducting experimentation that does not build upon previous activities. Furthermore, no overarching campaign plan to guide joint seabasing experimentation exists within any other DOD entity. While the Navy and Marine Corps have seabasing experimentation campaign plans, officials told us these plans are not overarching within each of the services and it is unclear to what extent they are being implemented. 
For example, a seabasing experimentation plan exists as part of the Navy's Sea Trial Concept Development and Experimentation Campaign Plan; however, Navy officials said little joint seabasing experimentation is being conducted under this plan and the plan does not encompass all of the Navy's efforts. In addition, the Marine Corps has a plan that broadly focuses on issues that need to be addressed for seabasing capabilities such as the Maritime Prepositioning Force (Future) and the Joint High Speed Vessel. However, its plan does not identify designated leaders and specific experimentation activities that should take place, nor does the plan identify timelines, resources, or staff to conduct experimentation. It also does not contain plans for data collection and analysis or any provisions for disseminating results. In addition, according to Marine Corps officials, the plan is not being fully executed due to lack of funding and staff. Many service officials expressed concern over the lack of coordination and guidance on joint seabasing experimentation. They stated that the U.S. Joint Forces Command has not shown much interest in experimentation for future concepts such as joint seabasing, instead focusing experimentation efforts on short-term concepts and immediate priorities such as improvised explosive devices. One service official commented that there is no single point of contact for joint seabasing at the U.S. Joint Forces Command. Additionally, the Joint Chiefs of Staff states in the Functional Needs Analysis that more joint experimentation is needed to inform and further refine capability gaps in the joint seabasing concept. DOD also lacks a systematic means to analyze, communicate, and disseminate information about joint seabasing experimentation across the department. 
According to military experimentation guides, a significant part of an experiment consists of gathering data, interpreting it into findings, and combining it with already known information. Additionally, data collection and analysis plans are important to experimentation because they ensure that valid and reliable data are captured and understood and that the analysis undertaken addresses the key issues in the experiment. However, we found no overarching data collection and analysis plan to guide the analysis of joint seabasing experimentation results. Furthermore, officials in the Office of the Secretary of Defense's Program Analysis and Evaluation division described a lack of analysis of joint seabasing to inform the capabilities-based assessment, which could lead to inaccurately identifying gaps in implementing the concept. They said that no comprehensive analytical framework was ever established to guide development of the joint seabasing concept; consequently, the value joint seabasing will bring to the warfighter is unknown. Without an overarching campaign plan, experimental results for joint seabasing are being obtained and interpreted using different data collection and analysis methods, which may lead to inconsistent reporting methods. As a result, experimentation data may be analyzed, interpreted, and shared inconsistently and with little transparency across the community. Additionally, DOD and service officials commented on the lack of sufficient modeling and simulation tools available to provide valid data on joint seabasing. Modeling and simulation tools play an important role in experiments. Unlike live demonstrations, modeling and simulation techniques can inexpensively vary the values of variables to represent a wide variety of conditions. They also provide a great deal of control over the variables in the experiment, which allows for replication. 
In the Functional Needs Analysis, the Joint Chiefs of Staff noted the absence of high-level modeling tools capable of end-to-end modeling of seabasing, saying that the absence of this type of modeling precluded effective and meaningful data to validate warfighter needs and thus limited the depth of their analysis. Furthermore, officials in the Office of the Secretary of Defense's Program Analysis and Evaluation division also commented that the lack of modeling could result in missing critical gaps in the joint seabasing concept that have not yet been identified. The Joint Chiefs of Staff identified the U.S. Joint Forces Command as a possible lead for end-to-end modeling and simulation of joint seabasing because of its role in joint concept development and experimentation, and its expertise in developing comprehensive modeling and simulation tools. While some communication takes place among the entities involved with developing the seabasing concept, there is no established method for communicating observations, insights, and upcoming events across the entire community. DOD and service officials described the joint seabasing community as an informal community of practice, where the services, combatant commands, and defense entities invite each other to participate in their experimentation activities. The U.S. Joint Forces Command and the services track to some degree the experimental efforts of the joint seabasing community. For example, the U.S. Joint Forces Command says it tries to leverage the services' efforts by partnering with them in experimental activities. However, despite this informal community, DOD and service officials describe a lack of coordination and awareness of experimental activities. A Marine Corps official stated that some officials are more aware than others, but no one is completely aware of what is going on across the entire community. 
In fact, many officials we spoke with either were unaware of an upcoming war game involving seabasing or had very little advance notice of it. Without an established communication method, joint seabasing experimentation efforts are not transparent to the entire community, which can contribute to a lack of consensus on the types of activities that take place, conflicts in scheduling events, and duplication of efforts. Additionally, there is no overarching system to disseminate observations and results on joint seabasing experimentation. The U.S. Joint Forces Command has a database containing documents and reports from experimentation activities; however, the database contains different levels of information based on what the services choose to publish. As a result, the database is not a comprehensive resource of joint experimentation information. The Navy's Warfare Development Command also maintains a Web site of information pertaining to its Sea Trial campaign, which other entities within the Navy contribute to, but it is not overarching within the Navy. In response to a January 2006 memo from the Chairman of the Joint Chiefs of Staff, the U.S. Joint Forces Command is developing an online knowledge management portal to disseminate information on experimentation activities across the joint concept community. The portal contains a repository of information on experimentation concepts, projects, and documents; a bulletin board to post insights and observations; hotlinks to other sites; and a calendar function for upcoming experimentation activities. The portal also contains a section on activities relating to joint logistics, and joint deployment and sustainment; however, it does not yet contain information on joint seabasing. 
Furthermore, while the portal has the ability to disseminate information, it may not be successful in increasing communication across the joint seabasing community because the services have not been directed to use the portal in planning their activities. DOD lacks a feedback mechanism to interpret and clarify results from joint seabasing experimental activities. Feedback on analyses and findings produced from experimental activities provides the joint seabasing experimentation community an opportunity to comment on the results and ask questions. It also gives the experiment sponsor an opportunity to see how the work was received, assist in interpreting results, and provide further advice on how the results should be used. In the context of an experimentation campaign, it may also give the sponsor an opportunity to clarify how the results affect the overarching campaign concept. While individual seabasing experiments may have had some form of feedback, the lack of an overarching joint seabasing experimentation campaign plan that includes procedures for providing and obtaining feedback may prevent the joint seabasing experimentation community from fully realizing how the results of individual experiments affect the development of joint seabasing. While some service acquisitions tied to seabasing are approaching milestones for investment decisions, it is unclear when DOD will complete development of total ownership cost estimates for a range of joint seabasing options. Understanding estimated total ownership costs helps decision makers measure the whole cost of owning and operating assets and make comparisons between competing options. The joint seabasing capability is being assessed in the JCIDS analysis process. However, DOD has not yet begun a key study of approaches and their associated costs and may not complete this study for at least a year. In the meantime, the services are considering or pursuing systems to enhance seabasing capabilities. 
For example, a major Navy-Marine Corps initiative is scheduled to undergo a major milestone review in fiscal year 2008. Until total ownership cost estimates for joint seabasing options are developed and made transparent to DOD and Congress, decision makers will not be able to evaluate the cost-effectiveness of individual service initiatives. In order to evaluate options and make informed, cost-effective decisions, decision makers must have an understanding of the total ownership costs for establishing a desired capability. A total ownership cost estimate includes the costs to develop, acquire, operate, maintain, and dispose of all systems required to establish a seabasing capability. Understanding total ownership cost estimates helps organizations measure the whole cost of owning and operating assets by providing a consistent framework for analyzing and comparing options. Total ownership cost estimates can be used to assess the possible return on investment of new initiatives. According to DOD guidance, all parties involved in the defense acquisition system must be cognizant of the reality of fiscal constraints and treat cost as an independent variable when developing systems. Furthermore, the policy stresses the importance of identifying the total costs of ownership, including major cost drivers, while considering the affordability of establishing needed capabilities. Even with future concepts, such as joint seabasing, where uncertainty exists, total ownership cost estimates can be developed. According to DOD cost analysis guidance, in such cases, areas of uncertainty can be quantified using ranges of cost, thereby giving decision makers, at a minimum, a rough estimate of the total costs of achieving a desired capability. For systems of systems, such as seabasing, a total ownership cost estimate should include research, acquisition, operation, maintenance, and disposition costs of all systems, primary and support, needed to achieve the desired end state. 
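The range-based approach described in the DOD cost analysis guidance can be illustrated with a minimal sketch. The cost categories mirror the total ownership cost elements above, but every dollar figure below is a hypothetical placeholder for illustration, not a DOD or GAO estimate.

```python
# Illustrative total ownership cost (TOC) roll-up using low/high cost
# ranges to capture uncertainty, per the range-based approach described
# in the DOD cost analysis guidance. All figures are hypothetical
# placeholders, in billions of dollars.
cost_ranges = {
    "research & development": (1.0, 2.5),
    "acquisition":            (10.0, 14.0),
    "operations":             (6.0, 9.0),
    "maintenance":            (3.0, 5.0),
    "disposal":               (0.5, 1.0),
}

low = sum(lo for lo, hi in cost_ranges.values())
high = sum(hi for lo, hi in cost_ranges.values())

print(f"Estimated TOC range: ${low:.1f}B to ${high:.1f}B")
```

Even this rough aggregation gives decision makers a bounded estimate to weigh against competing options, which is the point the guidance makes about uncertain future concepts.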
Understanding the estimated total ownership costs of seabasing options can help decision makers make informed decisions to determine the most cost-effective method of achieving a seabasing capability. Furthermore, these estimates can be used to more effectively evaluate joint seabasing against alternative methods of projecting and sustaining forces in an antiaccess environment. Joint seabasing is currently going through the capabilities-based assessment phase of the JCIDS analysis process. One part of the JCIDS analysis process is the Functional Solutions Analysis—an operationally based assessment of all potential approaches, including changes to doctrine, organization, and training, as well as materiel solutions, to solve identified capability gaps. According to Joint Chiefs of Staff guidance, this process will assess the costs of potential approaches to joint seabasing. For any materiel approaches that are developed, the cost to develop, procure, and sustain each approach will be estimated. These estimates should provide decision makers with some understanding of the costs of these approaches. However, the time frame for when these cost assessments will take place is unclear. According to DOD officials, cost assessments for joint seabasing approaches have not yet begun and may not be completed for a year or more. Furthermore, the Joint Chiefs of Staff guidance does not provide a specific methodology for what level of cost assessment should take place. Rather, the guidance only states that the process should “roughly assess” the costs of each identified approach. Although DOD has not yet begun its analysis of joint seabasing approaches and costs, the services are either considering or actively pursuing systems to develop enhanced seabasing capabilities. For example, the Department of the Navy Fiscal Year 2007 Budget includes funding for the development of seabasing ships, including ships for the Maritime Prepositioning Force (Future) and Joint High Speed Vessels. 
Furthermore, the Navy has included eleven ships for its Maritime Prepositioning Force (Future), three Joint High Speed Vessels, and one Joint High Speed Sealift ship in its Annual Long-Range Plan for Construction of Naval Vessels for Fiscal Year 2007 report to Congress. Although the plan could change as the Navy continues to assess its requirements and address affordability issues, the Navy estimates that these investments will cost nearly $12 billion. The Navy's programmed costs for the Maritime Prepositioning Force (Future) ships do not include the cost of a Landing Helicopter Deck (LHD) amphibious assault ship, which is planned to be part of the squadron. The Congressional Research Service has reported that this ship has an estimated cost of $2.2 billion, and that the estimated cost of the entire Maritime Prepositioning Squadron is about $14.5 billion. However, unknown factors remain that could affect these estimates. Furthermore, the number of connectors required to support the Maritime Prepositioning Force (Future) has yet to be determined. Within the Maritime Prepositioning Force (Future) squadron, several factors that could influence cost—such as manning and ship survivability levels—remain in flux. Figure 4 shows the ships of the Maritime Prepositioning Force (Future). The Navy and Marine Corps have not yet estimated the total ownership costs of their preferred options for establishing a seabasing capability. However, both the Maritime Prepositioning Force (Future) and the Joint High Speed Vessel, which will play a critical role in establishing a joint seabasing capability, are in development and progressing through DOD’s acquisition system. The Maritime Prepositioning Force (Future) is approaching its second major milestone, which initiates system development and demonstration, in mid-2008. 
Prior to this milestone, a total ownership cost estimate will be required in order for the Maritime Prepositioning Force (Future) to be validated and approved before program initiation. Although a total ownership cost estimate may be available for the Maritime Prepositioning Force (Future) squadron for this milestone, according to service documentation, the costs of the supporting vehicles and vessels needed for the squadron to operate as planned for use in joint seabasing will not be included. Furthermore, one of the ships in the squadron—the Mobile Landing Platform—is going through its own acquisition process with its second milestone scheduled in fiscal year 2008. In addition, because the JCIDS analysis process for joint seabasing will not produce any cost assessments for at least 1 year, decision makers risk making substantial investments in the Maritime Prepositioning Force (Future) without knowledge of the potential costs of other joint seabasing options. The Navy plans to acquire the first ship for the squadron in 2009. The Army is also exploring new initiatives for establishing a seabasing capability. In conjunction with the Navy and Marine Corps, the Army is developing the Joint High Speed Vessel and Joint High Speed Sealift ships. Although not being developed specifically for seabasing, according to service documentation, these systems will have a significant role in establishing a seabasing capability. The Army plans to acquire five Joint High Speed Vessels beginning in fiscal year 2008, with a total acquisition cost of $210 million for the first ship and $170 million for each of the remaining ships. The Navy’s long-range shipbuilding plan estimates the Joint High Speed Sealift ship to cost around $920 million. The Army is also in the early stages of exploring ideas for its Afloat Forward Staging Base to provide aerial maneuver to Army forces. 
One option the Army is exploring for the Afloat Forward Staging Base is to add flight decks to a commercial container ship, along with other alterations, as a means to provide aerial maneuver to Army forces. Several research organizations also recommended this option because it is seen as a potentially low-cost means of establishing a seabasing capability. A rough order of magnitude estimate of the cost to convert a commercial cargo ship is approximately $300 million to $600 million. In addition to the options in development, additional means for projecting and sustaining forces in an antiaccess environment exist. However, they cannot effectively be compared when total ownership costs are not known. For example, the U.S. Transportation Command is working to enhance the military’s joint logistics over-the-shore capabilities, which utilize existing assets, such as the Army’s Logistics Support Vessel and the Navy’s Improved Navy Lighterage System, to deploy and sustain forces by allowing strategic sealift ships to discharge through austere or damaged ports, or over a bare beach. Furthermore, the Air Force has developed its Expeditionary Airbase Operating Enabling Concept. This concept is a methodology and plan for rapid airbase seizure, establishment, and operation to support the joint force commander in sustaining forces. Other possibilities include Army air-dropped or air-landed operations to roll back enemy shore-based defenses or joint special operations forces to attack high-value coastal defense assets prior to or in concert with naval strikes from the sea. Some of these options represent existing capabilities, which could prove to be a more cost-effective means of projecting and sustaining forces in an antiaccess environment. Until total ownership costs are developed, the cost-effectiveness of these options cannot be effectively evaluated. 
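The acquisition figures cited above give a sense of the comparison that total ownership cost estimates would enable. A minimal sketch follows, using only the rough acquisition numbers from the report; as the comments note, these are not total ownership costs, since operating, maintenance, and disposal costs are omitted, which is precisely the gap the report identifies.

```python
# Side-by-side view of rough acquisition figures cited in the report,
# in millions of dollars. These are NOT total ownership costs:
# operating, maintenance, and disposal costs are omitted.

# Five Joint High Speed Vessels: $210M for the first ship, $170M each
# for the remaining four, per the Army's stated acquisition plan.
jhsv_total = 210 + 4 * 170

acquisition_estimates_musd = {
    "MPF(F) squadron (CRS estimate)": (14500, 14500),
    "Five Joint High Speed Vessels": (jhsv_total, jhsv_total),
    "AFSB container-ship conversion": (300, 600),
}

for option, (low, high) in acquisition_estimates_musd.items():
    band = f"${low}M" if low == high else f"${low}M to ${high}M"
    print(f"{option}: {band}")
```

Even at the acquisition level, the options span roughly two orders of magnitude, which underscores why comparable total ownership cost estimates matter before milestone decisions are made.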
While DOD’s ability to project and sustain forces in an antiaccess environment is expected to become increasingly important, DOD has not taken all of the steps needed to effectively manage joint seabasing initiatives across the department and evaluate competing options for force projection and sustainment. Without a comprehensive management approach to guide and assess joint seabasing, DOD may be unable to ensure that ongoing or planned joint seabasing initiatives are properly focused and complement each other and that the capability is being developed in an efficient and cost-effective manner. One consequence of this lack of effective management is the absence of a joint experimentation campaign plan. Without a campaign plan to direct experimentation for joint seabasing, DOD and the services’ ability to evaluate and validate their solutions, coordinate efforts, perform analysis, and disseminate results could be compromised. As a result, the services risk duplicating experimentation efforts and developing and fielding seabasing capabilities that are not compatible or interoperable, and they will be unable to leverage the results of individual experiments across the joint seabasing experimentation community to maximize synergies. Furthermore, establishing a joint seabasing capability could be the source of significant investment by DOD. Given the challenging fiscal environment facing DOD and the rest of the federal government, decision makers must make investment decisions that maximize return on investment at the best value for the taxpayer. By understanding the estimated total ownership costs of options for establishing a seabasing capability, decision makers would be in a better position to make informed decisions about what options are most cost-effective, and evaluate the costs and benefits of establishing a seabasing capability against other competing priorities. 
However, while it is unclear when DOD will complete its analysis of joint seabasing approaches and costs, the services are pursuing initiatives and systems to develop a seabasing capability, some of which are approaching milestones for investment decisions. If individual systems that support seabasing are allowed to move forward through the acquisition process before the total ownership costs of seabasing options are developed and made transparent to DOD and Congress, there is a risk that DOD could make significant investments to develop a capability that may not be the most cost-effective means of projecting and sustaining forces in an antiaccess environment. To assist decision makers in developing a comprehensive management approach to guide and assess joint seabasing as an option for force projection and sustainment in an antiaccess environment and integrate service initiatives, we recommend that the Secretary of Defense take the following actions to incorporate sound management principles into DOD’s management of joint seabasing:

- assign clear leadership and accountability for developing a joint seabasing capability and coordinating supporting initiatives;
- establish an overarching, dedicated implementation team to provide day-to-day management oversight over the services, combatant commands, the Joint Chiefs of Staff, and others involved in joint seabasing; and
- develop and implement a communications strategy to ensure communication between and among the services, combatant commands, Office of the Secretary of Defense, and the Joint Chiefs of Staff, and to provide information on all joint seabasing activities across DOD.

To better guide joint seabasing experimentation and inform decisions on joint seabasing as an option for force projection and sustainment in an antiaccess environment, we recommend that the Secretary of Defense do the following: Direct the U.S. 
Joint Forces Command to lead and coordinate joint seabasing experimentation efforts, under the purview of the joint seabasing implementation team. U.S. Joint Forces Command should be responsible for developing and implementing a joint seabasing experimentation campaign plan to guide the evaluation of joint seabasing as a capability for force projection and sustainment. Such an experimentation plan should include the following elements:

- a clear focus and objectives for joint seabasing that encompass near-, mid-, and long-term experimentation plans;
- a near-term plan for joint seabasing experimentation that includes events for the next fiscal year, participants, timelines, and resources that will be used to support the events;
- a spectrum of joint experimentation activities that include wargaming, comprehensive modeling and simulation, live demonstrations, workshops, symposiums, and analysis;
- a data collection and analysis plan to capture and evaluate results; and
- a method for communicating observations, results, upcoming activities, and feedback across the joint seabasing experimentation community.

Direct that the services collaborate with the U.S. Joint Forces Command in developing, implementing, and using the joint seabasing experimentation campaign plan. Direct that the services utilize and contribute to the U.S. Joint Forces Command’s knowledge management portal by providing their observations, insights, results, and planned activities to the portal for use by the joint seabasing experimentation community. 
To assist decision makers in evaluating the costs of joint seabasing options against the capabilities that joint seabasing could provide the joint warfighter as a means for force projection and sustainment in an antiaccess environment, we recommend that the Secretary of Defense direct the implementation team or other appropriate entity to synchronize development of total ownership cost estimates for the range of joint seabasing options so decision makers have sufficient information to use in making investment decisions on service seabasing initiatives. In comments on a draft of this report, DOD partially agreed with our recommendations, except for the need for a dedicated implementation team. In its comments, DOD stated that it is premature to establish additional oversight at this time and that in the interim the Force Management Functional Capabilities Board is providing an appropriate level of management oversight. As discussed below, in view of the magnitude of potential DOD investments in seabasing and DOD’s need to efficiently manage future resources and distinguish between needs and wants, we continue to believe that an implementation team is needed to coordinate disparate service and defense organization initiatives related to seabasing and urge the department to further consider the need for action now rather than waiting until after it establishes joint requirements. In addition, although DOD partially agreed with our other recommendations, its comments did not indicate that it would take specific actions beyond those it has already begun and which we evaluated as part of our review. In light of DOD’s stated agreement with the intent of our recommendations, we urge the department to develop specific actions and plans to implement our recommendations. DOD partially agreed with our recommendation regarding leadership and accountability for developing a joint seabasing capability and coordinating supporting initiatives. 
DOD stated that the Joint Staff is assigned responsibility to develop the Joint Seabasing Concept and the resulting capability and that there is clear and accountable leadership established within the Joint Requirements Oversight Council and the Joint Capabilities Board to accomplish this development. While the Joint Staff, Joint Requirements Oversight Council, and the Joint Capabilities Board have oversight and responsibilities within JCIDS, we found that none of these organizations have the overall authority, responsibility, and accountability to coordinate joint seabasing initiatives and the service acquisitions that may support joint seabasing. As discussed in the report, the services have their own seabasing concepts and some service initiatives are outpacing joint seabasing in development. DOD has not provided sufficient leadership to ensure these initiatives are fully leveraged, properly focused, and complement each other. Because of the potential for billions of dollars to be spent to procure these systems, we continue to believe our recommendation has merit and that assignment of clear leadership and accountability for developing a joint seabasing capability and coordinating supporting initiatives is needed. DOD did not agree with our recommendation that an overarching, dedicated implementation team be established to provide day-to-day management oversight over the services, combatant commands, the Joint Chiefs of Staff, and others involved in joint seabasing. DOD commented that the joint seabasing concept is still being developed within the JCIDS and the Force Management Functional Capabilities Board is providing the appropriate level of management oversight. DOD stated that it is premature to establish additional oversight at this time and that after the needed joint seabasing capabilities have been defined, the department will determine if additional oversight is necessary. 
We believe that the Force Management Functional Capabilities Board’s oversight does not go far enough in providing comprehensive management oversight for joint seabasing. While the Board is responsible for leading the joint seabasing capabilities-based assessment and oversees the sponsor (the Navy) in developing documents, the Board’s responsibilities do not constitute the type of oversight needed to ensure ongoing or planned service initiatives that may support joint seabasing are coordinated and complement each other. We continue to believe that our recommendation has merit and that creation of an implementation team to provide day-to-day management oversight of joint seabasing is needed. Therefore, we urge the department to create such a team now rather than waiting until needed joint seabasing capabilities are defined. DOD also partially agreed with our recommendation regarding implementing a communications strategy for all joint seabasing activities in DOD. DOD stated that the JCIDS process, Joint Capabilities Boards, and the Joint Requirements Oversight Council provide for communication between the Joint Staff, all four services, the combatant commands, and the Office of the Secretary of Defense (OSD). However, as discussed in our report, we found that while the Joint Staff, all four services, the combatant commands, OSD, and others participate in the JCIDS process, the information shared is not all inclusive and it is not always clear who is involved in joint seabasing and what they are doing. A DOD-wide communication strategy that provides a framework to effectively manage activities can support the overall development of joint seabasing by (1) providing better information for the participants in organizing and planning initiatives and (2) enabling the participants to minimize redundancy by leveraging activities being conducted by others. We continue to believe, as we have recommended, that a communications strategy should be developed and implemented. 
DOD partially agreed with our recommendations regarding coordination of joint seabasing experimentation efforts and development of a joint experimentation campaign plan. DOD stated that the Joint Staff, with service, combatant command, and OSD support, is developing a draft Joint Capabilities Document that recommends a joint seabasing experimentation plan. However, DOD’s comments did not address which organization would be responsible for developing the experimentation campaign plan. As we recommended, we continue to believe that the U.S. Joint Forces Command should be charged with developing and implementing the joint seabasing experimentation campaign plan. As noted in our report, the U.S. Joint Forces Command is the DOD executive agent for joint warfighting experimentation. In this role the command is responsible for conducting joint experimentation on new warfighting concepts, disseminating the results of these activities, and coordinating joint experimentation efforts. DOD also partially agreed with our recommendation regarding the U.S. Joint Forces Command’s knowledge management portal. DOD concurred that a common portal should be established and used by the services. DOD stated that the U.S. Joint Forces Command’s knowledge management portal is one option that will be considered in order to share joint seabasing experimentation observations, insights, results, and planned activities. While we support DOD’s plans to establish a knowledge management portal for joint force projection and sustainment experimentation, we continue to believe our recommendations merit action and that DOD should direct the services to use the U.S. Joint Forces Command’s knowledge management portal to share information on joint seabasing rather than merely consider it as one option. Finally, DOD partially agreed with our recommendation regarding development of total ownership costs for joint seabasing options. 
DOD stated that once the Joint Requirements Oversight Council defines the required joint seabasing capabilities, total ownership costs for the options to satisfy the needed capability gaps will be developed as part of DOD’s Planning, Programming, Budgeting, and Execution and acquisition processes. We support DOD’s plans to develop total ownership costs; however, as our report points out, we do not believe that these actions alone will sufficiently ensure that total ownership costs for all joint seabasing options are synchronized. While total ownership costs will be estimated and synchronized for those options being developed in DOD’s JCIDS process for joint seabasing, the services are either considering or actively pursuing systems to develop their own seabasing capabilities. Some of these systems are approaching major milestone reviews for investment consideration. Requiring that total ownership cost estimates be developed for only those options developed in DOD’s joint seabasing JCIDS process will provide decision makers with an incomplete picture of all joint seabasing options. Without ensuring that total ownership cost estimates are developed as we recommended for both joint seabasing options being developed in JCIDS and those options being developed by the services, DOD will risk making investment decisions that may not be the most cost-effective means of establishing a joint seabasing capability. DOD also provided technical and editorial comments, which we have incorporated as appropriate. DOD’s comments are reprinted in appendix II of this report. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Secretary of the Navy; the Chairman, Joint Chiefs of Staff; the Commander, U.S. Joint Forces Command; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-4402 or stlaurentj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To assess the extent to which the Department of Defense (DOD) has employed a sound management approach for developing a joint seabasing capability, we interviewed officials from the Office of the Secretary of Defense, the Joint Staff, two combatant commands, the four military services, and the private sector; received briefings from relevant officials; and reviewed key documents. We compared DOD’s approach with best practices for managing and implementing major efforts. To identify these best practices, we reviewed our prior work, including GAO, Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. In the absence of a comprehensive planning document, we used relevant questions derived from the identified best practices in interviews with officials and in analyzing pertinent documents such as the August 2005 Seabasing Joint Integrating Concept, and instructions and manuals on DOD’s Joint Capabilities Integration and Development System (JCIDS), including (1) the Chairman of the Joint Chiefs of Staff Instruction 3170.01E, Joint Capabilities Integration and Development System (May 11, 2005); (2) the Chairman of the Joint Chiefs of Staff Manual 3170.01B, Operation of the Joint Capabilities Integration and Development System (May 11, 2005); and (3) the Joint Chiefs of Staff White Paper on Conducting a Capabilities-Based Assessment (CBA) Under the Joint Capabilities Integration and Development System (JCIDS) (January 2006). 
We also interviewed officials involved in the development of the joint seabasing concept to obtain information on how involved the services, combatant commands, Office of the Secretary of Defense, and the Joint Chiefs of Staff were in developing joint seabasing, what their respective roles and responsibilities were, the level of authority available to direct the services and combatant commands to participate in the JCIDS analyses, how information on joint seabasing development efforts and initiatives was shared, how initiatives that may support joint seabasing were coordinated, and other issues. In addition, we examined the Seabasing Working Group Web site to identify what information was being communicated through the Web site. To assess the extent to which a joint experimentation campaign plan has been developed, implemented, and used to inform decisions on joint seabasing options, we obtained briefings and interviewed officials from the Office of the Secretary of Defense, the Joint Chiefs of Staff, the U.S. Joint Forces Command, the U.S. Transportation Command, and the Army, Navy, Air Force, and Marine Corps. We also discussed the status of joint seabasing experimentation efforts and the extent to which these organizations coordinated with each other in conducting joint seabasing experimentation. We examined DOD guidance to identify and clarify roles and responsibilities for leading joint warfighting experimentation. To identify key aspects for conducting experimentation campaigns, we reviewed books and publications on experimentation campaigns, including Code of Best Practice: Campaigns of Experimentation; Code of Best Practice: Experimentation; Guide for Understanding and Implementing Defense Experimentation; and The Role of Experimentation in Building Future Naval Forces. We obtained and reviewed DOD and service reports and briefings containing the analyses and findings of experimentation activities. 
We also attended an Army joint logistics over-the-shore exercise demonstrating the unloading and loading of equipment to the shore when port facilities are inadequate, unavailable, or nonexistent. To assess the extent to which DOD and the services identified the cost of joint seabasing options so that decision makers can make informed, cost-effective decisions, we reviewed official statements, obtained briefings, and interviewed officials from the Office of the Secretary of Defense, Joint Chiefs of Staff, Army, Navy, Air Force, Marine Corps, Defense Science Board, and Center for Strategic and Budgetary Assessments. We examined DOD documents and data including, but not limited to, the President’s Fiscal Year 2007 Defense Budget, the Department of the Navy Ships and Aircraft Supplemental Data Tables, and the Report to Congress on Annual Long-Range Plan for Construction of Naval Vessels for FY 2007. We assessed the reliability of the data used through discussions with knowledgeable officials. We determined that the data used were sufficiently reliable for our objectives. We reviewed statements by the Congressional Budget Office and Center for Strategic and Budgetary Assessments. We also reviewed reports on seabasing including, but not limited to, Thinking About Seabasing: All Ahead, Slow by the Center for Strategic and Budgetary Assessments, Sea Basing by the Defense Science Board, Sea Basing by the Naval Research Advisory Committee, and Seabasing: Ensuring Joint Force Access From the Sea by the National Research Council. 
To identify guidance on cost estimating and total ownership costs, we reviewed DOD documentation, including DOD Directive 5000.1, The Defense Acquisition System (May 12, 2003), DOD Instruction 5000.2, Operation of the Defense Acquisition System (April 5, 2002), Chairman of the Joint Chiefs of Staff Instruction 3170.01E, Joint Capabilities Integration and Development System (May 11, 2005), and Chairman of the Joint Chiefs of Staff Manual 3170.01B, Operation of the Joint Capabilities Integration and Development System (May 11, 2005). We also reviewed our prior work on cost estimating and total ownership cost. We conducted our review from February 2006 to October 2006 in accordance with generally accepted government auditing standards at the following locations:

Offices of the Secretary of Defense, Washington, D.C.
- Office of Force Transformation
- Office of Program Analysis and Evaluation
- Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics

The Joint Staff, Washington, D.C.
- Office of Force Structure Resources and Assessment—Studies, Analysis, and Gaming Division

U.S. Joint Forces Command, Suffolk, Virginia
- Joint Experimentation Directorate
- Joint Futures Lab

U.S. Transportation Command, Scott Air Force Base, Illinois

Offices of the Chief of Naval Operations, Washington, D.C.
- Office of Expeditionary Warfare
- Office of Assessments, Seabasing Pillar

Naval Sea Systems Command, Washington, D.C.

U.S. Fleet Forces Command, Norfolk, Virginia

Navy Warfare Development Command, Newport, Rhode Island

Office of Naval Research, Arlington, Virginia

Naval War College, Newport, Rhode Island

Marine Corps Combat Development Command, Quantico, Virginia
- Capabilities Development Directorate, Seabasing Integration Division
- Operations Analysis Division, Mission Area Analysis Branch
- Marine Corps Warfighting Lab

Offices of the U.S. Army Chief of Staff, Washington, D.C. 
In addition to the contact named above, Patricia Lentini, Assistant Director; Sarah Baker; Renee Brown; Nicole Harms; Margaret G. Holihan; Ian Jefferies; Kevin L. O’Neill; Roderick Rodgers, Analyst-in-Charge; and Rebecca Shea made key contributions to this report.
Joint seabasing is one of several evolving concepts for projecting and sustaining forces without relying on immediate access to nearby land bases and could be the source of billions of dollars of investment. In future security environments, the Department of Defense (DOD) expects to encounter situations of reduced or denied access to areas of operation. Even where forward operating bases are otherwise available, their use may be politically undesirable or operationally restricted. GAO was asked to address the extent to which (1) DOD has employed a comprehensive management approach to joint seabasing, (2) DOD has developed a joint experimentation campaign plan for joint seabasing, and (3) DOD and the services have identified the costs of joint seabasing options. For this review, GAO analyzed joint requirements documents, experimentation efforts, and service acquisition plans. While DOD has taken action to establish a joint seabasing capability, it has not developed a comprehensive management approach to guide and assess joint seabasing. GAO's prior work showed that sound management practices for developing capabilities include involving top leadership, dedicating an implementation team, and establishing a communications strategy. DOD is developing a joint seabasing concept and various DOD organizations are sponsoring seabasing initiatives. However, DOD has not provided sufficient leadership to guide joint seabasing development and service initiatives are outpacing DOD's analysis of joint requirements. DOD also has not established an implementation team to provide day-to-day management to ensure joint seabasing receives the focused attention needed so that efforts are effective and coordinated. Also, DOD has not fully developed a communications strategy that shares information among the organizations involved in seabasing. 
Without a comprehensive management approach containing these elements, DOD may be unable to coordinate activities and minimize redundancy among service initiatives. DOD has not developed a joint experimentation campaign plan, although many seabasing experimentation activities—including war games, modeling and simulation, and live demonstrations—have taken place across the services, combatant commands, and other defense entities. No overarching joint seabasing experimentation plan exists to guide these efforts because the U.S. Joint Forces Command has not taken the lead in coordinating joint seabasing experimentation, although it has been tasked with developing a biennial joint experimentation campaign plan for future joint concepts. While the U.S. Joint Forces Command is in the process of developing the plan, it is unclear to what extent this plan will address joint seabasing or be able to guide joint seabasing experimentation efforts. Without a plan to direct experimentation, DOD and the services' ability to evaluate solutions, coordinate efforts, and disseminate results could be compromised. While service development efforts tied to seabasing are approaching milestones for investment decisions, it is unclear when DOD will complete development of total ownership cost estimates for a range of joint seabasing options. Joint seabasing is going through a capabilities-based assessment process that is intended to produce preliminary cost estimates for seabasing options. However, DOD has not yet begun the specific study that will identify potential approaches, including changes to doctrine and training as well as materiel solutions, and produce preliminary cost estimates. DOD officials expect that the study will not be complete for a year or more. Meanwhile, the services are actively pursuing a variety of seabasing initiatives, some of which are approaching milestones that will guide future program investments.
Until total ownership cost estimates for joint seabasing options are developed and made transparent to DOD and Congress, decision makers will not be able to evaluate the cost-effectiveness of individual service initiatives.
Head Start, the centerpiece of federal early childhood programs, was created in 1965 as part of President Johnson’s War on Poverty. Head Start’s primary goal is to improve the social competence of children in low-income families. Social competence is the child’s everyday effectiveness in dealing with both the present environment and later responsibilities in school and life. Social competence involves the interrelatedness of cognitive and intellectual development, physical and mental health, nutritional needs, and other factors. To support the social competence goal, Head Start programs deliver a broad range of services to children. These services include educational, medical, nutritional, mental health, dental, and social services. Another essential part of every program is parental involvement in parent education, program planning, and operating activities. Head Start programs are governed by performance standards, which set forth the expectations and minimum requirements that all Head Start programs are expected to meet. Program officials expect these standards, however, to be largely self-enforcing, with the exception that Head Start’s 12 regional offices conduct on-site monitoring of Head Start programs every 3 years. The program also has a separate set of performance standards for services for children with disabilities. Both sets of performance standards, which have governed the program since 1975, were revised in the 1990s. Head Start issued performance standards for children with disabilities in 1993. The performance standards for the rest of the programs became effective in January 1998 and attempt to reflect the changing Head Start population, the evolution of best practices, and program experience with the earlier standards. Head Start targets children from poor families, and regulations require that at least 90 percent of the children enrolled in each program be low income. 
By law, certain amounts are set aside for special populations of children, including those with disabilities and Native American and migrant children. The program is authorized to serve children at any age before the age of compulsory school attendance; however, most children enter the program at age 4. Head Start programs may be delivered in any of three Head Start-approved program options. One option involves the enrolled child receiving the bulk of Head Start services at a center; however, some home visits are required. Centers operate varying numbers of hours per day for either 4 or 5 days per week. Providing services at children’s homes is a second option. The children receive the bulk of services at home, with some opportunities for them to interact in a group setting. The combination option—the third—entails both center attendance and home visits. In addition, programs may implement a locally designed option, which, as the name implies, is developed at the local program level. Locally designed options may take many forms, such as family day care homes. How are services delivered in a center setting, the most common option? The center may be housed in a church basement, at a parent’s work site, in a public school building, at a college or university, or at some other location. A Head Start teacher, assisted by a second adult, instructs the children using a curriculum relevant to and reflective of the needs of the population served. Head Start regulations emphasize that large and small group activities take place throughout the day. Children should be encouraged to solve problems, initiate activities, explore, experiment, question, and gain mastery through learning by doing. In addition to educational services, children receive other services. Meals and snacks are provided as appropriate. Within a certain number of days of entering the program, children receive a thorough health screening and medical and dental examination.
This screening may take place on or off site. Program staff ensure that treatment and follow-up services are arranged for all health problems detected. In addition, Head Start staff are expected to visit the children’s homes to assess their and their families’ need for services. For example, these visits may identify the families’ need for services such as emergency assistance or crisis intervention. Staff may also provide families with information about community services and how to use them. During these visits, staff are expected to develop activities for family members to use at home that will reinforce and support the child’s total Head Start experience. Head Start is administered by HHS’ Administration for Children and Families (ACF), which includes the Head Start Bureau, one of several bureaus within ACF. Grantees, which deliver Head Start services at the local level, numbered about 1,440 in fiscal year 1996. Grantees may contract with organizations—called delegate agencies—in the community to run all or part of their local Head Start programs. Grantees and delegate agencies include public and private school systems, community action agencies and other private nonprofit organizations, local government agencies (primarily cities and counties), and Indian tribes. Unlike some other federal social service programs funded through the states, HHS awards Head Start grants directly to local grantees. HHS distributes Head Start funds using a complex formula, based upon, among other things, previous allotments and the number of children, aged 5 and under, below the poverty line in each state compared with the number in other states. Head Start, a federal matching grant program, typically requires grantees to obtain 20 percent of program costs from nonfederal funds. These funds can be in the form of cash, such as state, county, and private money, or in-kind contributions such as building space and equipment.
Head Start regulations require that programs identify, secure, and use community resources in providing services to Head Start children and their families before using Head Start funds for these services. As a result, Head Start programs have established many agreements for services. Head Start has served over 16 million children since its inception. The passage of the 1990 Head Start Expansion and Quality Improvement Act increased Head Start funding both to allow more children to participate and to improve the quality of services. In fiscal year 1996, Head Start received $3.6 billion in funding and served about 752,000 children. This figure reflects children served through all of Head Start’s programs. The regular Head Start program serves children and families residing in the 50 states and the District of Columbia. About 85 percent of Head Start children are served through the regular Head Start program. Head Start also operates programs for migrant and Native American populations. Recognizing that the years from conception to age 3 are critical to human development, the Congress established Early Head Start in 1994. This program targets children under age 3 from low-income families as well as expectant mothers. Since 1967, however, Head Start has served children and families now targeted by the Early Head Start program through Parent Child Centers. In the past 3 years, we have issued several reports on the Head Start program. One report discussed local perspectives on barriers to providing Head Start services. That report, among other things, concluded that Head Start lacked enough qualified staff to meet the complex needs of children and families. Other barriers included the limited availability of health professionals in the community willing to help Head Start staff provide services and programs’ difficulty obtaining suitable facilities at reasonable cost.
In our most recent report, we concluded that the body of research conducted on the Head Start program does not provide information on whether today’s Head Start is making a positive difference in participants’ lives. Specifically, we found that the body of research conducted on the program was inadequate for use in drawing conclusions about the impact of the national program in any area in which Head Start provides services such as school readiness or health-related services. We also stated that no single study of the program used a nationally representative sample so that findings could be generalized to the national program. We recommended that the Secretary of HHS include in HHS’ research plan an assessment of the impact of regular Head Start programs. In commenting on this report, HHS mentioned, among other things, that estimating program impact at the national level is not appropriate because of the extreme variability of local programs. That is, local Head Start sites have great flexibility, and, even though all programs share common goals, they may operate very differently. Thus, HHS considers a single, large-scale, national study of impact to be methodologically inappropriate. Head Start programs were funded to serve about 701,000 children at any one time in program year 1996-97; however, the number of different children enrolled in the program throughout the 1996-97 program year was about 782,000, which averaged about 454 children per program, ranging from a low of 17 to a high of 6,045. The number of different children enrolled in the program includes children who are funded with all sources of funds, such as those received from state agencies, and who have been enrolled in Head Start for any length of time, even if they dropped out or enrolled late, provided they have attended at least one class or, in home-based programs, received at least one home visit. 
Head Start estimates capacity, or the number of children that can be served at any one time, in two ways. Total funded enrollment (701,000) is the number of children that can be served at any one time with Head Start grant funds, as well as funds from other sources, such as state agencies. This estimate includes children, regardless of funding source, who are an integral part of the Head Start program and who receive the full array of Head Start services. Head Start-funded enrollment (667,000) is an estimate of the number of children that can be served at any one time with Head Start grant funds only (see table II.1 in app. II for enrollments by state). Although programs are authorized and expected to serve a certain number of children, according to Head Start Bureau officials, local programs may negotiate with their regional offices to adjust their enrollment. Thus, programs may choose to fill fewer slots or establish more slots. To illustrate, a program authorized to serve 50 children may choose to actually serve only 40 children or to serve 60. By serving fewer children, the program can support other enhancements, such as providing employees with full benefits. Head Start Bureau officials also stated that some states have regulations and laws that affect the number of slots that can be filled. A state that requires training and licensing of its early childhood staff, for example, might be limited in the number of children it could serve if licensed staff cost more. Differences in the cost of living can also affect the number of slots that can be filled. In addition, Head Start programs served about 711,000 families of Head Start children, which Head Start regulations define as all people living in the same household who are supported by the income of the parent or guardian and related by blood, marriage, or adoption.
Head Start does not require that programs count the number of individual family members served, however, so the number of services provided them is unknown. The children and families Head Start served had some similar demographic characteristics (see fig. 1). Most were either 3 (31 percent) or 4 (63 percent) years old. Most of the children—79 percent—spoke English as their main language. Spanish-speaking children constituted the next largest language group—18 percent. About 38 percent of the children were black, 33 percent were white, and 25 percent were Hispanic. About 13 percent of Head Start children had some sort of disability. Most Head Start families have more than one child; most have two or three children (see fig. 2). In addition, most (61 percent) have only one parent or are headed by other relatives, or they are foster families or have other living arrangements. Head Start families are generally very poor as indicated by several measures (see fig. 3). More than one-half are either unemployed or work part time or seasonally, and about 60 percent have family incomes under $9,000 per year. Furthermore, only 5 percent have incomes that exceed official poverty guidelines, and 46 percent receive TANF benefits. Through Head Start, children received access to a large array of services. Children received medical and dental services, immunizations, mental health services, social services, child care, and meals. According to Head Start’s annual survey, nearly all children enrolled in Head Start received medical screening/physical exams, dental exams, and immunizations in the 1996-97 program year. Most children received medical screening, including all appropriate tests and physical examinations as well as dental examinations by a dentist. Most had also received all immunizations required by the Head Start immunization schedule for the child’s age. Children also received education services in various settings. 
In addition, Head Start programs provided children’s families with access to services (see table II.2 in app. II). Of the services we asked about, parent literacy, social services, job training, and mental health were the most frequently provided (see table II.4 in app. II). Programs were least likely to provide dental and medical services to siblings and other family members, with 64 percent reporting they never provided dental services and 56 percent reporting they never provided medical services. Most children attended centers that operated part day and part year. About 90 percent of the children received services through center programs. Fifty-one percent of children attending centers went to centers that operated 3 to 4 hours per day (see fig. 4). Another 42 percent went to centers that operated between 5 and 7 hours per day. Only 7 percent of the children went to centers that operated 8 or more hours per day. In addition, 63 percent of the children attended centers that operated 9 months of the year. Another 27 percent of the children attended centers that operated 10 to 11 months, and even fewer—7 percent—attended centers that operated year round. According to Head Start’s survey, about 38 percent of the families needed full-day, full-year child care services. However, this proportion may increase dramatically as welfare reform is implemented. About 44 percent of the families needing full-day, full-year child care services left their children at a relative’s or unrelated adult’s home when the children were not in Head Start, according to Head Start’s survey. In 1997, the Congress appropriated additional funds to, among other things, increase local Head Start enrollment by about 50,000 children.
Recognizing that an increasing proportion of Head Start families work and many who may receive public assistance are participating in welfare reform initiatives in response to TANF, the Head Start Bureau announced that programs that provide more full-day, full-year Head Start services will receive special priority for funding. Head Start urged programs to consider combining Head Start expansion funds with other child care and early childhood funding sources and to deliver services through partnerships such as community-based child care centers. This focus on providing full-day, full-year services departs from previous expansion priorities, which emphasized part-day, part-year, or home-based services. For our review, we talked with Head Start program officials who had applied for expansion funds to meet the needs of working parents. Officials operating a program in Florida, for example, stated that they plan to expand the number of days and hours the program currently operates: hours of operation will be extended from 7:30 a.m.–4:00 p.m. to 6:30 a.m.–7:00 p.m. In addition, officials operating a program in Vermont stated that they plan to provide full-day, full-year services as well. Their strategy involves collaborating with an existing private center that will offer children extended-day services. Head Start provides services in a number of ways. In some instances, Head Start programs both delivered and paid for services. In most cases, however, Head Start arranged for or referred participants to services, and some other agency delivered and paid for the services. In these cases, Head Start provided information to help participants get services from some other source. For example, when asked the main methods the programs used to provide medical services for enrolled children, 73 percent of survey respondents said that they referred participants to services, and some other entity or program, such as Medicaid, primarily paid for the service (see fig.
5 and table II.3 in app. II). Because most Head Start children are eligible for Medicaid’s Early and Periodic Screening, Diagnosis, and Treatment Program, Head Start programs may refer children to Medicaid providers; thus, Head Start provides access to these services with little or no impact on the Head Start programs’ budgets. The same was true of dental services and immunizations. About 40 percent of the programs, however, reported Head Start funds as the primary source for meals and food, even though Head Start expects programs to seek reimbursement for these expenses from the U.S. Department of Agriculture’s (USDA) Child and Adult Care Food Program. Education was the service most directly provided by Head Start for enrolled children. Nearly 90 percent of programs reported that they both delivered and funded education services for enrolled children. Some Head Start program officials we interviewed, however, told us that they contracted with private preschools or child care centers to provide education services. These cases are rare, however; only 3 percent of respondents to our survey reported that Head Start funded, but someone else delivered, education services. These programs purchased “slots” in centers operated by other organizations for about 2,000 children. In addition, Head Start typically provides services for children’s siblings and other family members indirectly (see table II.4 in app. II). Of those respondents to our survey who indicated that they provided services to siblings and other family members, at least half reported that Head Start programs neither delivered nor paid for the services. As shown in figure 6, programs were more likely to report full Head Start involvement (that is, the program paid for and delivered the service) in the areas of education; social services; child care; and meals, food, and nutrition. For our review, we asked several Head Start directors about some of the services they provided directly to family members.
Program officials stated that they typically provided services to the siblings, while providing services to the enrolled child. For example, education services provided to enrolled children in a home-based program may be provided to siblings as well, benefiting all enrolled children and their siblings. The director of a program in Montana, for example, stated that staff bring along snacks for the siblings during home visits. The director of a program in Ohio stated that if the enrolled child, as well as the child’s siblings, needs a physical exam, they will ensure that the siblings are also referred for physical exams. When asked to report the funds received from all sources to operate their Head Start programs, survey respondents reported that different funding sources supported Head Start programs (see fig. 7). Most programs— about 90 percent—had multiple sources. The number of different funding sources that respondents reported varied (see fig. 8). The largest portion of programs, 40 percent, reported one other non-Head Start funding source followed by 27 percent of the respondents who reported two other non-Head Start funding sources. At the other extreme, however, the number of programs reporting six to seven funding sources was small— about 1 percent. The multiple funding sources included other federal programs, such as the Child Care and Development Block Grant Program and the Social Services Block Grant Program, both of which provide funding for child care. USDA was also a source of federal funding for programs, which, among other things, supplemented Head Start program food and nutrition resources by reimbursing food costs for eligible children. States, charitable organizations, and businesses also provided program funds. Some of this non-Head Start funding may have been part of the 20 percent of nonfederal matching funds that programs typically have to provide. 
In addition, programs received in-kind support for their operations such as building space, transportation, training, supplies and materials, and health services. In fact, many Head Start agencies also operated other programs from which Head Start participants sometimes received services but whose budgets were separate from Head Start. For example, we spoke to one Head Start director whose program was operated by a public school. According to this official, the school district bears a number of the Head Start program expenses. For example, the school district bears a portion of the cost of facilities, Head Start children receive their meals in the cafeteria using school staff, and some staff funded with title I and special education money provide services for Head Start children. As shown in table 1, respondents reported receiving a total of $3.1 billion to operate their Head Start programs in their most recently completed budget year, of which $2.7 billion, or 85 percent, was income from the Head Start grant. Head Start grant funds were the largest single source of funding for most programs. For example, for about 77 percent of the respondents, Head Start funding represented between 80 and 100 percent of the programs’ total funds. Other non-Head Start funding totaled about $456 million and represented about 15 percent of the total funds received. The states provided the largest source of other funding, which totaled about $169 million and represented about 5 percent of the total funds in programs’ last budget year. The next largest source of funds came from a federal source—USDA. USDA funding of $168 million also represented about 5 percent of the total program funds. The non-Head Start funding increased the amount of funds available per child. Average Head Start grant funds per child were $4,637 for the responding programs. 
The total amount of funds per child, including Head Start grant funds, was $5,186, a difference of about $549, or 12 percent, Head Start-wide. Across most states and territories, the non-Head Start funding increased the amount available per child (see table II.5 in app. II). As shown in figure 9, for the majority of states, the additional funds increased the amount available per child by over 10 percent; in four states and the District of Columbia, additional funds increased the amount available per child by at least 21 percent. Head Start and total funding per child varied considerably (see table II.6 in app. II). Across all programs, the median amount of Head Start grant funds per child was $4,450 for the responding programs but ranged from a low of $792 to a high of $16,206. Across all programs, median total funds per child were $4,932, ranging from $1,081 to $17,029. Several factors may explain the funding variation by state and program, such as the hours and days of program operation and the characteristics of the children served. We spoke with a Head Start director in the District of Columbia, whose program had high per child Head Start and total funding. The director told us that the program provided service for children in centers that operated year round and for 10 hours or more per day. We also spoke with a director of a program in New York City that had high funding per child. That program provided part-day center services. The children it served, however, had multiple disabilities or special needs. We also spoke with directors whose funding per child was low. One director stated that because the Head Start program is operated by the public school, the school bears a number of the Head Start program’s expenses, such as facilities, food, and some staff costs. Head Start programs spent 68 percent of their overall funds on personnel.
Personnel included teachers, teacher aides, home visitors, social service workers, and administrators. Personnel costs for educational services were the single largest personnel expense (53 percent). According to Head Start’s annual survey, Head Start programs employed many staff. About 129,000 staff worked either full or part time in regular Head Start programs nationwide (see fig. 10). These staff, in addition to providing direct services, such as education, facilitated children’s and families’ access to services. One way Head Start tries to encourage parental involvement is by providing parents preference for employment in Head Start programs as nonprofessionals. Thus, about one-third of the staff were parents of current and former Head Start children. The remaining funds—32 percent—were spent on nonpersonnel-related expenses. Interestingly, direct payment for medical services accounted for only 3 percent of nonpersonnel-related expenses. In this area, programs are encouraged to seek non-Head Start sources of funds, and many programs link families and children to the Medicaid Early and Periodic Screening, Diagnosis, and Treatment Program. In addition, programs spent their funds on a range of services. As shown in figure 11, education services were the largest expense (39 percent). The smallest expenses were for health (4 percent), disabilities services (3 percent), and parent involvement services (3 percent). Many Head Start programs reported that state-funded preschools (70 percent), other preschools, child development and child care centers (90 percent), and family day care homes (71 percent) operated in their communities serving Head Start-eligible children. The extent to which these programs resemble Head Start is not known. However, programs that serve disadvantaged children may—like Head Start—help children and families obtain additional services such as medical and social services. 
To test this assumption, we gathered information on Head Start agencies that also operated other early childhood programs. About 11 percent of the Head Start respondents (in 39 states) reported that they operated other early childhood programs and that these programs served Head Start-eligible children. These children received some or most—but not all—of the services typically provided by Head Start programs. Respondents reported serving about 14,000 Head Start-eligible children through these other programs. California served the greatest number of such children (3,216), followed by Kentucky (2,652) (see table II.7 in app. II). These programs provided many of the same services as Head Start programs, but not all services were provided to all children. Education services, meals, social services, and immunizations were the most often provided; dental, medical, and other nutrition services were the least often provided. Thirty percent of the programs responded that they provided no services to families. Families or siblings were more likely to receive social services and parent literacy training through Head Start and less likely to receive health-related services, such as dental, mental health, and immunization services. In many respects, the Head Start program is at a crossroads because the context in which it operates today differs greatly from that of 30 years ago when the program was established. The services available to poor children have changed and communities have enhanced resources for serving poor children and their families. Consequently, Head Start facilitates or brokers many services provided by others, referring and linking families to these services, rather than providing them directly. The one service that almost all Head Start programs provide directly is education, although the number of early childhood education programs other than Head Start has grown in the past 30 years. Furthermore, changes in welfare policy have important implications for Head Start.
Most Head Start programs operate for only part of the day and part of the year. As changes in welfare policy require increasing numbers of poor people—including Head Start parents—to seek and maintain employment, however, the need for full-day, full-year services will intensify. The administration’s proposals to help working parents secure affordable, quality child care include substantially increasing Head Start enrollment. Head Start’s predominantly part-day, part-year programs present obstacles for meeting the needs of working families. Head Start will need to balance the administration’s wish to serve more eligible children, which has typically been done by creating more part-day, part-year slots, with the need for more full-day, full-year services more compatible with working families’ needs. Finally, information about Head Start’s effectiveness and the efficiency of various Head Start models is lacking. As we reported earlier, although Head Start research has been conducted, it does not provide information on whether today’s Head Start is positively affecting the lives of today’s participants whose world differs vastly from that of the 1960s and early 1970s. In addition, funding for Head Start programs varies widely. We do not know to what extent, however, this variation may be attributable to efficiencies in providing services or to other factors such as programs’ ability to leverage other community resources, characteristics of the population served, or program structure. ACF provided general comments about the Head Start program and specific technical comments, which we incorporated in the report as appropriate. Four of ACF’s comments that were not incorporated in the report addressed services provided to children’s siblings, data on hours and months of attendance, use of funds for food costs, and hiring of parents. 
ACF commented that our discussion of services provided to enrolled children’s siblings is misleading because it implies that Head Start programs are actively providing services to such children. ACF contends that Head Start programs do not use grant funds to provide services to siblings and that such services are provided only to the extent that they are part of the enrolled child’s services. Nevertheless, a small percentage of Head Start survey respondents reported that they did use Head Start funds to deliver services to families and siblings. Our report emphasizes, however, that when provided, many of these services are neither paid for nor delivered by Head Start. Head Start facilitates siblings’ and families’ access to services in much the same way as it does to enrolled children. We also report that our interviews with Head Start officials showed that siblings sometimes receive services as part of the program’s services to the enrolled child. For example, Head Start staff may bring along snacks for siblings during home visits and provide education services for the siblings during such visits. It is likely that in such a situation, the Head Start program would consider this to be providing services directly because Head Start funds might have been used to pay the staff’s salary and the cost of siblings’ snacks. In addition, ACF commented that Head Start does collect data on the number of hours per day or months per year that enrolled children attend center programs and that such information is available through its Head Start Cost data system. During this study, we reviewed the Head Start Cost data system and found—and Head Start officials had previously confirmed—that reporting of Head Start Cost data is optional and not all programs provide such data. 
Furthermore, the data collected by the system on the number of hours per day or months per year that children attend center programs reflect programs’ projected center operating schedules, not their actual schedules. ACF also stated that our discussion of USDA reimbursement is somewhat inaccurate and that USDA covers the vast majority of all food costs incurred by Head Start programs, with Head Start grant funds paying only a small portion of these costs. ACF stated that it is not conceivable that 40 percent of Head Start programs are using Head Start funds as their primary source of meals and food because programs are required to seek such reimbursement from USDA. We did not change our figures in the report, however, because they directly reflect the reports of our survey respondents. In addition, ACF stated that the discussion of hiring parents should clarify that Head Start hires parents only for jobs for which they are qualified and that many parents have advanced through the Head Start ranks and now hold professional-level positions in the program. We assessed, however, neither the qualifications of the parents Head Start employs nor the number who hold professional-level positions in the programs, and the report therefore does not address these issues. We are sending copies of this report to the Secretary of Health and Human Services, the Head Start Bureau, appropriate congressional committees, and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix V. 
In preparation for Head Start’s reauthorization, the Chairman and Ranking Minority Member, House Committee on Education and the Workforce; the Chairman and Ranking Minority Member, Subcommittee on Early Childhood, Youth and Families, House Committee on Education and the Workforce; the Chairman and Ranking Minority Member, Subcommittee on Children and Families, Senate Committee on Labor and Human Resources; and Representatives Cunningham and Kildee asked us to describe the (1) number and characteristics of Head Start participants, (2) services provided and the way they are provided, (3) federal and nonfederal program dollars received and spent by programs delivering Head Start services, and (4) other programs providing similar—in part or in whole—early childhood services. As agreed with the requesters’ offices, however, we did not comprehensively review other early childhood programs. We focused on collecting information on Head Start’s regular program; thus, programs serving special populations, such as migrant and Native American children and pregnant women and infants, were excluded. About 85 percent of Head Start children are served through regular Head Start programs. Programs for special populations represent only a small portion of Head Start children served, and each program is unique. We administered our survey about the same time Head Start conducted its annual survey (May 1997), which we also analyzed. Both surveys collected information on the 1996-97 program year, which spanned September 1996 to May 1997. Head Start refers to its annual survey as the Program Information Report (PIR). Our survey was mailed to 1,783 regular Head Start programs; of these, 1,722 were determined to be active Head Start programs that served children. The PIR was a second source of information on programs. (Both instruments are described in more detail in the following section.) 
Because the mailing list HHS provided us was the same one used for the PIR, all regular Head Start programs should have received both our survey and the PIR. To obtain a broader understanding of Head Start, our questionnaire mostly avoided questions appearing on the PIR. For example, we asked respondents to report the number of months and hours of the day children attended centers, the number of classes operated on weekends, and whether Head Start programs paid for children to attend centers operated by someone else. We also asked them the number of months they provided services in their home-based programs. In addition, we asked how services are provided to enrolled children and their family members and the extent to which family members are served. We also asked them about the funds they received to operate their Head Start programs as well as their Head Start program expenditures. We asked Head Start programs if they served Head Start-eligible children through other early childhood programs they operated and about the services provided them and their families. Our complete survey appears in appendix III. HHS requires that all grantees and delegate agencies complete annual PIRs. Although the questions asked in the report change somewhat from year to year, in general, the report asks about program management issues. Among other things, the 1996-97 report asked about the numbers of children served by the Head Start program in that program year, the number receiving particular kinds of services, and details about the Head Start staff, for example, the number of staff in various kinds of positions, their educational level, and so forth. All Head Start programs are required to complete a PIR; however, not all had done so at the time of our analyses. Because we collected data from two major sources, response rates are shown in table I.1 in several ways. 
The overall response rate (98 percent) is based on the number of programs from which information was obtained from at least one source divided by the number of eligible respondents. Our survey response rate (86 percent) is based on the number of programs completing and returning our survey divided by the number of eligible respondents. Finally, the PIR response rate (94 percent) is based on the number of eligible respondents for whom HHS provided us with completed 1996-97 PIR information divided by the number of eligible respondents. All surveys are vulnerable to some nonsampling errors, including errors due to imperfect population lists, measurement errors due to ambiguous questions or inaccurate responding, or errors due to lack of response. These errors may affect both our survey and the PIR to some unknown degree. We took several steps to minimize the impact of these errors. First, we examined responses for extreme values. In many cases, we reviewed questionnaires for explanations of questionable responses. When we could not resolve questions, we called survey respondents for clarification. In a few cases, respondents had reported numbers incorrectly; in these cases, we corrected the data or, if correction was not possible, we rejected the erroneous data. Second, we looked for a systematic pattern in the distribution of nonrespondents. Because we thought that program size (defined by total funded enrollment) might be related to response patterns, we examined whether programs of various sizes were more or less likely to respond. Although smaller programs tended to be somewhat less likely to respond, the difference in the response rate, coupled with the small number of nonrespondents, had an inconsequential overall impact. In most cases we based our analyses simply on the answers of survey respondents. No weighting for nonresponse was done because our response rate was so high that adjustments for nonresponse would hardly have affected our findings. 
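The three response-rate definitions above reduce to one calculation: the number of programs responding divided by the number of eligible programs. A minimal sketch follows; the 1,722 eligible programs come from this report, but the raw return counts are hypothetical values chosen only to reproduce the published percentages.

```python
# Sketch of the response-rate definitions used in table I.1.
# ELIGIBLE is taken from the report; the three return counts are
# hypothetical illustrations, not figures stated in the report.

def response_rate(returned: int, eligible: int) -> int:
    """Response rate as a whole percentage: respondents / eligible."""
    return round(100 * returned / eligible)

ELIGIBLE = 1722        # active regular Head Start programs (from the report)
SURVEY_RETURNS = 1481  # hypothetical: programs returning the GAO survey
PIR_RETURNS = 1619     # hypothetical: programs with completed 1996-97 PIRs
ANY_SOURCE = 1688      # hypothetical: programs with data from either source

print(response_rate(SURVEY_RETURNS, ELIGIBLE))  # 86
print(response_rate(PIR_RETURNS, ELIGIBLE))     # 94
print(response_rate(ANY_SOURCE, ELIGIBLE))      # 98
```

Note that the overall rate exceeds either single-source rate because a program counts toward it if it responded to at least one of the two instruments.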
In reporting total enrollment information, however, we adjusted the data so that more complete total enrollment could be reported. For those programs lacking enrollment data, we imputed enrollment from the 1996-97 PIR (or in cases where the 1996-97 PIR was not available, we used the 1995-96 PIR). To gather illustrative information, we conducted telephone interviews of nine Head Start programs in Florida, Iowa, Montana, New York, Ohio, Pennsylvania, Vermont, Arkansas, and Oregon, which were judgmentally selected. We selected large and small programs in different parts of the country and programs representing a mixture of the types of program options Head Start offers such as centers and homes. We selected programs operated by different types of agencies—including community action agencies, universities, and nonprofit organizations. In addition, we selected grantees that operated the program directly as well as those that did not and programs that received funds from various sources to operate their program as well as those operating with only Head Start grant funds. Finally, we selected programs in which a portion of the total enrollment was funded with non-Head Start income. We asked Head Start program officials a number of questions, including whom they served, their funding sources, availability of other early childhood programs in their communities, and general questions about program operations. We also asked programs about further program expansion. Finally, we validated selected responses to our survey by visiting several Head Start programs, which we also wanted to observe. We visited programs in Philadelphia, Pennsylvania; Boston, Massachusetts; Kansas City, Missouri; Chicago, Illinois; Atlanta, Georgia; and Seattle, Washington. We conducted our work between March 1997 and November 1997 in accordance with generally accepted government auditing standards. The tables in this appendix provide selected information on Head Start programs. 
Table II.1 presents data on Head Start enrollments by state. Table II.2 provides data on the extent to which families received services, and tables II.3 and II.4 present information on how services are provided to enrolled children and their families. Table II.5 presents, by state, information on the average Head Start grant funding per child and the average funding per child from all sources, including Head Start grants. Table II.6 presents data on the variation in funds per child by and within state. Table II.7 presents information on the number of Head Start-eligible children receiving services through other early childhood programs that Head Start agencies operate. [Tables II.1 through II.7 appear here, showing Head Start grant funds per child and total funds per child (in dollars) by state, average (median) Head Start funds per child, and the number of children receiving some or most Head Start-like services; respondents in some states and territories did not report serving children who received some or most Head Start-like services.] In addition to those named above, the following individuals made important contributions to this report: Deborah Edwards developed the survey, performed the statistical analyses, and co-wrote the report; Donnesha Correll co-wrote the report and managed survey operations; Wayne Dow performed the statistical analyses; Liz Williams edited the report; and Ann McDermott created the report graphics. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the Head Start Program in preparation for its reauthorization, focusing on: (1) the number and characteristics of those served; (2) the services provided and the way they are provided; (3) federal and nonfederal program dollars received and spent by programs delivering Head Start services; and (4) other programs providing similar early childhood services. GAO noted that: (1) Head Start served about 782,000 disadvantaged children and 711,000 families in program year 1996-97, according to GAO's review; (2) the demographics of these children and families were similar in many respects; (3) most children were 4 years old and spoke English as their main language; (4) families typically had more than one child and were very poor; (5) through Head Start, children received access to a large array of services, as did their families in some cases; (6) most child and family services, however, were neither paid for nor provided directly by Head Start programs; (7) instead, Head Start programs often functioned as a coordinator or facilitator, referring and linking children and families to needed services; (8) although many families required full-day, full-year child care, Head Start services were typically provided in centers that operated part day on schedules that paralleled the school year; (9) only a small percentage of children attended programs in centers that operated year round; (10) virtually no programs operated on weekends, and only a few operated before 7 a.m. 
or after 5 p.m.; (11) almost half of the families identified as needing full-day services left their children at a relative's or unrelated adult's home when the children were not in Head Start; (12) most programs responding to GAO's survey secured funding for their operations from multiple sources; (13) among all programs in the states and territories, the average amount of Head Start grant funds per child was $4,637, ranging from a low of $792 to a high of $16,206; (14) the additional income programs received from other sources increased the amount of funds available per child to an average of $5,186, 12 percent more income per child; (15) total funds per child varied widely by program, ranging from $1,081 to $17,029 per child; (16) programs spent their income on a variety of services and activities; however, the largest proportion of programs' overall income was spent on education services; (17) most Head Start programs reported that state-funded preschools, other preschools, child development centers and child care centers, and family day care homes operated in the same communities as Head Start programs; and (18) although GAO's review did not determine the extent to which these programs resemble Head Start, some that serve disadvantaged children sometimes help children and families obtain additional services, such as medical services, as Head Start does.
PPACA requires HHS to perform several duties related to CER, including disseminating research findings, training researchers, and building data capacity for research. (See table 1.) Although PPACA did not direct HHS to complete these duties by a specified deadline, it appropriated funds to the Patient-Centered Outcomes Research Trust Fund (PCORTF) through fiscal year 2019 to enable HHS and PCORI to implement their respective requirements. PPACA specified that 20 percent of the amounts appropriated or credited to the PCORTF be transferred to the Secretary of HHS in each of fiscal years 2011 through 2019. In total, HHS estimates that about $731 million will be transferred to AHRQ (16 percent of the PCORTF) and about $190 million will be transferred to ASPE (4 percent of the PCORTF). With the exception of the amounts transferred to HHS, PPACA designates the remaining PCORTF funds for PCORI’s CER work—an estimated $3.5 billion from fiscal year 2010 through fiscal year 2019. AHRQ has taken some steps to disseminate CER as required under PPACA, including the creation of systematic reviews to develop CER findings, tools to disseminate CER, plans for a website to list and provide links to research databases that include CER, and plans for receiving feedback from stakeholders to whom information is disseminated. However, AHRQ has yet to take other actions that would help it address all PPACA dissemination requirements. AHRQ has taken some steps to implement the law’s key requirements for disseminating federally funded CER, that is, to (1) broadly disseminate—develop and distribute—CER in consultation with NIH, (2) create tools that organize and disseminate research findings to certain targeted stakeholder groups, (3) develop a publicly available database, and (4) establish a process for receiving feedback from entities to which information is disseminated. 
From fiscal year 2012 through 2013, AHRQ obligated about $37 million of the estimated $731 million it expects to receive through 2019 from the PCORTF on its dissemination activities. Development and distribution of CER findings. AHRQ contributes to the dissemination of CER in various ways, including through the development of systematic reviews, technical briefs, and research summaries that explore the benefits and harms of treatments. In particular, a key method to disseminate CER is through systematic reviews—syntheses of existing research that compare the effectiveness and harms of different healthcare interventions. A systematic review is an assessment and evaluation of all research studies that address a particular clinical issue. Researchers use an organized method of locating, assembling, and evaluating a body of literature on a particular topic. Systematic reviews typically include a description of the findings from the research studies. AHRQ identifies topics for systematic review of CER, such as cardiovascular disease and arthritis, by evaluating topics nominated by individuals or groups against program selection criteria to determine whether the topic is appropriate for review. In addition to using its own criteria to identify CER topics for systematic reviews and dissemination, AHRQ documentation states that the agency will consult with experts, such as those from NIH, and review literature to determine whether similar systematic reviews of relevant studies have already been conducted by other agencies or research organizations, in order to reduce potential duplication. Topics selected for a systematic review are further refined with input from key stakeholder groups, technical experts, and patients to develop focused research questions. According to AHRQ officials, research funded by PCORI is not yet included in these systematic reviews because PCORI research is not yet complete. 
For each systematic review, AHRQ synthesizes CER findings from existing research, and the agency disseminates these findings to various targeted stakeholder groups. From June 2012 to June 2014, AHRQ synthesized CER findings through 74 systematic reviews. (See appendix I for a listing of the 74 systematic reviews for which AHRQ disseminated CER findings.) Once a systematic review is complete, AHRQ follows procedures included in its dissemination guidance materials to develop a marketing plan that identifies key messages and targeted stakeholder groups, as well as the types of dissemination mechanisms it will use to conduct outreach. AHRQ officials told us they generally distribute CER results using the same mechanisms we previously reported on in 2012. These mechanisms include social media, as well as AHRQ’s website and AHRQ’s Effective Healthcare Program website. According to AHRQ officials, the agency determines which specific mechanisms will be used to disseminate CER results by considering the unique characteristics of the research, such as its type, potential impact, and the stakeholder groups most likely to use its findings. For example, CER identified as being of particular interest to specific specialties may be disseminated to certain clinical professional associations. Tools to organize and disseminate CER. AHRQ’s marketing plans include various informational tools to disseminate CER. Informational tools include (1) patient decision aids that walk patients through options and choices that patients should consider in working with their clinicians to make informed health care decisions; (2) continuing education and medical education modules to help clinicians understand and use CER findings; (3) slide sets to assist clinicians, researchers, and other health professionals with education and training needs; and (4) short, plain-language research summaries that communicate research findings to clinicians, consumers, caregivers, and policymakers. 
For example, the marketing plan for the systematic review titled Childhood Exposure to Trauma: Comparative Effectiveness of Interventions Addressing Maltreatment, which examines evidence about interventions for maltreated children, included the specific informational tools to be used to disseminate the project’s findings, such as research summaries for clinicians, a summary of treatments for parents and caregivers, a continuing education module for health care providers, and a slide presentation on the topic. Publicly available database. Rather than building a new database containing all CER that has been conducted, AHRQ officials told us they plan to develop a website that lists and provides links to existing research databases that include CER. National Library of Medicine officials told us that they have informally consulted with AHRQ on its plans and agree with this approach. In November 2014, AHRQ officials told us that they were sharing their planned approach with senior HHS officials for review and approval. Feedback and evaluation process. As required by PPACA, AHRQ officials told us they receive feedback on dissemination efforts and materials from stakeholders, both formally and informally. For example, officials said that for some of their projects, AHRQ convenes focus groups and advisory panels to assess the needs of stakeholder groups and determine how best to disseminate materials. Some stakeholders we spoke to told us that they have provided feedback to AHRQ on materials the agency has disseminated; however, they were uncertain about the extent to which their feedback was incorporated into AHRQ’s dissemination efforts. AHRQ conducted a feedback assessment and issued a March 2012 feedback report that highlighted stakeholders’ perspectives on the agency’s disseminated materials. In this report, AHRQ noted that although there is growing awareness of its disseminated materials, clinicians raised concerns about the timeliness of the information included in the materials, among other things. 
Officials told us that the agency may conduct future feedback assessments, but they do not know when these will occur and which targeted stakeholder groups will be included. AHRQ also has funded an evaluation to assess its CER dissemination activities and materials supported by the Recovery Act. In September 2013, IMPAQ International—the contractor that conducted the evaluation—issued presentation slides as its final report. The evaluation indicated that stakeholders’ exposure to AHRQ’s CER information, such as the number of website visits and dissemination materials requested, increased over time with AHRQ’s dissemination efforts. The final report also included feedback from certain stakeholder groups through focus groups and surveys. For example, clinicians who participated in focus groups indicated that they typically had little to no experience with the CER information that AHRQ disseminates to clinicians, and suggested that AHRQ more visibly promote the benefits and credibility of this information and then integrate the results and products into existing, easy-to-access sources of medical information focused on point-of-care decision-making. AHRQ officials told us that they plan to award a contract to evaluate the CER dissemination mechanisms—along with the materials they use to share CER findings—that they continued under PPACA. This evaluation project, according to officials, is under development as staff and senior leadership determine the objectives and methods for the study. Although AHRQ staff have not documented their plans as of November 2014, they told us that the evaluation is likely to measure progress on process and intermediate outcome goals of dissemination activities—similar to the last CER evaluation conducted for Recovery Act investments where the agency assessed the level of awareness, understanding, use, and perceived benefits of CER. Officials said the evaluation will also address longer term goals, such as improving health care practice. 
AHRQ has not taken other actions to help it fully address requirements for disseminating CER in PPACA. Specifically, AHRQ has not taken actions to help it fully address (1) the time frames for disseminating CER, (2) how it will disseminate to all targeted stakeholder groups, (3) its implementation plans for the publicly available database, and (4) how it will coordinate with NIH. Time frames for certain aspects of the dissemination process have not been identified and documented. Although AHRQ has outlined its dissemination process in various documents, it has not clearly identified and documented time frames for one of its key dissemination activities—to implement marketing plans and distribute associated informational tools. According to GAO’s Standards for Internal Control in the Federal Government, significant events need to be clearly documented to ensure management goals are carried out. AHRQ has several documents which together describe the key activities of its dissemination process, including the steps the agency takes to identify key CER findings from systematic reviews, draft and finalize its marketing plans, and distribute its informational tools to the public. While certain AHRQ documents highlight time frames associated with key dissemination activities, we did not identify any documents that specify time frames for when the marketing plans are to be implemented and associated informational tools are to be distributed to stakeholder groups. Once the marketing plans are finalized, the informational tools are to be distributed to targeted stakeholder groups after results of the research have been made public, for example, through publication in a major journal. 
AHRQ officials said they would expect to distribute the informational tools as soon as the results of the research have been posted; however, the dissemination guidance materials we reviewed did not specify time frames for completing the implementation of the marketing plans and the distribution of informational tools. Without identifying and documenting time frames for these key activities, AHRQ cannot ensure that CER findings are disseminated in a timely manner or that the dissemination process is consistently implemented by all parties. Setting time frames is especially important for dissemination given the length of time and uncertainty inherent in applying CER findings; the large volume of CER research expected from PCORI in the near future, which will increase AHRQ’s dissemination responsibilities; and the need to maximize the investment of PCORTF appropriations made through fiscal year 2019. Dissemination plan for some stakeholders identified in PPACA has not been clearly defined. Additionally, AHRQ has not determined how it will disseminate information to certain stakeholder groups identified in law, and its dissemination to some of these groups has been limited. While AHRQ’s marketing plans include informational tools aimed at most of the targeted stakeholder groups—physicians, health care providers, patients, and appropriate professional associations—federal and private health plans and vendors of health information technology focused on clinical decision support are not included. Without a defined plan for dissemination to all of the targeted stakeholder groups, AHRQ may be missing opportunities to reach the key stakeholder groups identified in the law. Although as of October 2014 there were no specific marketing plans that identified private or federal health plans to receive disseminated CER information, AHRQ officials told us they have conducted outreach to these groups. 
For example, we spoke to a representative at a private health plan who confirmed receipt and use of AHRQ-disseminated CER materials. For federal health plans, AHRQ officials said that they worked with the Office of Personnel Management, which manages the Federal Employees Health Benefits Program, and this program encouraged health plans to use an AHRQ report on the comparative effectiveness of autism treatments when determining coverage decisions. AHRQ officials noted that some health plans told them that CER information without a corresponding cost analysis is insufficient for informing coverage decisions. Officials also told us that AHRQ found challenges translating CER findings into clinical decision support applications; plans are underway to determine next steps. Implementation plans for addressing the requirement to create a publicly available database have not been documented. As of November 2014, AHRQ officials also have not developed and documented a specific implementation plan to create a publicly available database for CER. GAO’s Standards for Internal Control in the Federal Government state that management should compare actual performance to plans and, as previously noted, should document significant events. The agency formerly acknowledged its plan to address the PPACA requirement to build a publicly available database during our prior work in 2012, but AHRQ has since modified this plan; the new plan to use existing databases has not been documented and is in the process of being fully vetted with senior leadership. Additionally, while AHRQ officials told us that their instructions on how to search databases for CER will be aimed at the general public, they have not yet determined how effective these tactics will be in meeting the needs of various user groups, such as non-researchers who may be unfamiliar with research databases. 
For example, officials have not determined if or how they may seek feedback from potential users or test the instructions or search terms to see if they meet potential users’ needs. Additionally, AHRQ officials told us they have not determined how to address potential limitations with this new approach. Without taking steps to develop and document an implementation approach that includes time frames and strategies to address potential limitations and AHRQ’s plans to assess whether its tactics meet the needs of various users, the agency does not have reasonable assurance that it will implement the PPACA requirement in a timely or effective manner. but AHRQ has since modified this plan, and the new plan to use NIH’s consultation role regarding AHRQ’s dissemination efforts is unclear. AHRQ is required by law to consult with NIH regarding dissemination efforts, and agency officials told us they meet informally with NIH staff. NIH officials concurred. AHRQ officials said that they have had interactions with NIH on specific dissemination projects of interest to specific NIH institutes or centers, such as the National Cancer Institute. AHRQ and NIH have not determined what role NIH should take in the dissemination process, or which NIH officials should be involved. Previous GAO work has identified key practices that can help federal agencies collaborate effectively when they work together to achieve goals. This work highlighted, for example, the importance of agreeing on roles and responsibilities and establishing compatible policies, procedures, and other means to operate across organizational boundaries. While coordination between the two entities has been informal and limited to specific NIH institutes or centers at this time, AHRQ officials told us that there is a designated AHRQ official that serves as a liaison to NIH to work on this effort. 
Additionally, AHRQ officials told us that the agency's senior management is currently working with NIH to determine how best to more formally coordinate on AHRQ's dissemination activities, but the officials did not state when this effort will be complete. Without specific plans on how it will collaborate, AHRQ lacks reasonable assurance that it has buy-in from NIH regarding dissemination activities or that the two agencies' independent efforts are not unnecessarily duplicative. As required by PPACA, AHRQ has implemented a training program aimed at individual researchers and academic institutions that is designed to increase the supply and expertise of CER investigators. Through this program, AHRQ awards grants to support graduate training on CER, career enhancement of beginning and midcareer investigators who utilize CER methods, and institutional CER teaching programs. (See table 2.) AHRQ provides grants to individuals it selects and also to institutions that can select a number of individuals to train on CER. During the planning stages for AHRQ's training program, AHRQ officials told us they consulted with NIH staff members with expertise on the design and management of training grants. An AHRQ official told us that funding will continue through 2018 for the existing training grants awarded to date. However, because AHRQ's allocation from the PCORTF is scheduled to end in 2019, AHRQ officials told us that they do not expect to create or initiate additional individual grants. Additionally, AHRQ does not expect to issue additional funding announcements for the institutional grants, since these grants are on a 5-year cycle, with current grants running through 2018. For any grant on a 2-year cycle, new awards will likely be made, but only through 2018.
In order to monitor the various training grant awards funded since 2012, AHRQ collects progress reports from training grantees on an annual basis. AHRQ officials told us that participants learn about CER methods and apply what they learn to conduct research projects as part of their training. AHRQ requires that grantees annually submit progress reports to assess their performance on these activities. These reports include performance information, such as (1) a description of career development and research-related activities undertaken; (2) a list of accomplishments including publications, scientific presentations, dissemination activities, new collaborations, inventions, or project-generated resources made; (3) any methodological changes implemented; (4) key preliminary findings from research; and (5) an annual evaluation statement of the award recipient’s progress by the mentor. AHRQ officials told us that they are considering an interim evaluation of the training grant program for fiscal year 2016 and an overall evaluation after the program is complete in fiscal year 2019. Officials stated that they expect to document specific details about their plans before the evaluations occur, which would be consistent with findings in our prior work that a plan for data collection and evaluation is a key attribute of effective training and development programs and can guide an agency in a systematic approach to assessing effectiveness and efficiency. AHRQ officials emphasized that the training program is ongoing and grantees are not yet expected to have outcomes. For these evaluations, they have collected baseline data from progress reports and they plan to collect additional data once the grant program ends to help inform their evaluations, such as a recipient’s promotion and tenure status to measure academic progress. ASPE has coordinated among various agencies to fund projects intended to build data capacity for CER. 
However, its approach to building data capacity for CER lacks key elements, such as defined objectives, milestones, and time frames, that are necessary to ensure effectiveness. ASPE officials have coordinated and funded projects that they say will help build data capacity for CER. According to ASPE officials, building CER data capacity involves improving data infrastructure, such as facilitating the creation of new health data sets or the sharing of existing health data via the creation of needed standards, services, policies, federal data, and governance structures. ASPE officials say the agency intends these projects to enable interoperable data networks that could support the efficient collection, linkage, and analysis of data for CER from multiple sources. ASPE officials told us that the agency’s goal is to identify a number of investment opportunities through fiscal year 2019 for enabling the development of a CER data infrastructure using funds from the PCORTF. Beginning in fiscal year 2013, ASPE officials worked with the Office of the National Coordinator for Health Information Technology (ONC) to develop a strategic road map to guide both the identification and selection of ASPE’s PCORTF projects beginning in fiscal year 2014 through fiscal year 2019. The strategic framework for the road map, completed in January 2014, specified five component types—standards, services, policies, federal data, and governance structures—necessary to build CER data capacity. As of October 2014, ASPE has funded a total of 10 projects. (See appendix II for descriptions and funding amounts for the 10 ASPE projects.) ASPE has obligated about $23 million of the total estimated $190 million it expects to receive through FY 2019 from the PCORTF. Prior to the development of the road map, ASPE worked with HHS’s Leadership Council, responsible for overseeing ASPE’s PCORTF investment process, to identify and fund new projects that utilized the expertise of an HHS agency. 
Some projects extended the work of existing Recovery Act projects, with the initial projects beginning in 2011. These projects focused on developing new or enhancing existing data resources, such as expanding administrative and clinical data sets for CER and establishing health information technology standards to leverage electronic health records for CER. For example, ASPE funded a new project conducted by ONC known as the Structured Data Capture initiative. For this project, ONC identifies standards for common data elements that consist of structured data definitions and electronic case report forms, to capture patient data from electronic health records for CER studies. ASPE’s approach to building data capacity for CER through investments in data infrastructure lacks key elements necessary to ensure its effectiveness. Specifically, ASPE updated the strategic framework for the road map in February 2014, but did not define specific objectives linked with performance metrics or establish milestones and time frames that could be used to gauge its progress toward the goal of coordinating relevant federal health programs to build data capacity, as required by PPACA. Without these key elements, ASPE may be unable to gauge its progress towards meeting the requirements of the law. Standard practices for project management call for agencies to conceptualize, define, and document specific goals and objectives in the planning process, along with the appropriate steps, milestones, time frames, and resources needed to achieve those results. Although the updated February 2014 strategic framework for the road map highlighted a purpose—to identify a set of investment opportunities for developing CER data infrastructure to build CER data capacity—and included guiding principles and objectives, it did not clearly define those objectives, nor did it include other elements such as milestones or time frames that would help allow for monitoring and reporting on progress. 
Specifically, ASPE identified several guiding principles, such as ensuring that data infrastructure projects are "non-duplicative of other related federal and non-federal investments" and "achieve synergy with PCORI and AHRQ." It also included priority objectives, such as further enabling the collection of standardized clinical data, but many of the objectives were broad and not clearly defined—and did not specify milestones or time frames—as would be consistent with effective project management. Although ASPE identified and considered related, ongoing federal and non-federal data infrastructure investments in an attempt to identify needs or gaps, opportunities where contributions could be made, and ways to avoid duplication, its strategic road map was unclear on the timing and level of coordination necessary for its investments to work together with existing projects—such as PCORI's PCORnet initiative—to improve data capacity. For example, ASPE officials were not clear on precisely how the standards for common data elements resulting from the ONC Structured Data Capture initiative could be incorporated into PCORnet or other existing publicly funded data networks, although ASPE does plan to make them available for use, and officials told us that they will work with other HHS agencies and PCORI to determine adoption strategies. Furthermore, the ONC Structured Data Capture initiative is not expected to be completed until 2016, after PCORI's common data model for the PCORnet initiative is expected to be in use for conducting research, beginning in September 2015. Having more clearly defined objectives and establishing milestones and time frames can also help ASPE assess how it expects the results of its CER investments to build data capacity, and how they will be coordinated in a timeline with many other entities' existing and planned efforts.
Moreover, this information can help ASPE officials understand the extent to which their efforts are not duplicative and align with other federal efforts. ASPE officials told us that as of October 2014, they are planning to award a contract for developing an evaluation framework that will be used to assess the effectiveness of their CER data infrastructure projects. They also told us that they monitor and assess the 10 individual projects by collecting quarterly reports and assessing progress against the statements of work that were developed for each project. However, it is unclear from ASPE’s strategic road map whether these efforts will be sufficiently timely and coordinated with other federal and non-federal efforts to result in improvements to CER data capacity. Comparative clinical effectiveness research can give health care providers information to help decide which treatments may be most beneficial for a given patient, and it also can inform decisions by patients and caregivers. However, this information is often incomplete or unavailable. While HHS has multiple, ongoing efforts to meet its requirements under PPACA related to CER, it has not determined how it will fully address some of these requirements, particularly those related to dissemination and data capacity building. Disseminating CER in a timely manner is particularly challenging given the length of time and uncertainty inherent in applying research findings to help improve health care practice. AHRQ, for instance, has taken steps to disseminate CER and documented these processes, including time frames for some, but not all, of its key dissemination activities. Such time frames may become especially important due to the large volume of CER research expected from PCORI in the near future, which will increase AHRQ’s dissemination responsibilities, and the need to maximize the investment of PCORTF appropriations made through fiscal year 2019. 
Additionally, effective dissemination of research findings involves multiple stakeholders, some of which are specified in PPACA. Without clear plans to target each of these stakeholder groups, including federal and private health plans and vendors of health information technology focused on clinical decision support, it is unclear whether pertinent CER findings are being directed to key targeted stakeholders identified in PPACA and presented in a meaningful way to those groups. Other aspects of AHRQ’s dissemination process, such as its plans for a publicly available database of CER—including whether AHRQ’s instructions and CER search terms will be effective to meet the needs of various potential users in the general public—and its collaboration with NIH on dissemination activities, have not been fully defined. HHS’s plan to build data capacity involves identifying projects that would enhance existing data resources for CER. While HHS has a strategic road map with information on projects that it is funding to build the capacity for CER data, the road map does not include key elements, such as clearly defined objectives, milestones, and time frames needed to assess the agency’s progress toward the goal of building data capacity for CER, as would be consistent with practices for effective project management. Without defining these key elements, for example, it is unclear to what extent ASPE’s projects will build on or contribute to other similar federal or non-federal activities, rather than being duplicative. ASPE officials, for instance, could use more defined objectives and time frames to help them better assess the extent to which the CER projects they choose to fund will be useful and timely for other relevant federal and non-federal work, such as PCORI’s PCORnet initiative. 
To help ensure that HHS fully addresses its dissemination requirements under PPACA, we recommend that the Secretary of Health and Human Services direct AHRQ to take the following four actions:
1. identify and document time frames for the implementation and distribution of marketing plans and informational tools;
2. expand dissemination efforts to federal and private health plans and vendors of health information technology focused on clinical decision support;
3. document and complete plans to develop a publicly available database, including plans to meet the needs of various potential users in the general public; and
4. develop specific plans on how it will collaborate with NIH on its dissemination activities.
In addition, to ensure that HHS fully addresses the PPACA requirements to build data capacity for CER, the Secretary should direct ASPE to include clearly defined objectives, milestones, and time frames, or other indicators of performance, in the strategic road map that is used to identify its PCORTF projects. We provided a draft of this report to HHS, and HHS provided written comments, which are reprinted in appendix III. HHS concurred with all five of our recommendations and provided additional information about its work to build data capacity for CER. Additionally, HHS provided technical comments, which we incorporated as appropriate. Specifically, for the first four recommendations, HHS—including AHRQ—stated that it would:
- ensure that starting and ending time frames for the implementation and distribution of patient-centered outcomes research findings are clearly specified and documented;
- continue and expand dissemination activities that target federal and private health insurance plans, as well as vendors of health information technology focused on clinical decision support (HHS stated that it recently issued a funding opportunity announcement focused on the use of clinical decision support to disseminate and implement patient-centered outcomes research findings);
- document and complete its plans to ensure that multiple potential users, including the general public, have access to patient-centered outcomes research studies and their findings (as noted in our findings, these plans include creating a web page to list and provide users with links to existing publicly available databases that could be used to search for these studies; complete plans would include time frames, strategies to address potential limitations, and whether the needs of various users are being met); and
- continue to collaborate with NIH institutes and centers, and develop and document specific collaborations around patient-centered outcomes research dissemination activities (HHS stated that AHRQ has begun regular meetings with NIH—through its Office of Science Policy and the NIH Deputy Director for Science, Outreach, and Policy—to discuss how NIH's and AHRQ's activities can best complement one another).
Regarding our last recommendation, HHS stated that it intends, through ASPE, to further develop the road map by specifying milestones with corresponding time frames. HHS will also develop specific performance indicators for its portfolio of data capacity investments. Consistent with our findings and conclusions, HHS's comments also stated that its data capacity investments need to coincide with other key HHS policy initiatives and be responsive to the needs of CER data networks, including PCORI's PCORnet. We are sending copies of this report to the Secretary of Health and Human Services, the Director of AHRQ, the Assistant Secretary for Planning and Evaluation, and other interested parties. In addition, the report is also available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. According to the Agency for Healthcare Research and Quality (AHRQ), it conducted 74 systematic reviews—syntheses of existing research—that were related to comparative clinical effectiveness research (CER) and resulted in findings disseminated between June 2012 and June 2014. Table 3 lists each systematic review with dates for each processing step leading up to posting the results of the review on AHRQ's website for the public. Based on GAO's analysis of the 74 systematic reviews, the time frame from when a systematic review began to when the findings were disseminated, including posting via AHRQ's website, ranged from 1 year to more than 4 years.

Appendix II: ASPE's Comparative Effectiveness Infrastructure Projects (FY 2011-2014)

Amount obligated (dollars in millions): $1.7
- a centralized inventory of CER studies to serve as the foundation for a publicly accessible database of current publicly and privately funded CER projects, and related published policy and scientific literature;
- algorithms to accurately identify and classify CER studies;
- an improved web-based tool to provide a better understanding of the landscape of current CER activity to users; and
- a mechanism and plan to pilot test the tool prior to making it publicly accessible.
- Enhancements to the existing database will combine claims data from public and private sources, matching patient information as appropriate, as is necessary for cross-payer and longitudinal analysis.
- Pursue options to test the value of secure distributed data networks for research applications like CER.
- Ongoing pre-existing project to support CER through a research database that provides researchers with Medicare and Medicaid beneficiary claims and assessment data linked by beneficiary across the continuum of care. Funded enhancements include expanding the amount of Medicaid data available and security enhancements.
- Collaboration between the Office of the National Coordinator for Health Information Technology and the National Library of Medicine to integrate clinical information and research information within a "template" that can be utilized by researchers.
- Expand the amount of data collected by a nationwide network of 19 community health centers and five research organizations in 10 states, which together collect CER-related data about patients in underserved communities.
- Build upon previous Centers for Disease Control and Prevention efforts by augmenting a publicly available dataset for CER with additional longitudinal follow-up data on disease recurrence and vital status for colon, rectum, and breast cancer cases.
- Enhance software tools and methodology for management and consolidation of electronic data reported on a real-time basis from electronic health records to registries.
- Identify concrete, strategic opportunities to contribute long term to building data infrastructure for CER, and help maximize the impact of the Patient-Centered Outcomes Research Trust Fund investments. Assess the current landscape of data infrastructure for CER, and identify gaps and opportunities.
- Develop, select, and validate standards for common data elements for use in CER and a template to collect data from electronic health records for research purposes.
- Allow providers to access data in their own electronic health records in a standardized way to support CER.
- Allow researchers outside of the organization who have remote access authorization to access an organization's electronic health record data for the purpose of CER.
These projects have two initiatives under the same project description. In addition to the contact named above, Will Simerl, Assistant Director; Jennie Apter; La Sherri Bush; Christine Davis; Ashley Dixon; Colbie Holderness; Andrea Richardson; and Jennifer Whitworth made key contributions to this report.
PPACA imposed new requirements on HHS related to CER—research that evaluates and compares health outcomes and the clinical effectiveness, risks, and benefits of two or more medical treatments or services. Among other things, PPACA required AHRQ to broadly disseminate findings from federally funded CER and the Secretary of HHS (who, by delegation, charged ASPE) to coordinate federal programs to build data capacity for CER. PPACA also mandated that GAO review HHS's CER activities. This report examines (1) AHRQ's activities to disseminate the results of federally funded CER and (2) ASPE's activities to coordinate federal programs to support CER by building the capacity to collect, link, and analyze data, among other objectives. GAO reviewed relevant legal requirements and HHS documentation; interviewed HHS officials; and obtained information from five stakeholder groups that AHRQ targeted to receive disseminated information or were otherwise involved in AHRQ's dissemination efforts. The Agency for Healthcare Research and Quality (AHRQ), an agency within the Department of Health and Human Services (HHS), has taken some steps to disseminate comparative clinical effectiveness research (CER), as required under the Patient Protection and Affordable Care Act (PPACA), but has not taken other actions to help it fully address its dissemination requirements. The steps it has taken include the creation of tools that organize and disseminate research findings to certain targeted stakeholder groups and the development of plans for a publicly available database that includes CER. For example, AHRQ's marketing plans—customized plans to help convey key messages about AHRQ's research—include various informational tools to disseminate CER, such as research summaries that communicate research findings to clinicians, consumers, caregivers, and policymakers. 
However, the agency has not clearly defined how to disseminate information to certain stakeholder groups specified in the law, nor has it identified and documented time frames to implement the marketing plans and distribute the associated informational tools, as would be consistent with federal internal control standards, which state that significant events need to be clearly documented to ensure management goals are carried out. Additionally, in order to implement PPACA's requirement for developing a publicly available database that contains CER evidence, AHRQ officials told GAO that they plan to create a web page to list and provide users with links to existing publicly available databases that could be used to search for CER, but they have not documented a specific implementation plan that includes time frames and strategies to address known potential limitations, such as difficulties that certain users may face in searching the databases for CER results. HHS's Assistant Secretary for Planning and Evaluation (ASPE) has coordinated among various agencies to fund projects intended to build data capacity for CER, but its approach lacks key elements needed to ensure its effectiveness. For example, these projects include an effort to better standardize data that could be used in multiple research projects. However, HHS's approach to building data capacity for CER lacks key elements, such as defined objectives, milestones, and time frames, that are necessary to ensure effectiveness. ASPE officials worked with the Office of the National Coordinator for Health Information Technology to develop a strategic road map to guide both the identification and selection of ASPE's projects beginning in fiscal year 2014 through fiscal year 2019. Although the February 2014 strategic framework for the road map highlighted several priority objectives, such as enabling the collection of standardized clinical data, these objectives were broad and not clearly defined. 
For example, although ASPE identified and considered related, ongoing federal and non-federal data infrastructure projects in an attempt to identify needs or gaps, among other things, its strategic road map is unclear on the timing and level of coordination that would be necessary for its projects to work together with these related projects to improve data capacity. Standard practices for project management call for agencies to conceptualize, define, and document specific goals and objectives in the planning process, along with the appropriate steps, milestones, time frames, and resources needed to achieve those results. GAO recommends that HHS direct (1) AHRQ to take several actions related to its dissemination efforts, including identifying and documenting time frames for the implementation and distribution of marketing plans and informational tools, and (2) ASPE to include clearly defined objectives, milestones, and time frames, or other indicators of performance, in its strategic road map used to identify its CER-funded projects. HHS concurred with the recommendations.
Since the collapse of communism in Central and Eastern Europe, Poland has undertaken some of the most dramatic economic reforms in the region. Donors have actively encouraged Poland in its efforts to make the transition from a communist-led, centrally planned economy to a free-market economy and a democratic political system. The United States has supported Poland’s transition both financially and diplomatically. The major industrial countries and the international financial institutions had committed about $36 billion in assistance to Poland from 1990 through December 1994. These commitments consisted of emergency, humanitarian, infrastructure, and economic transformation assistance; debt forgiveness; private sector investment; export credits; and investment guaranties. The Group of 24 (G-24) countries committed approximately $26.8 billion in bilateral assistance to Poland and the International Monetary Fund (IMF), the World Bank, and the European Bank for Reconstruction and Development (EBRD) committed about $8.9 billion. (See tables 1.1 and 1.2.) The G-24 countries designated the European Commission, the executive arm of the European Union (EU), as the coordinator of these assistance activities. However, the European Commission acts primarily as a clearinghouse for information on G-24 bilateral assistance to the region rather than as a coordinator. One of the Commission’s functions is generating the G-24 Scoreboard of Assistance Commitments to the Central and Eastern European Countries, a listing of donor assistance pledged to the region by G-24 countries. According to an EU official, the main function of the Commission’s delegation in Poland has been to arrange donor meetings. Donor coordination is generally handled by the Polish government’s Council of Ministers’ Foreign Aid Office. However, donors often bypass this office and deal directly with the relevant ministries, or rely on organizations outside the government of Poland to implement their programs. 
For example, most U.S. assistance programs have been implemented either directly with the private sector recipients or through contractors and nongovernmental organizations with little direct involvement on the part of the government of Poland. The Support for Eastern European Democracy (SEED) Act of 1989 (P.L. 101-179) authorized funding for Poland and other countries in Central and Eastern Europe for fiscal years 1990 through 1992. Since 1993, obligations for programs in the region have been funded under both the SEED Act and the Foreign Assistance Act of 1961, as amended (P.L. 87-195). The United States had obligated about $719 million in assistance as of September 1994 to help Poland’s transformation to a democracy and a market-oriented economy; the United States has also provided about $700 million in Overseas Private Investment Corporation financing and insurance for U.S. businesses to facilitate their investment in Poland, $355 million in Eximbank loan guarantees and investment credits, and about $2.4 billion in official debt forgiveness. Poland was one of the first countries of Central and Eastern Europe to receive U.S. assistance because it took the lead in the transformation from communism to democracy and a market-oriented economy. Poland has received the largest share of U.S. assistance in the region. This assistance was initially expected to be necessary only for a transition period of about 5 years starting in 1990; however, the U.S. Agency for International Development (USAID) representative in Warsaw now believes that Poland will probably need assistance for at least the next 5 years or until the country is closer to economic integration with the EU. Pursuant to the SEED Act, the Deputy Secretary of State was designated as the Coordinator of U.S. Assistance to Central and Eastern Europe in 1990. The Coordinator was assisted by special advisors from the Department of the Treasury, the Council of Economic Advisors, and USAID. 
In 1993, the Coordinator’s office was placed within State’s Bureau for European Affairs. The U.S. assistance program in Central and Eastern Europe was initially designed with a regional rather than country-specific approach and was centrally managed in Washington, D.C., with limited authority delegated to U.S. personnel in-country. However, this approach changed in 1993 as USAID devolved many of the management responsibilities to the field at the direction of Congress. The USAID/Poland representative said that he now has an understanding with USAID/Washington that no projects will be initiated in Washington without the field office’s concurrence. The USAID representative also said that he has requested control over all contracts and work orders, indicating that this oversight and control was necessary to coordinate and develop strategic plans for future work in Poland. As shown in figure 1.1, the majority of U.S. assistance to Poland has been devoted to economic restructuring and assisting in Poland’s transformation to a market-based economy. The remainder of the funds have been obligated to support democratic initiatives and quality of life issues. Democratic initiatives projects included training for parliamentary and local government officials and grants to support the small and independent press media. Quality of life projects included technical assistance for the Polish public and private housing sector, a model unemployment benefit payment office, and technical assistance to help improve public sector environmental services. The Polish Stabilization Fund and the Polish-American Enterprise Fund account for the majority of funds obligated under the Economic Restructuring Program for Poland. (See fig. 1.2.) Under the SEED Act, the United States provided a $199-million contribution to the multi-donor $1 billion Polish Stabilization Fund. 
The fund was established to (1) support a relatively fixed exchange rate for the zloty (Poland’s currency) after a sharp devaluation and (2) help ensure that the zloty would be convertible for current account transactions; that is, to allow residents to freely purchase currency through authorized foreign exchange banks. These objectives have been accomplished, and the United States has authorized Poland to use the $199 million held in reserves to recapitalize and privatize the Polish state-owned banks. The SEED Act also authorized the Polish-American Enterprise Fund as a private corporation with maximum flexibility in implementing the fund’s investment policies. As of September 1994, about $250 million had been obligated and the fund had disbursed about $227 million. The fund primarily makes loans to, or invests in, small- and medium-sized businesses in which other financial institutions are reluctant to invest. The objectives of this review were to (1) assess the status and progress of Poland’s economic restructuring in the key areas of macroeconomic stabilization, foreign trade and investment, privatization, and banking, (2) describe impediments to these restructuring efforts, (3) discuss the role donors have played in the transformation process, and (4) identify lessons learned that could be useful to other transition countries. To address these issues, we interviewed officials of the Departments of State and the Treasury and USAID in Washington, the Economic Commission for Europe (ECE) in Geneva, the EU in Brussels, the Organization for Economic Cooperation and Development (OECD) in Paris, and the EBRD in London. We also met with officials at the British Know How Fund as well as Central and Eastern European experts at the London School of Economics and other organizations. In Warsaw, we met with U.S. embassy officials, USAID representatives, U.N. officials, IMF and World Bank officials, EU officials, and officials of the British and Japanese embassies. 
We also met with officials from the Polish government, representatives of the Polish-American Enterprise Fund, representatives of private sector promotional organizations, and managers from U.S. and German companies doing business in Poland. We reviewed pertinent U.S., host, and donor-government documents, as well as reports and studies by international organizations, academia, and private sector groups. We also used information from PlanEcon, Inc., an economic consulting group specializing in Central and Eastern Europe and the former Soviet Union, and information from the Warsaw Economic Research Institute, a policy institute at the Warsaw School of Economics. To describe factors hindering Polish exports, we relied heavily on the reports and studies of international organizations as well as the opinions of Polish and international organization officials. The data presented in the tables and figures of this report were obtained from a number of different sources. These data should be interpreted and used with caution since the quality of the data could not be verified in some cases. We performed our review from January 1994 through May 1995 in accordance with generally accepted government auditing standards. The foundation for Poland’s current economic recovery and continued restructuring was the major stabilization and macroeconomic reform efforts, referred to by some as “shock therapy” or “the big bang approach,” which began in late 1989 and early 1990. The Polish government took a wide range of actions to encourage stabilization, including tightening fiscal and monetary policy, liberalizing prices, devaluing the currency, and controlling the growth of debt. Western donors provided important support for such reforms and the United States played a key role in initiating these forms of assistance. Poland’s economy is now experiencing healthy growth. 
In October 1989, the Polish government began implementing macroeconomic stabilization and liberalization measures, and accelerated the reform movement in January 1990. Subsidies to industry and households, for example, food subsidies, were sharply cut. Public investment spending was substantially reduced. Money growth was tightly controlled; the zloty was sharply devalued and made convertible. Wage growth was controlled with an excess wage tax designed to limit the rate of increase in the wage bills of state enterprises. Prices were liberalized, bringing about a one-time jump in the price level corresponding to the reduction in the real value of the zloty. Additional liberalization measures included the establishment of a free-trade regime and liberalization of legal requirements for setting up private enterprises. Together, these efforts gave Poland the basic operating features of a market economy and were widely considered to be essential first steps toward overall economic restructuring. The stabilization measures decreased inflationary pressures, lowered government expenditures, and improved the balance of payments. However, these measures also contributed to declines in economic output and corresponding growth in unemployment. The liberalization measures freed most of the domestic price system, allowed for corrections in the relative prices of goods still under state control, removed the state from large-scale detailed direction of the economy, and provided an environment conducive to the growth of a new private sector. Some benefits resulted from important linkages between specific measures. For example, Poland’s liberalization of trade subjected the state sector to foreign competition. Such competition provided international relative prices that the previous monopolistic Polish firms would not have offered, thus enabling the government to liberalize prices. 
Western support for early Polish stabilization measures is cited by Polish and donor officials as among the most significant assistance provided to Poland. For example, the Director of Poland’s Bureau for Foreign Assistance asserted that some of the most important assistance efforts to date involved donor support for early Polish macroeconomic stabilization actions in the form of the stabilization fund, balance of payments support, and debt restructuring and forgiveness. The IMF’s senior resident representative in Poland said that these forms of assistance were timely and critical to Polish macroeconomic stabilization efforts. The United States took the initiative in 1989 to mobilize $1 billion from the international community for a Polish Stabilization Fund to (1) support a relatively fixed exchange rate for the zloty after sharp devaluation and (2) help ensure that the zloty was convertible for current account transactions by creating additional foreign exchange reserves. Poland’s foreign exchange reserves were further bolstered by a $700 million standby arrangement with the IMF. This balance of payments support helped allow Polish authorities to introduce in January 1990 a convertible and stable exchange rate, and the additional backing for the zloty made defense of the currency more credible. Another important form of early assistance to Poland was temporary cash flow relief from external indebtedness. To increase the chances of successful stabilization, some believed it was important that debt service payments be minimized in the early stages of transition. Poland’s external debt in convertible currencies at the end of 1990 was about $44 billion. An estimated $33 billion was owed to official bilateral creditors, referred to as the Paris Club, and $10.7 billion, including $1.2 billion of short-term revolving credit, was owed to Western commercial banks, known as the London Club. 
Poland’s gross debt service in 1990 was about $9 billion, or about 80 percent of its convertible currency merchandise export earnings. In March 1991, under U.S. leadership, the members of the Paris Club agreed to forgive 50 percent of Poland’s official debt. In the first stage, which was contingent on Poland’s signing an agreement with the IMF to restructure its economy, the official debt was reduced by 30 percent. In the second stage, which was contingent upon Poland’s fulfillment of the terms of the IMF agreement, an additional 20-percent reduction was authorized in April 1994. As part of the initial 30-percent reduction, annual interest payments during the first 3 years were reduced by 80 percent. Principal payments were also limited to less than $600 million annually. For its part in the agreement, the United States agreed to forgive 70 percent of its bilateral debt with Poland, 50 percent in the first stage, and 20 percent in the second, which reduced Polish debt to the U.S. government from $3.4 billion to about $1 billion. Under the Paris Club agreement, Poland also committed to seeking from the London Club of commercial banks a debt reorganization on terms comparable to the Paris Club, allowing Poland to cease servicing this debt in the interim. After suffering substantial declines in gross domestic product (GDP) during the first 2 years of transition, Poland now leads post-communist Europe in economic growth. According to PlanEcon, while Poland has made considerable progress in reducing inflation from the high levels that existed when reforms began in 1989, the country’s projected rate of inflation for 1994 remained relatively high at 31 percent. Poland’s unemployment rate was projected to gradually decline in 1994 to a level of 15.9 percent by the end of the year. However, the country’s official GDP grew by an estimated 5 percent in 1994 and is projected to grow by another 6 percent in 1995. 
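The two-stage debt-reduction arithmetic above can be checked with a short calculation (a sketch using only the figures stated in this report; both stage percentages are applied to the original $3.4 billion U.S. bilateral claim):

```python
# Two-stage Paris Club reduction of Poland's bilateral debt to the
# United States, as described above: 50 percent in the first stage and a
# further 20 percent of the original amount in the second (70 percent total).
original_debt_bn = 3.4  # U.S. bilateral claims on Poland, in $ billions

stage_one = 0.50 * original_debt_bn   # first-stage forgiveness
stage_two = 0.20 * original_debt_bn   # second-stage forgiveness
remaining = original_debt_bn - stage_one - stage_two

print(f"Forgiven:  ${stage_one + stage_two:.2f} billion")
print(f"Remaining: ${remaining:.2f} billion")  # about $1 billion, as the report states
```

The result, $1.02 billion, is consistent with the report's rounded figure of "about $1 billion."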
Figure 2.1 shows official and what has been termed “corrected” Polish GDP levels for 1989-95. Although Poland’s official figures indicate that the country’s GDP has not completely recovered from the substantial output declines experienced in the first 2 years of transition, PlanEcon’s “corrected GDP” figures show that the country’s GDP has recovered from these declines and surpassed its pretransition levels. The IMF’s senior resident representative in Poland said that Poland’s early macroeconomic stabilization measures coupled with consistent macroeconomic policy over several years were critical factors in the country’s economic recovery. Trade is widely viewed as a crucial factor in Poland’s economic restructuring. Increased trade with the West, and the EU in particular, is key to Poland’s integration into the world economy, especially since the collapse of trade among Poland’s former Council for Mutual Economic Assistance (CMEA) trading partners. Although Polish exports increased in 1994, Poland continues to run a large trade deficit with the EU. Despite the importance of Poland’s trade with the EU, West European trade barriers continue to hinder Polish exports of certain products to that market, thereby hampering restructuring efforts. Donors have rendered limited assistance to help facilitate Polish exports, and some assistance that has been provided was of questionable usefulness. Foreign investment is considered essential to Poland’s economic restructuring efforts. Although Poland has made progress in removing some impediments to foreign direct investment, many obstacles remain that only the Polish government can correct. Nevertheless, a number of U.S. and foreign companies have recently made significant investments in Poland. Some early U.S. assistance geared toward improving the investment climate lacked focus because of pressure to spend the money quickly, and U.S. programs to support Polish investment promotion have had limited impact. 
In 1990, as part of its transition efforts, Poland liberalized foreign trade regimes. This included eliminating many import restrictions, demonopolizing foreign trade, allowing free access to foreign currency, and establishing convertibility of the zloty. Growth in exports to the West is widely recognized as important to Poland’s continued economic recovery and integration into the world economy. In addition to increased imports resulting from opening up its own markets to Western products, Poland achieved dramatic increases in exports to the West, beginning in 1990. As Poland entered the initial stages of reform, exports to the industrialized market economies were essential to preventing even larger declines in output than had already occurred as a result of the collapse of trade with former CMEA countries and the drop in Poland’s internal demand. According to OECD, access to the more stable OECD area markets is vital for Poland’s continued economic growth and political stability. The ECE has reported that increased access to Western markets can also act as a powerful stimulant to foreign investment seeking an eastern base for exporting. Of the OECD area markets, the large and geographically close EU market represents Poland’s most important trade partner. For example, in 1994, about 53 percent of Poland’s exports and 54 percent of its imports consisted of trade with the EU. Though the United States represents a potential market for Polish products, it accounts for only 2 to 4 percent of Poland’s trade. Tables 3.1 and 3.2 describe aggregate trade for selected countries and regions between 1988 and 1994. As indicated in tables 3.1 and 3.2, Poland’s trade with former CMEA partners has declined in importance. 
Although a Central European Free Trade Area (CEFTA) agreement was negotiated among Poland, the Czech Republic, Slovak Republic, and Hungary and went into effect in March 1993, the ECE reported that the significance of the agreement has been downplayed within the CEFTA countries and that few steps have been taken to promote these trade links. OECD officials echoed that sentiment, explaining that while eastern markets could be very important to Poland in the future, Polish companies engaging in restructuring currently do not have enough “margin for error” to emphasize dealings in countries with small markets and little ability to pay for products. Although Polish exports increased in 1994 to over $17 billion, Poland continues to run a large trade deficit with the West, primarily the EU. Poland’s 1993 trade deficit of $7.8 billion was the largest in its history, and $4.2 billion of this amount was with the EU (see fig. 3.1). However, in 1994, as a result of slower growth in imports versus that of exports, Poland’s trade deficit narrowed to $6.6 billion, a 14-percent decline compared to 1993. Poland’s 1994 trade deficit with the EU was $3.8 billion, a 10-percent decline compared to 1993. A preferential trade agreement between Poland and the EU is part of the EU-Poland Association Agreement, which became fully effective on February 1, 1994. The trade segments of the accord went into force on March 1, 1992, in the form of an interim agreement, but under the agreement barriers to trade in certain sensitive areas such as textiles are to be removed only over a number of years. Major improvement in Poland’s access to agricultural markets in the EU is not expected soon. Further, Polish officials maintain that the EU continues to limit access to its markets through contingent protective measures such as anti-dumping duties, countervailing duties, and safeguard actions. 
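The year-over-year deficit declines cited above can be reproduced from the rounded dollar figures (a sketch; small discrepancies with the stated percentages reflect the deficits being rounded to the nearest $0.1 billion):

```python
def pct_decline(prev, curr):
    """Percentage decline from the prior-year value to the current-year value."""
    return (prev - curr) / prev * 100

# Poland's trade deficits, in $ billions (figures from the text)
total_1993, total_1994 = 7.8, 6.6  # total trade deficit
eu_1993, eu_1994 = 4.2, 3.8        # deficit with the EU

# ~15% from the rounded figures; the report, working from unrounded data, states 14%
print(f"Total deficit decline: {pct_decline(total_1993, total_1994):.0f}%")
# ~10%, matching the report
print(f"EU deficit decline: {pct_decline(eu_1993, eu_1994):.0f}%")
```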
The Association Agreement is considered a precursor to Poland’s eventual membership in the EU, and a key feature is the gradual elimination of tariffs over a 10-year period, leading to a free trade area between the EU and Poland. The agreement is considered to be asymmetric in that the EU is required to grant immediate duty free access on many goods, while Poland has a longer period of time to grant full reciprocity. The Association Agreement also provides for immediate elimination of quantitative restrictions on many industrial products, with the exception of textiles, coal, and steel, which are accorded special treatment as “sensitive products.” Tariff reductions and phase-out of quantitative restrictions for these sensitive products will take place more gradually. Customs duties levied on exports are also slated for eventual elimination. While the agreement provides for limited trade preference for selected agricultural products over 5 years, in many cases, tariffs and tariff rate quotas will remain in place at the end of the phase-in period, with the agreement merely calling for the parties to consult on the possibility of granting further concessions. According to a report published by the IMF, the EU decided in 1993 to further improve market access for Poland and other CEFTA countries in response to criticism that, under the Association Agreement, the EU was delaying access to those markets in which CEFTA countries have the highest export potential. The IMF and ECE reported that this EU decision (1) accelerated by 2 years the scheduled reduction of EU customs duties on certain imports of sensitive basic industrial products, (2) increased by 10 percentage points the annual expansion in quotas and ceilings for certain industrial products, (3) implemented 6 months earlier than scheduled a reduction in levies/duties on certain agricultural products subject to quotas, and (4) began exempting from customs duties outward processing operations in 1994. 
The IMF report indicated that the most important remaining restrictions appeared to be quotas on textiles, nontariff barriers on agricultural products, and the threat of resorting to safeguard provisions or anti-dumping actions. Under the Association Agreement, tariffs existing in the EU and Poland as of February 29, 1992, served as the base from which reductions were to occur. Reduced tariff levels agreed to in the General Agreement on Tariffs and Trade (GATT) Uruguay Round replace these tariffs as the base once such reductions go into effect. Certain trade liberalization clauses in the Association Agreement are contingent on agreements reached in the Uruguay Round. For example, the ECE reported that, for textiles and clothing, the agreement provides for the elimination of EU quotas on imports from Poland over a period equaling half of that agreed to in the Uruguay Round, but not less than 5 years. Poland was a member of GATT before undertaking economic reforms; however, it was required to accept special terms reflecting the state-controlled nature of its economy. According to a Polish official, Poland is now renegotiating its terms of accession with GATT to reflect its economic reforms, and the country became a founding member of the new World Trade Organization, an outcome of the Uruguay Round agreements, on July 1, 1995. Under the Association Agreement, anti-dumping actions and other contingent protective measures are permitted in accordance with GATT articles. The EU no longer includes Poland in its list of state-trading economies for purposes of determining “normal prices” in anti-dumping actions, but the ECE has reported that a country’s classification as a market economy does not necessarily imply that it will be subject to fewer actions. 
On the other hand, the ECE reported that transition countries such as Poland will benefit from a Uruguay Round strengthening and extension of GATT rules and authority, especially if this leads to stricter control of anti-dumping procedures and contingent protective measures. A recent and as yet unpublished OECD study reported that the Uruguay Round agreement should bring more clarity and certainty regarding the initiation of anti-dumping actions. However, another recent, unpublished OECD study reported that how the new rules are implemented will determine their actual impact and that Uruguay Round results will probably make only modest changes to the way anti-dumping regulations are applied to transition economies such as Poland’s. The report also said that less stringent conditions on safeguards may cause sufferers of import competition to choose this method of protection rather than an anti-dumping investigation. Polish government and ECE officials told us that Poland’s membership in the World Trade Organization will help the country become more fully integrated into the world economy. Poland’s representative for GATT issues at the Polish Mission in Geneva said that Poland hoped to benefit from the Uruguay Round and membership in the World Trade Organization in that it would help the country consolidate its own reforms in trade-related management and systems, rendering such systems more stable, predictable, and coherent. An ECE official added that this development means that Poland is becoming more fully grounded in the market system, making it more difficult to backtrack on market reforms. The Directors of Poland’s Bureau for European Integration and Bureau for Foreign Assistance, the Director of the Trade Instruments Department of the Polish Ministry of Foreign Economic Relations, and the Economic and Commercial Counselors of the Polish Embassy in London told us that they support the EU-Poland Association Agreement. 
However, they maintained that the EU continues to limit Polish access to its markets through contingent protective measures (anti-dumping duties, countervailing duties, and safeguard actions) or the threat of such measures. These officials said that when Poland proves to be competitive in a particular area, these barriers often come into play. The ECE reported that imports of certain sensitive eastern goods generated complaints in West European countries that culminated in a number of import restrictions. The result was that while standard measures of protection (such as tariffs and quotas) diminished, contingent protection measures were used more frequently. They also reported that a steady stream of warnings in addition to official actions may result in eastern exporters making voluntary reductions in the growth of sales to reduce the probability of formal complaints being lodged. Table 3.3 lists EU protectionist measures against Polish imports between July 1992 and December 1993, as reported by the IMF. As earlier noted, Polish exports to the EU grew substantially in recent years. (See table 3.1.) For example, between 1990 and 1993, Poland’s total exports to the EU increased at an annually compounded rate of 11.5 percent. However, for the selected commodities that were targeted by the EU contingent protective measures listed in table 3.3, Poland’s exports to the EU declined at an average annual rate of 8.1 percent during the same period. In 1990, these exports, valued at $429 million, accounted for 6.5 percent of Poland’s total exports to the EU. By 1993, the value of these exports had declined to $334 million, or 3.6 percent of Poland’s total exports to the EU. These data indicate that although EU protectionist measures did not prevent Poland from expanding its total exports to the EU, such measures did have an adverse impact on Polish exports of the targeted products. 
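The average annual rate of decline cited above for the targeted commodities can be recomputed from the two endpoint values (a sketch; the small gap versus the report's 8.1 percent reflects the export values being rounded to the nearest $1 million):

```python
# Compound average annual rate of change between two endpoint values.
# Figures from the text: Polish exports to the EU of commodities targeted
# by EU contingent protective measures, in $ millions.
value_1990 = 429.0
value_1993 = 334.0
years = 3  # 1990 -> 1993

annual_rate = (value_1993 / value_1990) ** (1 / years) - 1
# about -8.0% from the rounded figures; the report states -8.1%
print(f"Average annual change: {annual_rate * 100:.1f}%")
```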
According to the Director of Poland’s Bureau for European Integration, one of the most unfair examples of contingent protection measures involves the filing of anti-dumping cases. A study prepared for the OECD found that although EU authorities had agreed to speed up tariff reductions and enlarge zero-duty ceilings and quotas for some sensitive goods, EU anti-dumping investigations had the effect of minimizing these actions. The above-mentioned Polish official and the Economic and Commercial Counselors of the Polish Embassy in London said that the idea that Polish companies can afford to engage in predatory dumping on the EU markets is not logical. Another example of contingent measures involves the use of EU health and sanitation standards to restrict Polish agricultural exports. OECD officials told us that some EU countries abuse these standards to protect their own industries, while technically complying with GATT rules. For example, in May 1993 the EU imposed a 1-month ban on imports of live animals, meat, milk, and dairy products from across Eastern Europe for sanitary reasons. The OECD reported that, although EU officials portrayed the action as an urgent health measure, Polish and other East European officials described it as a protectionist measure. The Director of Poland’s Bureau for Foreign Assistance told us that his government had estimated Poland’s losses related to the temporary ban at $60 to $80 million. The ECE cited reported estimates of Poland’s losses related to the ban at closer to $30 million. Polish officials pointed out that now, when Poland is engaging in the painful aspects of economic restructuring, the country is in need of export markets. Limited market access could necessitate extremely disruptive scaling down, which may be of a magnitude greater than necessary in some industries. Such disruption makes it more difficult for Polish politicians to maintain support for reforms. 
Polish officials added that barriers such as anti-dumping mechanisms are being employed by the EU in areas where Poland is undergoing some of the deepest and most disruptive economic restructuring. Polish officials noted the irony in the fact that it is necessary to use EU technical assistance to obtain advice on how to shrink Polish industries that have been negatively affected by EU trade barriers. Another problem with barriers to Polish exports is that they make it difficult for Polish politicians to resist demands for increased Polish protectionism. Polish officials told us that recently the government of Poland has paid increased attention to calls for protecting certain of its own markets. Indeed, the IMF reported that, in 1992, Poland raised duties on a variety of products and that, in 1993, Poland further revised its tariff structure, lowering duties on imported raw materials and semi-finished products and increasing them on finished products and agricultural goods. Poland also introduced a tax on sugar content, established licensing requirements on the imports of chicken meat, milk products, and wine, and, in 1994, introduced variable import levies on several agricultural products. The ECE reported that Polish authorities claim that such measures are a response to protectionism in the West. Some observers fear that increased Polish protectionism could boost domestic fiscal imbalances and erode trust and much needed financial support abroad or that such policies could become entrenched, as they are in the West. Donors have not been helpful in responding to Polish requests for assistance in establishing an effective export credit insurance program in Poland. 
Although numerous donor and Polish officials stressed the importance of developing Poland’s capability in this area, officials from Poland’s fledgling export credit insurance corporation (known by the Polish acronym “KUKE”) experienced difficulty in obtaining capital or any other practical assistance aside from consultant-produced studies. KUKE has established a limited commercial risk insurance program, but it has been unable to establish a political risk insurance program. While commercial risk insurance is useful for exporting to the stable OECD countries, both types of insurance are considered important for Poland to reenter riskier markets in former Soviet Union countries. KUKE officials told us that the Polish government had estimated export losses of $2 to $3 billion per year due to the lack of political risk insurance. Foreign investment is expected to play a major role in the transformation of Poland’s economy. Although Poland has made progress in removing some impediments to foreign direct investment, a number of obstacles remain. The country’s 1991 Foreign Investment Law is generally regarded as a satisfactory legal foundation for foreign direct investment, and the telecommunications and transport infrastructures in large urban areas have been much improved. Nevertheless, bureaucratic, tax, and other impediments persist that only the Polish government can correct. Polish and donor officials as well as foreign investors repeatedly told us that bureaucratic bottlenecks and indecision at the middle management level in Polish ministries were persistent obstacles to individual investment deals. The officials said that while there is strong support for foreign investment at the highest levels of the Polish government, there is a large disparity between such high-level support and actual practice within ministries. 
Some officials pointed to suspicion about foreign investment on the part of the Polish people and media as having increased the wariness of local governments and a middle-level bureaucracy already deeply steeped in a culture of indecision. A lack of access to credit was also cited as a continuing obstacle to investment—a situation related to legal impediments insofar as inadequate collateral law and other such difficulties contribute to the problem. (See ch. 5 for a discussion of Poland’s banking sector.) Private sector and investor officials in Poland repeatedly cited uncertainties and inconsistent interpretation of tax law on the part of various governmental bodies from the Minister of Finance down to local tax authorities as a recurring investment impediment. For example, the Chairman of the American Chamber of Commerce in Poland told us that he knew of investors that had begun construction on new plants predicated on the assumption that they would receive certain tax exemptions, only to see the tax exemptions repealed for all but those already transacting business. Officials at the Foreign Investors Chamber of Industry and Commerce said that there have been cases where the Finance Ministry declared that a company was not liable for a particular tax, only to find several years later that the local tax authority disagreed. The officials said that the interest and penalties associated with such multiyear discrepancies are at a level spelling bankruptcy for companies choosing to acquiesce to the local tax authority. In another case involving tax obstacles, the investor retreated. In October 1992, Amoco signed a $20-million contract with Poland for petroleum exploration and exploitation. According to company officials, the agreement was conditional on the resolution of certain tax issues that would have involved aligning Poland’s oil and gas taxation with that in Western Europe and other developed countries. 
The officials said that they engaged in negotiations with the Ministry of Finance through December 1993, culminating in high-level meetings with the President, Prime Minister, and Finance Minister, and that progress was slow but encouraging up until that time. They were poised to sign the final agreement with the Minister of Finance when the Prime Minister dismissed him; the Finance Ministry then retreated from previously agreed positions. After several months of indecision within the Ministry, Amoco finally relinquished its rights to explore in April 1994. According to company officials, had agreement been reached and sizable deposits been found, the project could have led to a development contract of $100 to $150 million. Company officials recently told us that although the Polish government has since resolved the tax issues and the company has proceeded with other exploration projects in Poland, the original exploration project will not be resumed.

Notwithstanding the existing impediments, a number of U.S. and other foreign companies have recently made significant investments in Poland. According to PlanEcon, the inflow of investments is expected to accelerate in coming years now that a London Club agreement has been reached and as more state-owned enterprises are offered for sale. According to the Polish Agency for Foreign Investment, the value of direct investment in Poland exceeded $5 billion, and another $5.1 billion had been committed as of March 1995. The United States is the largest investor country in Poland, accounting for more than one-third of investments, or $1.7 billion, followed by multinational companies, Germany, Italy, and the Netherlands. (See fig. 3.2.) Many U.S. firms investing in Poland are among the Fortune 500 companies, including Coca-Cola, PepsiCo, International Paper, and others.
German investors are predominantly represented by small- and medium-sized businesses, giving Germany the largest number of individual investments in Poland. In contrast, Italy’s ranking as a leading investor in Poland is primarily due to the investment of one company, Fiat. Table 3.4 shows the 10 largest company investors in Poland from January 1990 to March 1995. According to the Polish Agency for Foreign Investment, the largest investment outlays went into the financial, food processing, electro-mechanical, and telecommunications industries. (See fig. 3.3.) Polish and donor officials told us that the size of Poland’s domestic market, with 38 million inhabitants, is the single most important factor in companies’ decisions to invest in Poland.

Some early U.S. technical assistance geared toward improving the investment climate in Poland was unfocused. For example, the United States launched a program to help Poland improve its commercial law, but the program design included few specific goals. Rather than designing projects to complement other efforts addressing key economic restructuring impediments, USAID simply contracted with a number of institutes and the Department of Commerce to develop projects that would fit into several broad areas of commercial law development—an approach that USAID officials said was driven by congressional pressure to “get the money spent quickly.” According to USAID officials, the result has been scattered activities in an area where efforts should be sparing and cautious because of the need for new laws to intermesh properly with existing legal codes. For example, one project involved having volunteers spend 4 to 6 weeks in Poland working on specific tasks such as helping to draft legislation. A USAID contractor working on a related project was critical of this approach.
He indicated that it was too dependent on the personalities of individual volunteers, who generally had little Polish language ability or lacked the professional stature to work with officials in Poland. As an illustration of the difficulties posed by such an approach, the contractor cited a recent endeavor to work with Polish legal associations to establish a commercial law library in Warsaw. The volunteer in charge of this task, an American divorce lawyer with no Polish language skills, had difficulty getting the legal associations to work together effectively, and the project faced delays in getting started. USAID officials acknowledged that the person in charge was “not the best person” for the job. The USAID contractor also expressed concern that Polish officials were unable to use the results of a World Bank-sponsored project in the area of collateral law. He said that the Bank sponsored a Western expert to draft legislation in London but that the work was not useful to Polish officials because it did not intermesh properly with existing laws. He told us that, based upon his experience, no important piece of legislation will be adopted in Poland that is not prepared by Polish lawyers and that, while Polish legal experts appreciate assistance and guidance from Western specialists, the legislation must ultimately be the work of Polish legal experts.

The United States supported a variety of efforts to help promote investment in Poland, but many of these activities had limited impact. For example, the United States established an American Business Center in Warsaw, and the U.S. Commercial Service (USCS) in Warsaw was given responsibility for running it. The centers were specifically authorized under the SEED Act and were intended to provide temporary office space, phone, fax, and copy capabilities on a reimbursable basis to U.S.
companies doing business in Central and East European countries where reliable services of this sort were not readily available when the transition process began. The center in Warsaw experienced difficulty in obtaining property and equipment in a timely manner and found that the USCS office could not effectively do its own work and run the center. The center served over 500 firms but lost money, partly because comparable services quickly became available through the private sector. Nevertheless, according to a USCS official in Warsaw, the center served as a good test case for centers opening later in the countries of the former Soviet Union. USAID also financed a project to (1) support the identification, analysis, and marketing of large infrastructure projects; (2) promote investment and trade, joint ventures, and co-ventures; and (3) assist in project packaging and marketing. However, USCS officials in Warsaw said that the project was wasteful because it utilized expensive consultants while lacking a clear plan. USAID officials acknowledged that the project was unsuccessful and said that they had retargeted it. The Overseas Private Investment Corporation (OPIC) has provided $700 million in insurance and financing for U.S. businesses investing in Poland. OPIC insures U.S. investments in that country against political risks and provides investment financing in the form of direct loans and loan guaranties. According to OPIC officials, a 1990 OPIC effort to provide guaranties for an investment fund targeted toward Poland and other Central and East European countries was dropped because the investment company managing the fund was unsuccessful in obtaining the necessary private, counterpart funds. However, OPIC is now providing debt guaranties to cover a significant portion of the capitalization for a venture capital fund to invest in small- and medium-sized private enterprises in Poland. The fund had raised about $65 million in capital as of May 1995. 
OPIC is currently developing three additional investment funds targeted toward Poland and other Central and East European countries. OPIC also sponsors investment missions to Poland for U.S. executives to learn about Poland’s investment climate and to meet with government officials, banks, prospective joint venture partners, and officials from U.S. companies already doing business there.

Probably the most fundamental change in transitioning from a socialist command economic system to a market-oriented system is privatization, or changing the system of ownership. Despite numerous reforms, many of which were intended to lessen the role of government in the economy, Poland’s record in privatization thus far is mixed. The country’s private sector has grown, and many small- and medium-sized retail businesses have been privatized. Privatization laws have set the framework for reducing the rest of the state sector, but the pace of privatization for larger state-owned enterprises has been slower than expected, and significant portions of the Polish economy remain in the hands of the government. The United States and other donors are actively supporting Poland’s efforts to restructure enterprises and implement the country’s Mass Privatization Program; however, persistent delays threaten continued donor support. Changes in government, the reluctance among state-owned enterprises to enter the privatization process, and the poor financial condition of many enterprises have delayed privatization efforts.

When reforms began in late 1989 and early 1990, Polish reformers and many Western economists were convinced that, to improve the efficiency of the Polish economy, the state-owned enterprises had to be converted to private ownership.
By transferring such enterprises to private ownership, it was argued, the new owners would have a vested interest in the success of the enterprise and therefore seek to maximize profits by better utilizing labor, improving management, and investing in capital improvements. At the outset of reforms, Poland was in a better position with respect to ownership transformation than most other transition countries in Central and Eastern Europe. Since Poland had not collectivized its agricultural sector during the socialist years as other countries in the region had, most of this sector was already private. Further, the nonagricultural private sector was allowed to expand between 1982 and 1989 as an element of the limited socialist economic reform taking place during that period. At the end of 1989, over 23 percent of the Polish workforce was employed in private agriculture, over 10 percent was employed in the nonagricultural private sector, and the agricultural and nonagricultural private sectors together accounted for 30 percent of the country’s GDP.

After reforms began, Poland’s private sector grew larger, both through the privatization of existing firms and through the establishment of new private firms. Poland’s early privatization efforts, termed “small privatization,” concentrated on rapidly selling small labor-intensive firms, such as hotels, restaurants, and shops. Hundreds of thousands of small- and medium-sized retail businesses have now been privatized, placing over 90 percent of this sector in private hands. After legal requirements for the set-up of private enterprises were liberalized, many new private businesses also emerged. By December 1993, the number of small businesses had risen to about 1.8 million, the number of private companies employing more than 5 people had grown to over 66,000, and about 60 percent of Poland’s employment and over 50 percent of its GDP were in the private sector.
Poland’s private sector is now the primary source of the country’s GDP growth. A World Bank economist estimated that Poland’s private sector GDP grew by 13 percent in 1993, while its public sector GDP declined by 4.1 percent. Despite the increased importance of Poland’s private sector in generating economic growth, the country continues to rely on state-owned enterprises for a substantial portion of its industrial production. While the private sector share of Poland’s industrial output is rapidly growing, state-owned enterprises still accounted for about two-thirds of the country’s industrial production at the end of 1993. The Polish government took steps to encourage these state-owned enterprises to operate more independently as part of its initial reforms. The elimination of price controls, the opening of the economy to international competition, the removal of most state subsidies and the discontinuance of Central Bank soft money policies encouraged some state-owned enterprises to become more cost conscious and to search out market opportunities. A 1993 World Bank paper on the performance of 75 large, Polish state-owned enterprises following the introduction of reforms reported that two-thirds of the studied enterprises showed signs of adapting to the new marketplace conditions. One of the paper’s authors later wrote that two of the most important lessons learned from the study were that (1) hard budgets and competition can stimulate state-owned enterprises to restructure before privatization and (2) the incentive effects of anticipated privatization are very important. Nevertheless, Poland’s state sector is being outperformed by the country’s emerging private sector. According to a 1994 World Bank paper, Poland’s state-owned enterprises lag behind emerging private firms in output growth, employment growth, investment growth, and profitability. 
The authors used a sample of 40 emerging private firms, 45 privatized firms, 41 State-Treasury companies, and 81 state-owned enterprises. Leszek Balcerowicz, Poland’s former finance minister and author of the country’s 1989 and 1990 economic reform program, wrote that while Poland’s early reform measures induced many state enterprises to adjust to the conditions of the market economy, an even larger increase in their overall economic performance could be achieved if they were privatized. Notwithstanding Poland’s success in privatizing small- and medium-sized retail firms, relatively few of the larger state-owned enterprises have been privatized. The Privatization Law for State-Owned Enterprises of July 1990 established the legal framework for Poland’s privatization program. The law allows for two methods of privatization: (1) capital privatization for larger enterprises and (2) liquidation for small- and medium-sized enterprises. The workers and management of the state-owned enterprises, in consultation with the Ministry of Privatization, select the method of privatization. Poland’s privatization program called for the sale of 50 percent of Poland’s state-owned enterprises over 3 years with the eventual goal of privatizing 80 percent of such enterprises. In December 1994, 4-1/2 years later, Poland’s Ministry of Privatization reported that approximately 36 percent of the original 8,441 state-owned enterprises had been transformed under the privatization process. (See fig. 4.1.) In addition, the government continues to play a significant role in many of these transformed enterprises. For example, more than 500 of these enterprises are commercialized corporations belonging to the State Treasury that are awaiting either capital privatization or participation in Poland’s Mass Privatization Program. 
The World Bank’s Resident Representative in Poland recently acknowledged some advantages to commercialization of state-owned enterprises; however, he emphasized that commercialization is not an effective substitute for privatization. He stated that privatization is one of the main policies for further developing the productive potential of the economy and that it is therefore of crucial importance that it be accelerated rather than slowed down. In commenting on a draft of this report, the Department of the Treasury emphasized the importance of large-scale privatization. Treasury said that while some of Poland’s state sector may have been successful at restructuring, a still-large state sector continues to promote misallocation of investment, poor fiscal controls, and excessive monetary growth, and that repeated delay in the privatization of larger concerns continues to be a drag on economic growth and inflation control. (Appendix I provides a discussion of the various privatization processes available to Polish enterprises, as well as the number of enterprises that have participated in each process.) Early in the restructuring process Poland determined that the restructuring and privatization of state-owned enterprises on an individual basis would be too time consuming and expensive, and as of January 1991, only five such enterprises had been successfully sold. Thus, the Ministry of Privatization sought to develop a program that would privatize hundreds of state-owned enterprises at once. On April 30, 1993, the Polish Parliament passed the Law on National Investment Funds that provides the legal framework for Poland’s Mass Privatization Program. The goal of the program is to (1) improve the efficiency and value of several hundred Polish state-owned enterprises, (2) accelerate the privatization process in Poland, and (3) provide each adult citizen with a stake in the privatization process. 
As part of the Mass Privatization Program, Poland was to establish 20 specially constituted National Investment Funds. These funds were to assist in the restructuring of Polish companies by holding the shares of state-owned enterprises taking part in the Mass Privatization Program. Each fund, operating as a joint stock company, was to be run by a management team under contract to a specially selected supervisory board. The funds were to seek a listing on the Warsaw Stock Exchange within a year of operation and remain in existence for at least 10 years. Each state-owned enterprise entering the program is expected to have its shares divided as follows: 33 percent held by a lead National Investment Fund; 27 percent distributed equally to all other such funds; 25 percent retained by the State Treasury; and 15 percent distributed free of charge to enterprise employees. Special share certificates are to be offered free of charge to certain pensioners and state employees.

Ministry of Privatization documents state that donors have committed approximately $245 million in assistance to Poland’s Mass Privatization Program. (See table 4.1.) Based on available figures, the World Bank has provided the largest share of funding and technical assistance to date. The EBRD has also committed a large share of this assistance, mostly in National Investment Fund working capital and assistance to privatized enterprises. USAID, EU PHARE, and the British Know How Fund have provided additional advisors and technical assistance to the program.

Continued delays in the Mass Privatization Program have caused many donors to question the Polish government’s commitment to the privatization process, and some donors have indicated that they may consider cutting their assistance in this area. Donor officials have stated that progress in implementing the Mass Privatization Program is necessary for continued commitment of assistance.
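The share allocation described above for enterprises entering the Mass Privatization Program can be checked with a quick arithmetic sketch. The snippet below is illustrative only; the labels are descriptive, not official fund names, and the per-fund figure simply follows from the 20-fund structure cited in this chapter:

```python
# Share allocation for a state-owned enterprise entering Poland's
# Mass Privatization Program, using the percentages cited above.
shares = {
    "lead National Investment Fund": 33,
    "other National Investment Funds (split equally)": 27,
    "State Treasury": 25,
    "enterprise employees (free of charge)": 15,
}

# The four allocations account for the entire enterprise.
assert sum(shares.values()) == 100

# With 20 funds in total, the 19 non-lead funds split 27 percent equally.
per_other_fund = 27 / 19
print(f"Each non-lead fund holds about {per_other_fund:.2f} percent")  # about 1.42 percent
```

This kind of check makes clear that, aside from the 33-percent lead stake, no single fund holds a controlling interest in any one enterprise.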
In May 1994, Ministry of Privatization officials were confident that the program would demonstrate satisfactory progress and the National Investment Funds would begin operation in late 1994. Nonetheless, since then implementation of the program has slowed. Although by the end of August 1994, approximately 466 state-owned enterprises had committed to participation in the program, the government of Poland delayed final approval of the participating enterprises until mid-October. As of October 1994, approximately 444 state-owned enterprises were approved for the Mass Privatization Program. However, government differences over the composition of the National Investment Fund managers continued to delay implementation until late 1994. Some donor officials expressed concern about the signal being sent to state-owned enterprises by delays in implementing the Mass Privatization Program. An OECD official told us that the beneficial restructuring of such enterprises will not continue if privatization is put on hold. He said that an important motivation for some of the state-owned enterprises to restructure themselves is that they anticipate that they will eventually be privatized. Without the certainty of eventual privatization, these enterprises might not continue restructuring but instead might lobby the government to reinstate subsidies. Another donor official said that some state-owned enterprises, particularly the larger ones, are avoiding necessary changes in the hope that the government will announce a program to alleviate their problems without the enterprise having to go through privatization. USAID officials in Poland also expressed concerns about the effect of delays in the privatization process. According to one USAID official, some members of the Polish government have discussed a program of mass commercialization without any specific date for privatization. 
This proposed program would involve more than 1,000 state-owned enterprises, which, once commercialized, would be part of the new “Ministry of Treasury.” This new Ministry would act as a holding company for Poland’s commercialized enterprises. USAID and State Department officials in Poland were concerned that such a program could have a negative effect on the overall privatization process, potentially creating a commercialized state sector without a plan for privatization. A Polish law expert at the Library of Congress agreed that the proposed program would cause further privatization delays and would allow the government to maintain indefinite ownership of commercialized firms. Some donors have indicated that they may consider cutting their assistance in this area if privatization is further obstructed or delayed. The Ministry of Privatization has expressed some concerns about continued donor support for the Mass Privatization Program. A Ministry official said he was concerned that the EBRD’s Special Restructuring Project would absorb some of the money already set aside for the Mass Privatization Program. For example, he said the EU PHARE may decide to reallocate money to the Special Restructuring Project if it is the first program to get underway. Another official said the EU PHARE has been upset with the delay in the Mass Privatization Program and may cut or end all future assistance to the program. According to the EU PHARE representative in Poland, no additional PHARE funding was provided to the Ministry of Privatization in 1993 because the Ministry had spent very little of the earlier assistance. According to the representative, Poland now has a 2-year funding pipeline of PHARE assistance, and therefore, no new funding is needed. U.S. and EBRD support for post privatization was announced in July 1994, in connection with President Clinton’s visit to Poland. 
The United States proposed a new effort to provide $75 million of equity capital and technical assistance to support Poland’s Mass Privatization Program. Under the new proposal, the Polish American Enterprise Fund and EBRD will each commit $15 million for equity investment, USAID will commit $10 million to technical assistance, and an additional $50 million in financing will be generated by the EBRD and others. A U.S. Treasury official familiar with the proposal said the EBRD is expected to put forward the majority of the capital. The EBRD had already announced that the newly privatized enterprises were eligible for approximately $300 million in EBRD restructuring assistance that would be available to any Polish enterprise on a case-by-case basis. Although more than 5,000 state-owned enterprises may remain after implementation of the Mass Privatization Program, Poland and the donor community expect the program to restart a delayed privatization process, provide millions of Polish citizens with a stake in the transformation process, and set the stage for continued privatization in Poland. According to the Department of the Treasury, enterprises in the Mass Privatization Program account for only 8 percent of the state sector. However, a USAID official said that the state-owned enterprises being privatized under the Mass Privatization Program and other privatization initiatives represent a significant share of Poland’s state sector, with most of these privatized enterprises coming from the ranks of the larger state-owned enterprises. Poland’s slow progress in privatizing larger state-owned enterprises can be attributed to at least three factors: (1) government indecision brought about by the changes in Poland’s government over the past 4 years, (2) the reluctance among state-owned enterprise employees and management to enter the privatization process, and (3) the poor financial position of many state-owned enterprises. 
The government of Poland announced its Mass Privatization Program in June 1991, but did not enact laws to make such a program possible until April 1993. Meanwhile, the rate of privatization slowed as each new coalition government reassessed the privatization approach in the face of public criticism of the process. For example, the new government elected in 1993 reevaluated the country’s privatization efforts, and debates over revisions to the privatization legislation and the roles of various ministries in the privatization process have also delayed the process. Also, high unemployment, fear of foreigners buying up the country’s assets, and concern over undervaluation of state-owned enterprises have given advocates of the status quo greater representation in the government. Uncertainty among workers and management at the state-owned enterprises has also delayed privatization. Under the Privatization Act of 1990, the founding body and the managers and workers’ councils at the state-owned enterprises must mutually agree on a method of privatization and then apply to the Ministry of Privatization for approval. The Mass Privatization Program is also dependent on workers and managers volunteering their enterprise for the program. While a large number of smaller state-owned enterprises were liquidated in the early years of reform, the larger ones and related trade unions were able to maintain the status quo until they were granted a larger role in the privatization process. According to one Ministry of Privatization official, the state-owned enterprises were more or less self-governing under the Solidarity unions before the privatization process began. Many state-owned enterprises perceived any change in their status as a threat. A ministry official said that both management and workers at these enterprises need to be educated on the benefits of privatization. 
Some of the worker and management concerns were addressed in the Enterprise Pact, a document that came out of talks between the government, state-owned enterprise managers, and trade unions. The provisions of the agreement were intended, among other things, to increase employee participation in the management and equity distribution of privatized enterprises and encourage the financial restructuring of state-owned enterprises. The Enterprise Pact was signed by all parties in February 1993, and implementing the provisions of the pact was a Ministry of Privatization priority for 1994. The poor financial condition of many state-owned enterprises has also delayed the privatization process. A Ministry of Privatization official stated that many of the healthiest state-owned enterprises have already been privatized, and before the remaining enterprises will be attractive candidates for privatization, they need to be restructured—a process that takes additional time. A number of these state-owned enterprises also have assets not related to their core business that need to be sold separately, such as schools, housing, hotels, resorts, and police stations. Various financial restructuring paths are available to these troubled state-owned enterprises, all of which may require added time before privatization can take place. Among other methods, restructuring can occur under (1) the Law on Financial Restructuring of Enterprises and Banks, (2) the EBRD’s Special Restructuring Project, and (3) the Ministry of Privatization’s Restructuring Through Privatization Program. (Appendix II provides a more detailed discussion of these three methods.) The United States adjusted the emphasis of its assistance program to Poland when Poland’s privatization programs experienced delays and some of the U.S. assistance efforts proved ineffective. The U.S. 
government, through its reprogramming of earlier contributions, has assisted in the restructuring of Polish state-owned enterprises prior to privatization. In addition, USAID has shifted its assistance program to work more closely with the government of Poland after USAID’s initial approach proved costly and time-consuming.

In 1989, both Poland and the donor community were in favor of the rapid privatization of the country. However, as the financial condition of many state-owned enterprises became apparent and the pace of privatization began to slow, the United States and the donor community responded by helping to develop restructuring programs. This included using donor resources from the no longer needed Polish Zloty Stabilization Fund, including the $199 million U.S. contribution, to establish the $415 million Polish Bank Privatization Fund. The Bank Privatization Fund was created to support the recapitalization of Poland’s ailing banks and to indirectly stabilize and restructure Poland’s indebted state-owned enterprises. (See ch. 5 for a discussion of donor assistance in Poland’s banking sector.) The enterprise restructuring being implemented by Poland and the donors may better prepare some of the state-owned enterprises for eventual privatization. An October 1994 EBRD report stated that rapid privatization is “often at the expense of ownership and governance quality,” whereas financial restructuring prior to the sale of a state-owned enterprise “aims to attract high-quality owners.” (Transition Report: Economic Transition in Eastern Europe and the Former Soviet Union, EBRD, Oct. 1994, p. 49.)

USAID found, however, that its firm-specific and sectoral assistance was too time-consuming and costly. For example, the $3.7 million in USAID funding for the glass sector led to only four state-owned enterprise privatizations, a cost of more than $900,000 per enterprise privatized.
In addition, as of May 1994, only four of eight targeted enterprises had been privatized under the almost completed furniture sector project. USAID’s sector-specific strategy problems were due in part to the Ministry of Privatization’s unwillingness to relinquish control over certain state-owned enterprises and its withholding of important information related to the restructuring and privatization efforts. According to a USAID official, the government of Poland had initially supported the firm-specific and sectoral assistance, but the Ministry of Privatization wanted to include these enterprises in the Mass Privatization Program and proved to be a powerful opposition force to the USAID-supported contractors.

Other USAID projects encountered government unwillingness to follow through with privatization. For example, USAID spent more than $1 million restructuring LOT Polish Airlines in preparation for its privatization. This was USAID’s largest single firm-specific privatization effort in Poland. Although the assistance has been a restructuring success, the project’s goal of privatizing the airline has not been met. According to a USAID official, foreign investors have shown interest in the airline, but the Polish government has rejected these overtures.

In 1993, USAID’s privatization work in Poland began to shift away from the firm-specific and sectoral assistance approach toward projects assisting the Ministry of Privatization with the privatization process. According to the USAID representative in Poland, the early privatization efforts were misdirected because they were based on an assumption that the privatization work was short term and could be performed with a 90-day consultant team. USAID is now building on its earlier work at the Ministry of Privatization. Beginning in 1992, USAID assisted the Ministry with the National Investment Funds as well as share trading and distribution practices.
USAID assistance in late 1993 included a project that placed specialists in corporate finance as well as mergers and acquisitions in the Ministry to assist with privatization transactions. Additional projects to assist the Ministry with the Mass Privatization Program were being planned as of May 1994. USAID has also started a regional privatization project with the Ministry of Privatization to assist Polish regional governments with the privatization of state-owned enterprises. The Ministry is providing technical assistance to state-owned enterprises undergoing privatization, assisting the regions with privatization strategies, and helping to identify possible investors. USAID is supplying the training component for the overall program, while the EU PHARE program will provide advisory services. According to a USAID official, the agency’s Warsaw office is also planning a new pilot program to assist some of these state-owned enterprises with their privatization transactions, helping them to become eligible for credit and capital from the Polish-American Enterprise Fund and other donor programs. Over the last 5 years, Poland has fundamentally reformed its banking sector. Multilateral and bilateral donors have provided important support for recapitalizing Poland’s state-owned banks and for restructuring the banks’ problem loan portfolios. Early problems with donor technical assistance have been resolved. Nonetheless, bank privatization has been limited; many small private and cooperative banks are in poor financial condition; policies regarding the licensing of foreign banks are unclear; and small- and medium-sized businesses continue to lack sufficient bank credit to develop and expand their operations. Donors have undertaken various activities to help. Poland’s banking sector has undergone fundamental changes since the beginning of reforms in 1989.
The country’s old command economy central bank has been transformed into an independent central bank and its old regional branches have been converted into individual commercial banks, some of which have been or are being privatized. A number of new private banks have also been established. The government has recapitalized the country’s state-owned banks and has made significant progress in restructuring their problem loan portfolios. The National Bank of Poland Act and the Polish Banking Act, both enacted in 1989, provided the framework for reforming the Polish banking system. These laws transformed the old central bank, which had served as the state conduit of credit to enterprises in the command economy, into an independent central bank with responsibilities for macroeconomic policies and supervision of banks. The 1989 legislation also transformed the regional branches of the original central bank into nine new state-owned commercial banks, three of which have since been privatized. These nine banks dominate Poland’s banking sector, and, along with four specialized banks that remain from the prereform era, accounted for over 75 percent of total banking sector assets as of mid-1993. The remaining banking sector assets are located in about 1,600 small cooperative banks, which existed prior to reforms to serve agrarian interests, and 60 private and foreign banks established pursuant to the 1989 legislation. In 1991, the profitability of state-owned enterprises deteriorated following the collapse of Poland’s trade with its former CMEA partners. As a result, many state-owned enterprises relied increasingly on debt to finance operations while their ability to service such debt diminished. The state-owned banks rolled over credits, capitalized unpaid interest, and extended new loans to these firms. In mid-1991, a Ministry of Finance audit of the state-owned commercial banks revealed a high percentage of problem loans. 
The audit classified 16 percent of outstanding loans as not recoverable, 22 percent as having doubtful recovery, and 24 percent as not current, and revealed that the banks’ capital adequacy ratios were significantly less than those required by Polish banking regulations. A USAID-contracted study reported that, despite divergent interests within the government, the Ministry of Finance directed the state-owned commercial banks to tighten credit discipline over delinquent enterprise borrowers in 1991 and 1992. By 1992, according to an IMF study, the quality of the portfolios had stabilized somewhat. To deal with the bad debts of state-owned enterprises and the inadequate capitalization of banks, the government of Poland enacted the Financial Restructuring of Enterprises and Banks Act (FREB) in February 1993. The FREB approach involved banks in the restructuring of state-owned enterprises with delinquent debts. In the process, the portfolios of the state-owned commercial banks were improved, and the banks were recapitalized in preparation for privatization. In September 1993, the Polish government recapitalized these banks using special government restructuring bonds. The bonds held by a particular bank are to be serviced by the central government until privatization occurs, after which the bonds are to be serviced and redeemed by the $415-million Polish Bank Privatization Fund created with resources from the former Polish Stabilization Fund. The primary elements of the program required banks to segregate loans by likelihood of repayment, create reserves against those loans considered unlikely or doubtful of recovery, set up workout departments to manage the bad loans, and restructure their loan portfolios. The restructuring act prohibited giving new loans and advances to enterprises with loans classified as substandard.
Under the act, each bank was to liquidate or restructure its loans assigned to a problem portfolio by the end of April 1994, unless (1) the loans had been restructured, (2) the debtor had been declared bankrupt, (3) liquidation proceedings had been instituted with respect to the debtor, or (4) the debtor had been servicing his debt obligations for at least 3 months without interruption. Considerable progress has been made under this plan, and according to PlanEcon, by April 1994, the seven state-owned commercial banks had settled over one-half of the bad debts that qualified for the program. Notwithstanding Poland’s progress in reforming its banking sector, several major hurdles remain. According to Polish and donor officials as well as other observers, bank privatization has been limited; many small private banks are undercapitalized and badly managed; the country’s licensing policies for foreign banks lack transparency; and Poland’s small rural cooperative banks are in poor financial condition. According to these officials, Poland’s bank supervision capacity needs further strengthening, and bankers need additional training. Also, small- and medium-sized enterprises in Poland continue to lack sufficient bank credit to develop and expand their operations. Although Poland has made considerable progress in restructuring the portfolios of state-owned commercial banks, the government’s plans for reforming the financial sector go beyond improving the banks’ health. A final goal of the FREB act is to privatize the remaining state-owned banks, but progress has been slow. Three of the nine original state-owned commercial banks have been privatized, and another is scheduled for privatization by the end of 1995. However, PlanEcon recently reported that those remaining will take more time to privatize due in part to the weak performance of the Warsaw Stock Exchange. 
At the end of 1994, the Polish government was still the largest shareholder in the banking sector, with over 69 percent of the equity of all commercial banks and control of almost 80 percent of all assets. According to Polish government and donor officials, between 1990 and 1992, liberal bank licensing requirements led to the establishment of a large number of small banks, many of which were undercapitalized and badly managed. Poland’s central bank significantly tightened regulations in 1993, resulting in a decline in the number of new bank licenses issued. However, donor officials told us that about 25 percent of these banks remain technically insolvent and qualify for closure; and IMF reports confirm that many of these banks have loan portfolio problems. Poland’s central bank has directly “bailed out” some private banks. However, according to donor officials, the Polish government is reluctant to close banks without compensating depositors. Because the central bank is concerned about the cost of closing banks, it is, instead, encouraging the consolidation of financially troubled banks with banks that are financially sound. The government recently made progress on another problem affecting most private banks—a lack of deposit insurance. In December 1994, the government passed a law creating the Bank Guarantee Fund, which will insure deposits at all banks—private and state-owned. Donor officials said that requirements for banks to submit to the Guarantee Fund’s strict lending standards and supervision would encourage better lending policies and provide more stability for private banks. Foreign banks currently constitute the strongest portion of the banking sector; however, a donor official told us that foreign financial institutions are concerned about a lack of transparency in the bank licensing process. 
PlanEcon reported that, despite many applications, Poland’s central bank had issued only one new license to a foreign bank between March 1992 and late 1994 and that the government’s licensing policy had been unclear. According to an EBRD official, Poland’s central bank has tried to “force” Western banks to buy problem banks as a prerequisite for obtaining banking licenses in Poland. However, he said that this policy has not been well received by Western banks. The PlanEcon report noted that the central bank appeared to have become more willing to negotiate the licensing of foreign banks by the end of 1994. According to Polish and donor officials, Poland’s 1,600 small cooperative banks serving largely rural areas are also in poor financial condition. According to an IMF report, about 200 of 1,000 banks examined by the central bank qualified for bankruptcy as of March 1994. PlanEcon reported that about 1,200 of the cooperative banks are affiliated with the Bank for Food Economy, which was recapitalized under Poland’s FREB program in 1994. However, the restructuring of this bank’s bad debts has been addressed only recently. Because many of the bad loans were owed by farmers, restructuring these loans is considered politically difficult. The cooperative banks represent only 6 percent of Poland’s banking assets; however, they are a principal source of banking services for Poland’s agricultural population. The failure of these cooperatives would have severe budget consequences as these deposits are guaranteed by the Treasury. Additionally, given the large number of institutions, they require a disproportionate amount of supervision from the central bank. According to Polish government and donor officials, it is important to create a cadre of Polish experts in areas such as banking supervision and credit analysis before good lending practices can be fully integrated into Poland’s banking system. 
A USAID-contracted study concluded that while Poland’s central bank has made rapid progress in building its capacity in some areas, additional work remained to be done in developing the bank’s capacity to supervise the banking sector. The study also reported that training of bank staff in Poland was needed and would continue to be needed for some time. According to Polish and donor officials as well as other observers, small- and medium-sized enterprises in Poland continue to lack sufficient bank credit to develop and expand their operations. Poland’s emerging private sector has generally encountered a risk-averse domestic banking system and foreign commercial banks that are unwilling to lend to new Polish ventures. According to Poland’s Ministry of Finance, more than 80 percent of the country’s banking business continues to take place in state-owned banks. While state banks have concentrated their attention on working out the bad debts of state-owned enterprises and providing new loans to the healthier state-owned enterprises, these banks have remained cautious about providing new loans to small- and medium-sized enterprises. According to a development expert at the London School of Economics, the reluctance to make loans to small- and medium-sized businesses is compounded by Polish bankers’ lack of expertise in evaluating small business propositions. He added that poor collateral laws also limit the amount of credit available to such firms. The government of Poland expects state-owned banks to continue focusing on state-owned enterprises.
The Ministry of Finance’s “Strategy for Poland” commits the state banks to supporting such firms in future years, stating “the government will be using domestic banks to a larger extent for managing state-owned wealth, for the privatization of state-owned enterprises, and for bringing them back to health.” The Ministry’s financial sector strategy says very little about bank assistance to Poland’s emerging private sector, particularly the small- and medium-sized enterprises. Foreign commercial banks in Poland also have been cautious with their lending. According to an OECD official, the few foreign commercial banks operating in Poland have limited their activities to larger Western investors. One Western banking official said his bank would prefer a few large transactions over numerous small transactions. Some of this cautiousness was also attributed to the lack of a debt accord between Poland and its commercial creditors; however, this obstacle was resolved in October 1994 when Poland and the London Club of commercial creditors signed an agreement to reduce and reschedule Poland’s more than $13 billion in private sector debt. Donors have recognized the lack of available credit for small and medium-sized enterprises and have undertaken various activities to help fill the gap. According to Polish government and donor officials, the U.S.-sponsored Polish-American Enterprise Fund has been more successful than other donor programs in this area. The Enterprise Fund’s small loan component, the Enterprise Credit Corporation, has assisted Poland’s small- and medium-sized enterprises with more than 2,300 small business loans worth over $56 million. Fund and donor officials attribute the program’s success in reaching the smaller enterprises to the fact that it did not depend upon the existing banking skills in Poland, but instead trained and monitored the staff of the banks used as intermediaries. We reported on the Enterprise Fund’s success in 1994. 
The centerpiece of assistance in the Polish banking sector was donor support for Poland’s FREB program to restructure enterprises and banks. In collaboration with the World Bank, the Polish government issued bonds to recapitalize the banks; the bonds are to be serviced and redeemed by the Polish Bank Privatization Fund after the banks are privatized. This fund was established using resources that were no longer needed for the Polish Zloty Stabilization Fund. The World Bank also provided a $450-million loan to assist in the FREB program. As part of this effort, the World Bank plans to help the Polish government supervise an intervention fund. This fund is intended to act as a “hospital” for state-owned enterprises that are too large to be liquidated. According to Polish government officials, some early technical assistance to Poland’s financial sector was of limited value, but many of these problems have been resolved and donors are now providing more useful assistance. For example, officials told us that in the early stages of reform, many consultants came to Warsaw for 1- to 2-week stays, interviewed some officials, and then produced reports that merely repeated everything they had been told. Polish officials told us that donor technical assistance and training are now addressing some of the most important needs remaining in this area, such as bank supervision and credit analysis. For example, the United States has provided long-term Department of the Treasury advisers to various banks, Poland’s central bank, and the Warsaw School of Banking. In addition, Peat Marwick-KPMG, through a USAID contract, is working with the central bank to develop an on-site inspection manual for bank supervision. The manual development is accompanied by advice on strategic planning for bank supervision.
Peat Marwick-KPMG also provides advisers to the central bank to help develop operations procedures for the General Inspectorate of Banking Supervision, the central bank’s unit for bank supervision and examination. U.S. Treasury advisers have been assigned to Polish commercial banks, the central bank, and the Ministry of Finance. Typically, these advisers are fluent in Polish, reside in Poland, and are assigned for a year or more. They provide advice and training on a multitude of subjects. The adviser in the General Inspectorate of Banking Supervision provides daily assistance to the officials and staff on all aspects of banking supervision; helps formulate policy, develop examination techniques, and train staff in financial analysis and inspections; assists in development of the supervision manual; and serves as a liaison with donors. The U.S. Treasury Department also provides a long-term adviser to the Warsaw School of Banking, along with some short-term instructors, through a contract with Peat Marwick-KPMG. This school is one of three banking schools in Poland and focuses on training middle and senior level managers. With approximately 100,000 to 150,000 banking personnel in Poland, a primary goal of the school is to develop a cadre of Polish trainers to multiply the training effect of the Western advisers. The Financial Service Volunteers Corps is also supported by the U.S. program, and provides volunteer technical expertise to countries making the transition to a market economy. The advisers provided by this program tend to be short term and have worked on projects such as drafting legislation and regulations, training bank managers, advising policymakers, and assisting with the development of basic financial products and services. Other bilateral and multilateral donors are also active in the banking sector. 
For example, the British Know How Fund supports 14 advisers at 3 Polish banks and the Ministry of Privatization, and has 2 advisers in the Ministry of Finance. The fund has been instrumental in the privatization of a major bank, and has funded training for bank staff at the Katowice Banking School. The EU PHARE Program has provided training to many financial institutions, provided consultants to the workout departments of the state-owned commercial banks, performed audits, and provided audit assistance to the General Inspectorate of Banking Supervision. Since the reform process began in Central and Eastern Europe, Poland has undertaken some of the most dramatic economic reforms in the region. While Poland continues to face a number of impediments to its restructuring efforts, the country has made significant progress toward economic restructuring in key areas such as macroeconomic stabilization, foreign trade and investment, privatization, and banking. The United States and other donors have actively supported Poland in its transition efforts, although this assistance has been more useful in some areas than in others. After 5 years of reforms, Poland’s experience in transitioning to a market-oriented system offers some lessons that could be of interest to countries such as Russia, Ukraine, and others not as far along the reform path as Poland. Because there are tremendous differences among transition countries in Central and Eastern Europe and the former Soviet Union as to the size of their economies and populations, their political situations, their ethnic compositions, and a host of other variables, the lessons of Poland have differing applicability to each of the other transition countries. Nevertheless, there are a number of lessons learned from Poland’s restructuring efforts in several key areas that, at a minimum, merit consideration by the other transition countries and those involved in assisting these countries. 
Two such lessons involve Poland’s early efforts to stabilize and liberalize its economy. The first is that Poland’s own efforts in coupling tough reform measures with consistent macroeconomic policy over several years were critical to the country’s current economic recovery. The second is that some of the most important forms of donor assistance provided in support of Poland’s transition were those that backed Poland’s early macroeconomic stabilization and liberalization measures. The Polish government took a wide range of actions, including cutting subsidies to industry and households, tightening monetary policy, devaluing the currency, liberalizing prices, establishing a free-trade regime, and liberalizing the legal requirements for setting up private enterprises. Donors supported these measures in the form of the Polish Stabilization Fund, balance of payments support, and debt restructuring and forgiveness. By creating the basic operating features of a market economy, Poland effectively set the stage for further economic restructuring and integration into the world economy. The country is now experiencing healthy economic growth. While Poland’s economy is currently among the fastest growing in Europe, the country’s continued economic growth and integration into the world economy is widely considered to depend at least in part upon increased foreign trade and investment. Some of the most important factors for improvement in these areas require Polish or donor government actions beyond the confines of assistance. Poland has achieved dramatic increases in its exports to the West, and a number of U.S. and other foreign companies have recently made significant investments in the country. However, Poland continues to run a large trade deficit, trade barriers hamper its exports of certain products to the EU, and a number of obstacles continue to impede foreign investment. 
Some of the most persistent investment impediments, such as bureaucratic and tax uncertainties, demand the attention of the Polish government rather than of donors. On the other hand, further reducing EU trade barriers could help Poland increase its exports, diminish its trade deficit with the EU, and earn additional foreign exchange for further restructuring. In addition to its long-term importance for economic growth, foreign trade had a more immediate bearing on the success of early reform measures, and early liberalization of foreign trade played a critical role in helping state-owned enterprises adapt to market conditions. Poland liberalized its foreign trade regime as an element of the country’s early stabilization and liberalization measures. By doing so, Poland subjected its state sector to foreign competition and introduced international relative prices that monopolistic Polish firms would not have produced on their own, even after domestic prices were liberalized. The opening of the economy to international competition, the removal of state subsidies, and the discontinuance of central bank soft money policies encouraged some state-owned enterprises to become more cost conscious and search out market opportunities. Poland’s experience in creating market conditions suggests another important lesson—that encouraging the early development of a dynamic private sector is at least as important as the timing for undertaking large-scale privatization. While the pace of privatization for Poland’s larger state-owned enterprises has been slower than expected, this slow progress has been offset by the success of the country’s private sector. Poland’s early measures to remove the state from large-scale detailed direction of the economy and to provide an environment conducive to private sector development resulted in a rapidly growing private sector. Many new businesses have emerged and a large number of small- and medium-sized retail businesses have been privatized.
Poland’s private sector is now the primary source of the country’s economic growth and a substantial base of Polish employment. Notwithstanding the success of Poland’s private sector, significant portions of Polish productive capacity and employment remain in the hands of the government. Some are concerned that without the certainty of the eventual privatization of larger state-owned enterprises, such firms might not continue restructuring but instead might lobby the government to reinstate subsidies. Donors have actively supported Poland’s efforts to restructure enterprises and implement the country’s Mass Privatization Program despite waning public and governmental support. However, donors are concerned about continued delays in implementing the program. If the Polish government fails to follow through on its promise to move forward on the Mass Privatization Program in 1995, donor support for the program may erode. Poland’s experience in restructuring its banking sector offers some additional lessons. Although several major problems remain in this area, the country’s banking sector has undergone fundamental changes since the beginning of reforms in 1989. Nevertheless, even when faced with hard budget constraints and other market reforms that included curtailed government-to-industry subsidies, many state-owned enterprises were able to circumvent the constraints and continue financing loss-making operations through their relationships with state-owned banks. When the profitability of state-owned firms deteriorated following the collapse of Poland’s trade with its former CMEA partners, many such firms relied increasingly on debt to finance operations while their ability to service such debt diminished. The state-owned banks reacted by continuing to lend, rolling over credits, and in many cases capitalizing unpaid interest, contributing to a high percentage of problem loans and technical insolvency.
Poland’s experience in restructuring its banking sector shows that donors were able to play a useful role in supporting the country’s reform efforts in this area. Multilateral and bilateral donors provided strong support for Polish efforts to recapitalize and restructure the problem loan portfolios of state-owned banks, and considerable progress has been made under this plan. In addition to recapitalizing the banks, the program has contributed to the restructuring of many of the indebted state-owned enterprises. Donor technical assistance has also been useful in the banking sector. Although a great deal of economic restructuring remains to be done in Poland, the country has made impressive progress toward the goal of transforming its economy into a full-fledged, market-oriented system. Poland’s task would certainly have been more difficult without donor support in certain key areas; however, donor assistance is not a guarantee of success. Without Poland’s consistent commitment to reforms, its determination to take early and decisive reform actions, and its persistence in building the basic institutions and legal infrastructure required for a functioning market economy, the country’s progress could not have been as substantial. Poland’s experience suggests that the ultimate success or failure of reform efforts is far more dependent upon the actions of the transition country than it is upon those of outside participants.
GAO reviewed economic restructuring and donor assistance in Poland, focusing on: (1) the status of Poland's economic restructuring efforts in the areas of macroeconomic stabilization, foreign trade and investment, privatization, and banking; (2) impediments to these restructuring efforts; (3) the role donors have played in the transformation process; and (4) lessons learned that could be useful to other transition countries. GAO found that: (1) Poland has made major progress in stabilizing and restructuring its economy and has one of Europe's fastest growing economies but is still struggling to overcome relatively high rates of inflation and unemployment; (2) the International Monetary Fund and other major donors played an important role in the early stages of the reform process by requiring Poland to adopt tough macroeconomic reforms in return for receiving substantial donor assistance, but Poland's efforts to implement tough reform measures and apply consistent macroeconomic policy have been critical factors in the country's economic recovery; (3) Poland has achieved significant increases in its exports to the West, and a number of foreign companies have made significant investments there; (4) trade barriers hamper Poland's exports of certain products to the European Union, and internal obstacles continue to impede foreign investment; (5) donor assistance has had only a marginal impact in facilitating trade and investment, and some of the most essential improvements require Polish government or donor actions beyond the confines of assistance programs, such as removing bureaucratic and tax obstacles to foreign investment and making markets more accessible to Polish exports; (6) progress toward privatizing Poland's economy has been mixed: economic reforms have resulted in a rapidly growing private sector, but significant portions of the economy remain in the hands of the government; (7) the United States and other donors are actively supporting Poland's efforts to restructure
enterprises and implement its Mass Privatization Program, but persistent delays threaten continued donor support; (8) Poland has fundamentally reformed its banking sector, but several major problems remain, including delays in bank privatizations, unclear policies regarding the licensing of foreign banks, and inadequate banking expertise and bank supervision skills; (9) donors provided key financial support for recapitalizing the state-owned banks and restructuring their problem loan portfolios; (10) some problems with donor technical assistance were encountered but have been resolved, and donors are now addressing some of the sector's more important remaining needs, such as the need for improved banker training and enhanced bank supervision; (11) while the situations of other transition countries vary greatly, Poland's experience offers lessons that merit consideration by countries such as Russia, Ukraine, and others not as far along the reform path; and (12) the lessons suggest that, while donor assistance can be important in supporting economic restructuring efforts in certain key areas, the ultimate success or failure is more dependent on the actions of the transition country than those of outside participants.
DOE relies on contractor organizations to manage, operate, maintain, and provide support to its environmental cleanup and science and energy research at government-owned facilities. Contractors at environmental cleanup sites direct remediation efforts for radioactive and hazardous waste contamination generated during former nuclear weapons research and production activities. The eventual completion of cleanup activities at individual contractor locations without the existence of other ongoing operations, called site completion or site closure, generally leads to a transition into long-term stewardship activities, such as monitoring and surveillance, that require significantly fewer resources. Contractors at research sites complete a variety of ongoing research and development activities at national laboratories and universities. Contractors at cleanup and research sites may sponsor and pay pension and postretirement health benefits, collectively called postretirement benefits, for employees providing service under DOE contracts in order to attract, motivate, and retain qualified individuals to assist the agency in carrying out its mission. Contractors administer postretirement benefits for these employees either by establishing separate benefit plans solely for these individuals or by arranging for their participation in existing corporate plans, where contractor employees at DOE sites and those contractor employees assigned to non-DOE work participate in the same benefit plans. DOE reimburses contractor payments for employee compensation, including postretirement benefits, as authorized by applicable regulations and each contractor’s operating agreement. For example, the Federal Acquisition Regulation (FAR) establishes uniform policies and procedures for the acquisition of goods and services by executive agencies.
The FAR cost principles include factors to be considered by an agency when determining whether a contractor-claimed cost is to be allowed and reimbursed. Generally, compensation costs incurred under a government contract with a commercial organization are allowable if they are, among other things, reasonable, allocable, and compliant with other applicable standards and the terms of the contract. For fiscal year 2003, DOE reimbursed approximately $431 million in contractor postretirement benefit contributions at 39 different DOE contractor sites. Contractor employees qualify for retiree benefits in pension and postretirement health plans differently, resulting in different methodologies for the payment of the two types of benefits. Pension benefits are determined using a formula based on employee salary and years of service as specified by contractor plan provisions. Employees accrue, or earn, future pension benefits throughout their period of service and are generally required to work for a certain period, called a vesting period, before they have a right to receive any accrued retirement benefits. DOE contractors that offer defined benefit pension plans are subject to the minimum funding standards established by the Employee Retirement Income Security Act of 1974 (ERISA). ERISA sets minimum standards for how much contractors must set aside each year to provide for future defined benefit pension payments when they come due. In contrast to contributions for pension benefits, there are no legal requirements to fund postretirement health benefits in advance for payments to retirees. Therefore, DOE contractors generally pay for postretirement health plans on a pay-as-you-go basis. Retired contractor employees are usually entitled to participate in contractor health plans after they complete a period of service immediately prior to their retirement.
However, unlike pension benefits, the future amount of postretirement health benefits earned by a contractor employee cannot be expressly defined at the employee’s retirement date. This is due, in part, to the potential for future contractor changes in benefit provisions, such as retiree contributions, copayments, and coverage limitations, or cancellation of postretirement health coverage. Figure 1 summarizes the previously discussed relationships between DOE, contractors, third-party administrators, and contractor employees in the payment, sponsorship, and delivery of postretirement benefits. Consistent with the long-term nature of DOE research and cleanup activities, it is DOE’s policy to provide for the continuation of postretirement benefit plans when there are changes in individual contractors due to contract competitions. Typically, these scenarios would not result in the need for the cancellation and re-creation of these benefit plans. Although future contractor employee benefits earned may change as a result of contract negotiations, DOE attempts to continue the existing benefit plan with the new contractor as the sponsor, or offer comparable benefits in a successor contractor benefit plan during changes in contractors. Under this scenario, prior contractor retirees continue receiving benefits from the new plan sponsor and current employees continue to accrue benefits according to existing plan provisions. It is also DOE’s policy to facilitate the continuation of postretirement benefits following the completion of activities at environmental cleanup sites. DOE officials stated that an agency review of contracts, benefit plan documents, and labor agreements determined that contractor postretirement plans set forth the terms of an exchange between the contractor and contractor employees. 
In exchange for current services, contractors provide benefits after retirement (i.e., monthly pension payments and payments toward postretirement health insurance premiums) as defined by the terms of the postretirement benefit plans. DOE officials also stated that the continuation of pension and postretirement health benefits is necessary to reward former contractor employees for prior service at cleanup sites and to attract and retain future contractors and contractor employees to work at remaining cleanup sites. The completion of all contractor activities at environmental cleanup sites generally results in either the termination of the prime contract or a significant reduction in the scope of the outstanding contract. These contract changes at site closure differ from a change in contractor at an ongoing site because retirees who earned postretirement benefits under the terms of prior contracts are left without an active contractor to administer future benefit payments. It is DOE’s policy in these situations that future postretirement benefits earned by contractor employees may be satisfied by the outgoing contractor in one of two ways. Under the first option, the contractor can request reimbursement from DOE for the immediate settlement of outstanding benefit obligations, such as through the purchase of insurance contracts. Under the second option, the contractor may facilitate the continuation of the current benefit program and seek DOE reimbursement as postretirement benefit payments are made to retirees. The outgoing contractor can achieve the latter option through continuing to sponsor current postretirement benefit plans or through the transfer of plan administration to another party. This report refers to those benefits due and paid after site closure as post-closure benefits. 
In 1996, DOE issued Order 350.1 to establish responsibilities, requirements, and further cost allowability criteria for the management and oversight of contractor compensation programs. The order provides that contracting officers are largely responsible for the review and approval of allowable contractor compensation costs. It also details procedures for the management and oversight of postretirement benefits, such as the approval of new postretirement benefit plans, the approval of changes made to existing plans, and required procedures during contract and postretirement benefit plan terminations. The department's Contractor Human Resources Management Division (CHRM) is responsible for providing contracting officers with policies and procedures for managing contractor postretirement benefit costs under the provisions of DOE Order 350.1. DOE's Office of Procurement Assistance Management (OPAM) establishes overall performance objectives for contractor compensation programs and approves changes to pension and postretirement health benefit plans in excess of contracting officer authorization limits. The National Nuclear Security Administration (NNSA) assumes these responsibilities at current naval reactor sites and assists in the review of contractor compensation programs at other NNSA-designated locations. DOE Order 350.1 requires contractors to complete a recurring evaluation of their employee benefit programs, including pension and postretirement health plans, against the benefit programs of labor competitors in the private sector or other professionally recognized measures. These evaluations are intended to aid contracting officers in assessing contractor benefit costs against the reasonableness standards of the FAR. Specifically, DOE Order 350.1 states that contractors may use either the results from (1) a benefit value study or (2) the annual U.S. 
Chamber of Commerce Employee Benefit Study, collectively called comparison studies in this report, to perform an appropriate evaluation of their benefit programs. Benefit value studies are intended to measure the relative worth of a contractor's benefit programs to its employees. This is done through the calculation of a replacement value for the benefits offered in the contractor's benefit program. Replacement values that may differ among employees, such as those reflecting the use and extent of current employee health benefits, are calculated through the use of a hypothetical group of employees. This methodology allows comparisons between the provisions of benefit programs with different demographics, turnover and retirement rates, and benefit election patterns. Replacement values are also calculated for selected labor market competitors of the contractor and compared to the contractor replacement values. DOE contractors engage benefits consulting companies to assist with the benefit value studies and work with contracting officers to approve the methodologies used. Replacement values are found for each benefit component evaluated in the study and used to develop an overall benefit program index for that contractor. The final product of the benefit value study, called the net benefit value index, compares the relative value of the contractor's employer-paid benefits to the employer-paid value of the average labor competitor's benefits, represented by an index of 100. Therefore, a contractor with a net benefit value index of 107.0 offers benefits to its employees with a replacement value that is 7 percent above the average of the contractor's labor competitors. As mentioned, the benefit value studies also create separate indexes for major individual benefit components, such as pension benefits and vacation time. The U.S. 
Chamber of Commerce Employee Benefit Study, or Chamber of Commerce cost study, provides a comparison of the annual employee benefit contributions and payments made by the contractor with the average contributions and payments of a survey population. The U.S. Chamber of Commerce Employee Benefit Study is an annual polling of domestic employers conducted by the U.S. Chamber of Commerce’s Statistics and Research Center and sponsored by American International Group, Inc. The survey publishes information on average employer benefit contributions and payments per full-time employee made during the preceding year and the percentage of total employer payroll spent on employee benefits. To analyze the agency’s estimated financial liability for contractor employee pension and postretirement health obligations, we obtained audited financial reports and disclosures on contractor employee postretirement benefit obligations for fiscal years 1999 through 2003, interviewed DOE officials from the Office of Finance and Accounting Policy and CHRM regarding the character of obligations at DOE research and cleanup sites, reviewed actuarial computations of DOE contractor benefit obligations to determine how obligations at cleanup sites were adjusted for expected site closure dates, and interviewed DOE officials from the Office of Finance and Accounting Policy and the Office of General Counsel regarding the agency’s liability with respect to contractor post-closure benefits. The calculation of financial liabilities for postretirement benefits earned by contractor employees involves the use of significant economic and demographic assumptions under the guidance of Statement of Financial Accounting Standards (SFAS) No. 87, Employers’ Accounting for Pensions, and SFAS No. 106, Employers’ Accounting for Postretirement Benefits Other Than Pensions. 
It was not our intent to assess, nor did we independently assess, the reasonableness of the assumptions used in the financial calculations or the accuracy of contractor data used in the calculations. For fiscal years 1999 through 2003, DOE's financial statements, including estimates of contractor postretirement benefits, were audited by either independent public accountants or its IG. For each of these years, the auditing entity determined that DOE's financial statements presented fairly, in all material respects, the financial position of the agency. To determine how DOE evaluates its contractor postretirement benefit programs and compares the benefits offered by DOE contractors with private industry benchmarks, we reviewed DOE Order 350.1 and other agency policy and procedure guidance related to the completion of contractor comparison studies, interviewed DOE officials from CHRM to determine procedures used to assess the quality of the contractor comparison studies, obtained and analyzed the most recent comparison studies completed by DOE contractors for all locations subject to the valuation provisions of DOE Order 350.1, and reviewed the most recent comparison studies for all locations subject to the DOE Order 350.1 valuation provisions for compliance with DOE policies and procedures. We reviewed contractor comparison studies for compliance with key controls in DOE's policies and procedures designed to provide reasonable assurance over the validity of the study results, including (1) timely completion and inclusion of major benefit components; (2) presence of recommended certifications to attest to the accuracy, relevance, and consistency of the data used in the study; (3) development of benchmark information through the selection of labor competitors and the use of up-to-date data for the competitors selected; and (4) calculation of desired (either required or recommended) performance measures. 
We summarized the results of these procedures in this report and communicated the detailed results of our testing to DOE officials. It was not our intent to verify, nor would we have been able to independently verify, the accuracy of actuarial calculations, assumptions, or competitor data used in the comparison studies due to the proprietary nature of benefits consulting firm databases used to conduct the studies. However, we confirmed that DOE requirements regarding the completion of these studies by national consulting groups with annual consulting revenues in excess of $5 million were met for all benefit value studies reviewed. We also did not independently assess the validity of the data supplied by DOE contractors for use in the comparison studies. To assess DOE’s oversight of its contractors’ pension and postretirement health benefit programs, we reviewed the FAR and other applicable standards related to allowable pension and postretirement health costs under contracts with commercial organizations; determined applicable internal control procedures for DOE’s contractor benefits program using our Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool; reviewed related DOE policy and procedure guidance and interviewed DOE officials regarding procedures for overseeing contractor benefit programs in existence through the end of fiscal year 2003; and reviewed contractor locations subject to the provisions of DOE Order 350.1 for compliance with DOE policies and procedures related to the review of changes to contractor postretirement benefit programs. In addition, we reviewed contractor operations and the oversight of contractor postretirement benefit programs at several federal agencies to determine whether contractor benefit programs at these agencies were comparable to those at DOE. 
We determined that the oversight of contractor benefit programs at the Department of Defense (DOD) was comparable, in some respects, to oversight at DOE and interviewed DOD officials to gain an understanding of that agency’s procedures and the differences between DOE and DOD contractor operations. In DOE’s fiscal year 2003 Performance and Accountability Report, the agency reported that the present value of estimated contractor postretirement and pension benefits that were unfunded as of September 30, 2003, totaled $13.4 billion. The unfunded balance of these deferred benefits has increased significantly over the past 4 fiscal years due to various operating and economic factors. An increasing portion of the future unfunded balance will relate to estimated pension and postretirement health obligations at completed or near-completed environmental cleanup sites. The expected magnitude of these benefits at site closure will require DOE to meet significant future budgetary and administrative challenges to facilitate the future payment of these benefits. DOE reimburses allowable contractor costs for employee postretirement benefits and records estimates of these future benefit payments in its financial accounting statements. The agency reported an estimated present value of $13.4 billion for pension and postretirement health benefits that have been earned by contractor employees under current postretirement benefit plan provisions but were unfunded as of September 30, 2003. This figure, also called the funded status, is an actuarial estimate of future postretirement benefits attributed to contractor employee service rendered prior to the measurement date less the fair market value of accumulated assets dedicated to the payment of the obligation. 
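As a minimal, purely illustrative sketch of the funded-status arithmetic just described (invented cash flows and a single flat discount rate stand in for the full set of SFAS No. 87 and No. 106 actuarial assumptions):

```python
def present_value(cash_flows, discount_rate):
    """Discount a stream of expected future benefit payments;
    cash_flows[t] is the payment expected t + 1 years out."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# Hypothetical benefits already earned: $100 million a year for 30 years.
expected_payments = [100.0] * 30
obligation = present_value(expected_payments, 0.06)   # about 1,376.5 ($M)

plan_assets = 1_000.0          # fair market value of dedicated assets ($M)
funded_status = plan_assets - obligation   # about -376.5 ($M): an unfunded position
```

Note that a lower discount rate raises the computed obligation, which is one reason the declining discount rates cited in this report contribute to a deteriorating funded status.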
The calculation of financial accounting estimates involves the use of significant actuarial, demographic, and economic assumptions, including, among other things, future interest rates, health care cost trends, salary increases, and life expectancies of eligible retirees (and their survivors). Also, the estimation is inherently difficult because benefits earned by current contractor employees are deferred until retirement and the actual payment of these benefits may not occur for decades. The combined funded status for contractor pension and postretirement health benefits has changed from a $3.6 billion overfunded position in 1999 to a $13.4 billion unfunded position in 2003. There are several significant reasons for this deterioration in funded status over the last 4 fiscal years, including negative pension asset returns, declining discount rates over the past 3 fiscal years, and increasing trends in estimated postretirement health care costs. Table 1 summarizes the funded status for pension and postretirement health benefits for the last 5 fiscal year-ends as reported by DOE. In general, deterioration in the funded status of postretirement health benefits can be attributed to the excess of future benefits earned by current contractor employees, known as service costs, plus interest costs on outstanding obligations over the payments made to retirees to satisfy previously earned benefits. Postretirement health benefit service costs plus interest costs have ranged from 2.3 times to 2.5 times the payments to retirees made in each of the past 5 fiscal years. The significant increases in recent retiree health benefit costs, decreases in discount rates, and continuing accrual of postretirement benefits in existing contractor plans all affect the service and interest costs of contractor postretirement health plans, although we did not determine to what extent each of these individual factors affected the total funded status. 
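The service-cost-plus-interest dynamic described above can be sketched with a simple year-by-year roll-forward of a pay-as-you-go obligation (all figures hypothetical; actual valuations also re-measure assumptions and plan experience annually):

```python
def roll_forward(obligation, service_cost, discount_rate, benefits_paid):
    """One year's change in a postretirement health obligation: it grows
    by benefits newly earned (service cost) and by interest on the
    outstanding balance, and shrinks by payments made to retirees."""
    interest_cost = discount_rate * obligation
    return obligation + service_cost + interest_cost - benefits_paid

obligation = 10_000.0   # hypothetical starting obligation ($M)
for _ in range(5):
    # Service cost (400) plus interest (600 in year one) is 2.5 times
    # the 400 paid to retirees, so the obligation grows every year.
    obligation = roll_forward(obligation, service_cost=400.0,
                              discount_rate=0.06, benefits_paid=400.0)
print(round(obligation, 1))  # 13382.3
```

Whenever service plus interest costs exceed the pay-as-you-go payments, as the report notes they did by a factor of 2.3 to 2.5, the unfunded balance compounds.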
Annual changes in the funded status of pension plans, unlike changes in the funded status of postretirement health plans, can be significantly affected by returns on dedicated pension assets. Contributions to pension plans are generally held in trust for the payment of benefits to participants and their beneficiaries. Plan trustees, usually banks or trust companies, make investment decisions for the plan with these assets. Contractor pension assets have, on average, experienced negative returns from 7 percent to 8 percent in each of the past 5 fiscal years. Negative asset returns decrease the fair market value of accumulated pension assets and therefore significantly contribute to changes in the funded status of pension benefits. However, because of current DOE policies, neither the current unfunded position nor the significant recent changes in funded status results in a requirement for contractors, or DOE, to make any additional annual postretirement benefit contributions. DOE Order 350.1 provides that in general, annual contractor contributions for pension benefits shall not exceed the minimum contribution required by ERISA. The order also provides that postretirement health benefits are paid using a pay-as-you-go method unless otherwise required by state or federal statute. See table 2 for pension contributions and postretirement health payments reimbursed by DOE over the last 5 fiscal years. However, certain contractors may face higher short-term pension contributions because minimum contributions calculated under ERISA rules factor in both current service costs and outstanding obligations. In any case, the reported $13.4 billion unfunded balance will, eventually, require additional contributions, investment gains, or favorable benefit experience within existing pension and postretirement health plans in order to satisfy future benefits when they come due. 
While DOE fiscal year 2003 reimbursements of postretirement benefits to contractors administering benefits following site closure totaled only approximately $6 million, future amounts will significantly increase with continuing environmental site closures. DOE has indicated that the agency is scheduled to close several environmental cleanup sites within the next few years. Contractor employee postretirement benefits at these sites had total unfunded balances in excess of $1.5 billion as of September 30, 2003. DOE Order 350.1 provides that when operations at a DOE facility are terminated and no further work is to be completed, pension and postretirement health benefit continuation will be provided for those contractor employees who earned retirement benefits in these plans. Consistent with DOE Order 350.1, contract language at anticipated closure sites (such as Fernald and Rocky Flats) indicates that the DOE contracting officer will designate and communicate the method of benefit continuation within the final 6 months of the contract and may direct any of a number of potential means of doing so, including, but not limited to, (1) termination and settlement of the plans in accordance with relevant laws and regulations, (2) continuation of the plans on a pay-as-you-go basis under a separate contract with the contractor, or (3) transfer of plan responsibilities to another contractor or third party. In conjunction with a site closure, the contractor may submit a claim, called a settlement proposal, for the final calculation of estimated postretirement benefits earned by contractor employees. The reimbursement of these costs would allow the contractor, generally through the purchase of insurance contracts, to complete the payment of future pension and postretirement health benefits without further DOE reimbursement. 
The ability of DOE to honor these claims largely depends on DOE's available financial resources compared to the total settlement costs involved in the satisfaction of outstanding postretirement benefits. According to DOE officials, DOE has recently considered several options to avoid postretirement benefit settlements because the reimbursement of contractors for the purchase of annuity contracts and future health benefit payments involves significant costs above the calculated settlement amount. Because of the budgetary resources required to settle postretirement benefits at completed cleanup sites, DOE officials anticipate continuing the annual reimbursement of benefit payments by extending contracts with cleanup site contractors in some cases, solely to administer the benefits, thereby preserving the contractor relationship as the plan sponsor. The continuation of these benefits creates specific challenges for DOE, including the following:

- DOE currently attempts to pass the administrative responsibilities for the continuation of post-closure employee benefits to existing contractors. However, as the number of contractors with existing cleanup operations diminishes with additional site closures, DOE must either continue relationships with former contractors, many of which were created only to facilitate a site closure, or transfer responsibilities to another party.

- Even though contractor postretirement benefits are earned during previous employment periods, DOE will require continuing appropriations in order to reimburse contractors for the payment of postretirement benefits to former contractor retirees and other beneficiaries. DOE officials estimate that the post-closure obligations may extend through 2075.

- The continuation of postretirement benefits through another contractor or a third party requires DOE to pay for the allowable administrative expenses of these activities.

- The continuation of postretirement benefits requires DOE to monitor and evaluate the ongoing contractor reimbursement for post-closure benefit payments and any changes in those benefit programs made by the contractor.

In response to these challenges, DOE announced plans in 2003 to establish an Office of Legacy Management to address the long-term management of former cleanup site contractor obligations. According to agency officials, a key mission of the Office of Legacy Management is to ensure the quality of service and continuity of former contractor employees' pension and medical benefits. The office is planning a comprehensive approach to fulfill the agency's pension and postretirement health obligations at current and future closure sites. DOE Order 350.1 generally requires that contractors periodically complete self-assessments of major nonstatutory benefit programs against professionally recognized measures. The most recent contractor comparison studies report that average contractor benefits are 0.2 percent below the value of selected labor competitors. However, a significant number of contractor locations are not subject to the valuation provisions of DOE Order 350.1, or otherwise do not complete them. In cases where DOE Order 350.1 does not apply, alternative procedures are performed by DOE personnel; however, the procedures are inconsistent among contractor locations and are limited at completed, or near-completed, cleanup sites. We also found that the comparison studies that were completed under DOE Order 350.1 often did not conform to existing DOE policies and recommended procedures. Each DOE contractor subject to the self-assessment provisions of DOE Order 350.1 is to periodically complete a comparison study evaluating its benefit programs against external benchmarks. 
This evaluation of contractor benefits may take the form of either a benefit value study, which measures the relative replacement cost of employer-paid benefits against the benefits offered by a group of selected labor competitors, or a cost study, which measures the annual relative per capita benefit cost against companies surveyed by the U.S. Chamber of Commerce. The results of the comparison studies allow DOE contracting officers to measure the competitiveness of contractor benefit programs in the labor market and to assess contractor benefit program costs for reasonableness under applicable regulations and contract provisions. Table 3 summarizes the reported results from the most recent contractor comparison studies completed. The reported results of the contractor comparison studies suggest that DOE has been fairly successful in achieving its goal of limiting the total value of contractor benefits to no more than 5 percent above the average total value of the contractor's labor competitors at each location. As shown in table 3, only 5 of 21 studies have a benefits value of more than 105, and the average contractor benefits value is 0.2 percent below the employer-paid benefits level of selected study competitors. The reported results range from 29 percent below competitor averages to 48 percent above those averages; however, at 16 of 21 contractor locations, the reported benefits value falls between 90 and 110, or from 10 percent below to 10 percent above labor competitor averages. As discussed later in this report, contractor nonconformance with DOE guidance on the completion of these studies raises questions about the validity of the comparison study results. A significant number of DOE contractors, and the postretirement benefits they offer, are not subject to the comparison study provisions of DOE Order 350.1. 
Contractors with postretirement benefits (1) offered in corporate plans, (2) reimbursed under support contracts, and (3) provided for employees at naval reactor sites are exempted from the requirements. In addition, the studies were not performed at six contractor sites that were closed, or nearing completion. DOE reimbursements of postretirement benefits at sites at which comparison studies were not completed accounted for $105 million of the $431 million, or 24 percent, in total contractor contributions made for contractor postretirement benefit programs in fiscal year 2003. Figure 2 illustrates DOE reimbursements for postretirement benefits made for fiscal year 2003 according to whether the contractor location is subject to the comparison study provisions and the reasons DOE officials provided for their exclusion. DOE officials complete alternative monitoring procedures at some locations where DOE Order 350.1 comparison studies are not required or otherwise completed. Examples of these procedures include reviews of benefit payment invoices, comparisons to other DOE contractor programs, and review of annual actuarial calculations. CHRM also periodically completes valuation and cost reviews at various contractor sites. CHRM procedures include reviews of contractors’ actual incurred costs for benefits and wages; actuarial valuation and accounting reports; and various annual pension plan reviews, such as salary replacement, plan investment, and cash flow requirement analysis. However, at completed or near-completed cleanup sites we found that DOE officials did not complete comparison studies and completed limited alternative procedures to assess the reasonableness of continuing pension and postretirement health payments at these locations. 
According to DOE officials, significant reasons for the absence of comparison studies for post-closure benefits include the lack of resources to perform the studies at former contractor sites that are nearing completion and the fact that three DOE sites were closed before the provisions of DOE Order 350.1 became applicable. Reimbursements at these locations in fiscal year 2003 totaled $31 million and, as previously mentioned, the postretirement benefits paid at closed locations are anticipated to increase as additional closure sites are completed. DOE Order 350.1 requires certain processes and procedures for completing the previously discussed comparison studies. In addition, DOE’s Value Study Desk Manual describes recommended methodologies for the completion of a benefit value study. Collectively, the procedures and methodology outlined in DOE Order 350.1 and the Value Study Desk Manual are intended to provide reasonable assurance that the comparison studies result in valid, reliable, and comparable information regarding the benefits offered by DOE contractors. To assess the studies completed by DOE contractors, we selected 12 significant provisions from DOE Order 350.1 and the Value Study Desk Manual and reviewed the most recently completed contractor studies for conformance with these provisions. Our review encompassed all 21 contractor sites subject to the comparison study provisions of DOE Order 350.1 (18 completed benefit value studies and 3 completed Chamber of Commerce cost studies). Based on our review of the studies performed at contractor sites subject to the valuation provisions of DOE Order 350.1 and the Value Study Desk Manual, we found one or more instances of nonconformance with required or recommended comparison study procedures at 18 of the 21 contractor sites. 
In summary, we found instances of nonconformance with guidance in the following areas:

- Contractors did not follow applicable provisions for selecting and documenting comparators used in the development of a benefit value index (11 of 18 sites completing benefit value studies).

- Contractors did not use the recommended methodologies to calculate the results of the comparison study (10 of 21 sites completing benefit value or cost studies).

- Contracting officers did not obtain recommended certifications from contractors and actuarial consultants to verify data used in the benefit value studies (16 of 18 sites completing benefit value studies).

Since the results of the benefit value comparison studies are sensitive to the selection of a comparator group, DOE Order 350.1 and the Value Study Desk Manual provide that the comparator group include at least 15 participants, only 20 percent of which can be other DOE contractor sites that compete for professional level staff. Our review determined that 11 out of 18 contractors did not properly select comparator firms or maintain documentation on comparators in accordance with recommended procedures in the Value Study Desk Manual. Although DOE policies also require contracting officers to review and approve the contractor comparator group prior to the completion of the benefit value study, several contractors were not in compliance with this agency procedure because they did not provide the specific documentation recommended by the Value Study Desk Manual. This situation may result in inconsistent selection criteria for comparators among contractor studies. DOE Order 350.1 requires contractor comparison studies to generate appropriate comparison statistics. The Value Study Desk Manual recommends that benefit value studies calculate the contractor's total employer-paid net benefit value using a comparison to the average total (e.g., the mean) net benefit value for the comparator group. 
DOE Order 350.1 requires Chamber of Commerce cost studies to calculate the contractor's actual per capita benefits cost per employee compared to the most recently published survey from the same benefit year. Our review found that 10 out of the 21 contractor sites did not calculate the desired performance measure as required or recommended by DOE guidance. In several cases, we found that the contractor total benefit value index was computed based on the median, not the mean, of competitor replacement values. We also found that separate performance measures were presented for employee groups with tiered benefits without any indication of the total cost distribution between the groups. The failure to calculate consistent comparison study results makes it difficult for agency officials to compare results among sites and correctly determine whether corrective action plans are required. The Value Study Desk Manual also recommends that the assigned contracting officers obtain certifications from both the contractor and the benefits consulting group performing the comparison studies to verify the accuracy, consistency, and validity of comparisons completed. The certifications are key controls over the quality of the studies. For example, they would alert contracting officers if the contractor changed comparator firms, valuation methodologies, or assumptions, or was unable to obtain up-to-date competitor benefit data. Our review determined that 16 out of 18 contractors that completed a benefit value study did not submit the contractor and actuarial certifications at the completion of the study. The absence of these certifications can result in the improper interpretation of the comparison study results by contracting officers.
DOE could enhance its oversight of contractor employee benefits and address the challenges posed by the future administration of significant post-closure benefits by providing for greater management review of information developed at individual contractor sites and incorporating a focus on the long-term nature of pension and postretirement health benefits. The limited review of post-closure benefit payments completed by contracting officers at closed sites may make the continued decentralization of benefit program monitoring impractical. Also, the 70-year anticipated duration for some DOE reimbursements of contractor employee pension and postretirement health costs earned to date needs additional consideration in DOE’s evaluations of contractor benefit costs. DOE contracting officers are primarily responsible for determining the allowability of DOE contractor employee benefit costs and administering the benefits. Accordingly, DOE’s current monitoring and risk assessment process is largely performed by contracting officers who are responsible for reviewing benefit programs at one contractor site. Contracting officers have the ability to seek technical advice and policy support from various DOE resources, including CHRM, OPAM, and NNSA. DOE also maintains a Memorandum of Understanding with DOD agency offices to provide audit services. These management offices offer, as needed or requested, various issue- or location-specific monitoring activities; however, they do not routinely review the results of the monitoring and risk assessment activities of the contracting officers. Thus, agencywide information regarding nonconformance with guidelines for contractor employee benefit program assessments is not routinely analyzed by management so that corrective actions can be taken. Similarly, best practices are not routinely identified at individual contractor sites and propagated across the agency. 
Also, dissimilarities in benefit programs between contractor locations can create pressure to raise benefits across the contractor benefits program as a whole. DOE recently approved proposals submitted by contractor employee groups at two DOE sites to enhance each group's pension benefits so they would be comparable with the pension benefits at another DOE site. The agency approved these benefit enhancements largely based on the argument that doing so would retain skilled staff, even though the most recent contractor benefit value studies indicated that these sites already had pension and postretirement health benefit replacement values exceeding average labor competitor programs. The fact that some sites have closed, and others are nearing completion, also suggests the need for more management attention to program reviews. We found that contracting officers at several closed or nearly completed environmental sites did not perform comparison studies under the provisions of DOE Order 350.1 or complete other substantive monitoring procedures. The failure to do so was attributed to a lack of resources. We believe that transitioning these monitoring and risk assessment procedures to a management level that will still exist after site closure would better position DOE to address future challenges. Systematic monitoring reviews and risk assessments will be necessary for post-closure benefits since DOE officials contend that (1) current contractor pension and postretirement health plan provisions allow for changes in postretirement benefits subsequent to site closure and (2) post-closure benefit payments remain subject to compliance with DOE's guidance for comparison studies and applicable regulations, such as the cost reasonableness provisions of the FAR.
Although the agency resources required to monitor DOD's contractor benefits program are significantly greater than those needed at DOE, the organizational structure at DOD provides an example of an oversight group used to assist in compliance reviews and risk assessment at all contractor locations. DOD provides contracting officers significant operational support from the Defense Contract Audit Agency (DCAA) and the Defense Contract Management Agency (DCMA). The two agencies provide a consistent source of routine review and analysis of detailed benefit and cost information outside of individual contractor locations. These agencies are thus able to gain broad knowledge of contractor issues and decisions and to apply a more consistent definition of reasonableness to the evaluation of contractor benefit costs. DOD also has formal guidance within the agency's supplement to the FAR, which lists occurrences in postretirement health or pension programs that indicate heightened risk and should lead a contracting officer to request a separate in-depth evaluation of the policies, practices, and costs of a contractor benefit component that is performed jointly by DCAA and DCMA staff. DOE's evaluation of total benefits in the benefit value study, rather than of individual benefit components, does not fully address the differences in costs between deferred benefit programs, such as pension and postretirement health benefits, and other benefit components. A management focus on the long-term impacts of contractor benefit program decisions may provide improved information for decision makers in DOE and Congress. This information is important because decisions on changes to pension and postretirement health benefits can have a significant impact on DOE's long-term budgetary needs.
For example, a 1 percent increase in a contractor employee’s current year vacation benefits has less impact on DOE’s long-term costs and budgetary needs than a 1 percent increase in postretirement pension or health benefits, which have a continuous and compounding effect as they are paid out in each year of retirement. Nevertheless, DOE contracting officers decide whether corrective action plans are needed largely based on the review of the total benefit value index, which does not take into account the differences between the total cost of pension and postretirement health benefits and other benefit components. These cost differences may be significant because pension and postretirement health benefits can require DOE reimbursement long after an employee retires. As shown in table 4, the benefit value indexes for contractors’ pension and postretirement health benefits are significantly different from the total benefits indexes shown in table 3. Both the pension and postretirement health benefit indexes have larger programwide averages, larger index ranges, and more contractors with benefit indexes outside of DOE’s target range of 5 percent above the average of selected competitors. For example, postretirement health benefits average more than 44 percent greater than the average of the DOE contractors’ competitors, while defined benefit pensions average 29 percent greater. In addition, DOE’s review of current pension contributions and postretirement health payments through the Chamber of Commerce cost studies completed by three contractor sites is not consistent with the long- term nature of pension and postretirement health benefits. This inconsistency is largely due to the fact that annual employer contributions for pension and health benefits generally do not equal the estimated amount of postretirement benefits earned by current employees that year, also called the annual service cost of benefits. 
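The budgetary asymmetry between a current-year benefit and a deferred benefit can be illustrated with a rough present-value comparison. All figures below are hypothetical and chosen only to show the mechanics; the salary, pension amount, retirement horizon, and discount rate are assumptions, not values from the report.

```python
def pv_annuity(annual_payment, years, rate):
    """Present value of a level annual payment received for `years` years,
    discounted at `rate` per year."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical employee: a 1 percent vacation increase is a one-time,
# current-year cost, while a 1 percent pension increase is paid in every
# year of a (here, 25-year) retirement.
salary = 80_000.0
annual_pension = 30_000.0
vacation_increase_cost = 0.01 * salary                     # 800.0, paid once
pension_increase_cost = pv_annuity(0.01 * annual_pension,  # ~4,228 in
                                   years=25, rate=0.05)    # present value

# Under these assumptions the pension bump costs over five times the
# equal-percentage vacation bump in present-value terms.
print(round(pension_increase_cost / vacation_increase_cost, 1))  # 5.3
```

The exact ratio depends entirely on the assumed horizon and discount rate; the point is only that an equal-percentage change in a deferred benefit compounds across every retirement year, while the current-year benefit does not.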
For example, DOE reimbursed $430 million in costs to its contractors for pension and health plan contributions in fiscal year 2003; however, the reported fiscal year 2003 service cost of those plans was $872 million. It is DOE's policy to evaluate contractor requests for changes to existing pension and postretirement health plans by reviewing total benefit values and annual contributions, rather than total costs. DOE Order 350.1 requires contractors to submit proposed changes to contractor postretirement benefit programs with information on the impact of the changes on existing comparison studies and anticipated changes in cost. However, the order does not differentiate the annual contractor contribution cost from the total future cost of the changes. For example, the determination to accept proposed changes by one contractor noted that the increase in pension liabilities caused by the changes would not result in additional short-term reimbursements by DOE due to the positive funded status of the plan. Furthermore, our review of changes made to contractor postretirement benefit plans during fiscal year 2002 revealed that 3 out of 11 contractors that submitted changes to DOE for approval did not include either the effect of the plan changes on comparison study results or an estimate of savings or costs. Paying and monitoring the postretirement contractor benefits earned under current and prior government contracts will require significant budgetary and administrative resources long after current research contracts end and cleanup sites close. Because DOE has excluded certain contractor locations from the requirement to complete periodic benefit valuation studies, it cannot apply a consistent evaluation of costs across all benefit programs.
Within programs required to complete comparison studies, instances of contractor nonconformance with policies and guidance make the results difficult to interpret and use in making management decisions regarding the level of program benefits. The challenges associated with administering post-closure benefits and a lack of focus on the long-term nature of postretirement pension and health benefit obligations exacerbate these problems. Formal management reviews that attempt to identify and correct areas of nonconformance, propagate best practices agencywide, and focus on long-term budgetary needs could improve DOE's oversight of the contractor employee postretirement benefits program. GAO recommends that the Secretary of Energy take the following four executive actions:
1. Institute systematic management review of pertinent data from each contractor location to enhance the consistency of benefit program evaluations and reduce the instances of nonconformance with the requirements of DOE Order 350.1 and other recommended procedures. The intent of the management review would be to correct areas of nonconformance, identify best practices, and disseminate this information across the agency.
2. Extend the comparison study requirements of DOE Order 350.1, to the extent practical, to all contractor locations with benefit obligations to provide better information about programwide contractor employee benefit costs.
3. In cases where the extension of the order is not practical, develop and perform appropriate alternative procedures to provide similar information.
4. Incorporate into DOE's oversight process a focus on the long-term costs and budgetary implications of decisions pertaining to each component of contractor benefit programs, especially pension and postretirement health benefits, that have budgetary requirements beyond the current year. This would augment the current consideration of total annual benefit costs.
We requested and received from DOE written comments on a draft of this report, which are reprinted in appendix III. In its comment letter, DOE noted that our findings were consistent with those of its own internal assessment and agreed with the report's four recommendations. DOE also provided us with technical comments, which we have incorporated as appropriate. Additionally, we requested oral comments from DOD on applicable report excerpts. DOD did not have any comments on the report. We are sending copies of this report to appropriate House and Senate committees; the Secretary of Energy; and the Director of the Office of Management and Budget. We will also make copies available to others upon request. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6131. You may reach me by e-mail at martinr@gao.gov. Contributors to this report are listed in appendix IV. Both DOE and DOD manage a large number of individual contracts and contractor operations. Both agencies also allow for the reimbursement of annual pension and postretirement health costs and have agency contracting officers who are responsible for reviewing these costs for compliance with applicable regulations. However, as shown in table 5, there are some underlying program differences that have an impact on the way the two agencies manage their contractor benefits. In addition to the individual named above, Sharon Byrd, Richard Cambosos, Lisa Crye, Frederick Evans, Darren Goode, Roger Thomas, and Scott Wrightson made key contributions to this report.
The Department of Energy (DOE), which carries out its national security, environmental cleanup, and research missions through extensive use of contractors, faces significant costs for postretirement health and pension benefits for contractor employees. Given DOE's long history of using contractors and the rising cost of postretirement benefits, the Chairman, House Committee on Appropriations, Subcommittee on Energy and Water Development, asked GAO to (1) analyze DOE's estimated financial obligation for postretirement health and pension benefits for contractor employees at the end of fiscal year 2003, (2) determine how DOE evaluates its contractor postretirement health and pension benefit programs and assesses the comparative levels of benefits offered by contractors, and (3) assess how DOE's oversight of these benefits could be enhanced. As of September 30, 2003, DOE reported an estimated $13.4 billion in unfunded contractor postretirement health and pension benefits. This figure is an actuarial estimate of all benefits attributed to employee service before September 30, 2003, minus the fair market value of assets dedicated to the payment of retiree benefits. The unfunded balance has grown over the past 4 fiscal years as a result of the continuing accumulation of benefits, declining interest rates, and negative returns on pension assets. A significant portion of the unfunded balance relates to benefit programs at contractor sites that have already closed or will close once the work is complete. DOE Order 350.1 generally provides that contractors periodically complete self-assessment studies comparing their benefits to professionally recognized measures. DOE uses these studies to make decisions about the level of contractor benefits.
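The unfunded balance described above is a simple difference: the actuarial obligation for service to date less the fair value of dedicated assets. A minimal sketch follows, using a hypothetical split of the reported $13.4 billion total (the component figures are assumptions for illustration, not from the report).

```python
def unfunded_balance(benefit_obligation, plan_assets):
    """Unfunded balance as the report defines it: the actuarial estimate of
    benefits attributed to employee service to date, minus the fair market
    value of assets dedicated to paying those benefits. A negative result
    would indicate an overfunded program."""
    return benefit_obligation - plan_assets

# Hypothetical split reproducing the reported $13.4 billion total:
obligation = 20.0e9   # assumed actuarial obligation
assets = 6.6e9        # assumed fair value of dedicated assets
print(unfunded_balance(obligation, assets) / 1e9)   # 13.4
```

Because both the obligation (through interest-rate assumptions) and the assets (through market returns) move over time, the balance can grow even when no new benefits are granted, which is consistent with the growth the report attributes to declining interest rates and negative asset returns.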
While the most recently completed comparison studies suggest that DOE has been successful in offering total contractor benefits that are comparable to those of selected competitors, the DOE Order 350.1 studies are not performed at a significant number of contractor locations, and alternative review procedures performed by DOE personnel are inconsistent from one contractor location to another; thus DOE's ability to evaluate the full range of programs is limited. In addition, GAO found that a number of contractor studies completed under DOE Order 350.1 did not conform to prescribed and recommended methodologies, calling into question the validity and comparability of the results. Moreover, DOE's current focus on total benefits rather than individual benefit components in evaluating benefits does not fully recognize the differences in costs between deferred benefit programs, such as pension and postretirement health benefits, and other benefit components. This distinction is important because changes to pension and postretirement health benefits can have a significant impact on DOE's long-term costs and budgetary needs. For example, a 1 percent increase in a contractor employee's current year vacation benefits has less impact on DOE's long-term costs and budgetary needs than a 1 percent increase in postretirement pension or health benefits, which have a continuous and compounding effect as they are paid out in each year of retirement. While reported total contractor benefits are comparable to selected competitors, as stated above, the postretirement health benefits of DOE contractor employees at these sites averaged more than 44 percent greater than the average of the contractors' competitors, while defined benefit pension benefits averaged 29 percent greater. 
The approval and monitoring of DOE contractor employee pension and postretirement health benefits is primarily the responsibility of DOE contracting officers, who administer contracts at individual contractor locations. Management does not systematically review information developed at individual contractor locations to identify best practices or areas where benefit comparisons do not adhere to agency requirements or guidance. Developing and disseminating this information agencywide would enhance DOE's oversight of contractor employee benefits and provide the information needed to manage post-closure benefit costs.
In this report, we use the term “mission fragmentation” to refer to those circumstances in which more than one federal agency (or more than one bureau within an agency) is involved in the same broad area of national need. Historically, national need areas have been described by a classification system called budget functions. Developed as a means to classify budgetary resources on a governmentwide basis according to the need addressed, budget functions are, by intention, very broad. Presently, there are 17 national need areas, including such mission areas as international affairs and income security. Functional classifications have been used in the federal budget process for many years to serve a variety of purposes; since 1974, the Congress has used these categories as the framework for the concurrent resolution on the budget. Budget functions will also provide the framework for the governmentwide performance plan that is required by the Results Act to be included with the President’s Fiscal Year 1999 Budget submitted in February 1998. Although this type of system can indicate broad categories of fragmentation and overlap, it does not directly address the issue of program duplication. While mission fragmentation and program overlap are relatively straightforward to identify, determining whether overlapping programs are actually duplicative requires an analysis of target populations, specific program goals, and the means used to achieve them. For example, as an indication of duplication within employment training programs, we reported in 1994 on the extent to which 38 separate programs shared common goals, targeted comparable client populations, provided similar services, and used parallel service delivery mechanisms and administrative structures. Thirty of the programs shared characteristics with at least one other program. To respond to this request, we compiled an inventory of GAO reports and testimonies dealing with mission fragmentation and program overlap. 
As agreed, we did not update this issued work, but each identified product was reviewed for relevance and currency. Our goal was to capture the breadth of our published work but to include only those products which described or expanded previous discussions of mission fragmentation or program overlaps, "patchworks," or duplications. Products that were very narrow in scope (for example, dealing with a program coordination question within a single agency) were not included unless they were part of a larger body of work on the specific program area. The abstracts contained in appendix I summarize matters relevant to fragmentation and overlap and do not necessarily reflect the entirety of the product's message. Whether approached from a governmentwide perspective or on the basis of individual programs, our work has documented mission fragmentation and program overlap. Although this broad and diverse body of work (covering nearly a dozen missions and over 30 programs and involving most departments and agencies) clearly indicates the potential for inefficiency and waste, it also helps to disclose areas where intentional participation by multiple agencies may be a reasonable response to a specific need. In either case, the Results Act, and its emphasis on defining missions and expected outcomes, can provide the environment needed to begin the process of reassessment. In response to requests from the Senate Committee on Governmental Affairs and more recently from House Leadership, we attempted to quantify the question of mission fragmentation by using spending patterns to describe the relationship between federal missions and organizations. By mapping department and agency spending against the federal mission areas described by budget function classifications, we showed that most federal agencies addressed more than one mission and, conversely, most federal missions were assigned to multiple departments and agencies.
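The mapping exercise described above, agencies against budget functions, amounts to a two-way cross-tabulation of obligation records. A minimal sketch, using invented (agency, budget function) pairs rather than actual budget data:

```python
from collections import defaultdict

# Invented obligation records for illustration only:
obligations = [
    ("USDA", "Agriculture"),
    ("USDA", "Natural Resources and Environment"),
    ("USDA", "Community and Regional Development"),
    ("DOI",  "Natural Resources and Environment"),
    ("HUD",  "Community and Regional Development"),
]

functions_per_agency = defaultdict(set)
agencies_per_function = defaultdict(set)
for agency, function in obligations:
    functions_per_agency[agency].add(function)
    agencies_per_function[function].add(agency)

# Agencies obligating to three or more mission areas, and mission areas
# shared by more than one agency, fall out of the tabulation directly:
fragmented = {a for a, fns in functions_per_agency.items() if len(fns) >= 3}
shared = {f for f, ags in agencies_per_function.items() if len(ags) > 1}
print(fragmented)       # {'USDA'}
print(sorted(shared))
```

The real analysis used broad budget functions as a proxy for missions, so, as the report notes, counts like these indicate the potential scope of fragmentation rather than measure it exactly.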
In 1996, for example, most agencies made obligations to three or more budget functions, and six of the budget functions were addressed by six or more executive branch departments and major agencies. Seven major federal organizations, for instance, made obligations in 1996 to the Natural Resources and Environment mission area, and seven to Community and Regional Development. While the use of broad budget functions as a proxy for federal missions cannot yield an exact measure of the extent of fragmentation, our analyses served to illustrate the potential scope of the issue and indicate areas for further assessment. We have also done a large body of work reviewing specific federal programs. Again, in program area after program area, from early childhood programs to land management and from food safety to international trade, the picture remains the same: widespread fragmentation and overlap, often involving many federal departments and agencies. Such unfocused efforts can waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness. Table 1 summarizes the program areas included in appendix I, which contains an annotated bibliography of GAO products covering over two dozen federal program areas including, for example, the following: We have reported extensively on federal programs seeking to help people find productive employment. In 1995, we identified over 160 employment training programs scattered across 15 departments and agencies. While about 60 percent of the programs were administered by two departments, the remainder resided in departments not generally expected to provide employment training assistance. Many of the new employment training programs had emerged in these latter departments in recent years. We reported in 1995 that at least 12 federal departments and agencies were responsible for hundreds of community development programs that assist distressed urban communities and their residents.
Historically, there has been little coordination among the agencies, imposing an unnecessary burden on urban communities seeking assistance. We reported that agencies tended not to collaborate with each other for a variety of reasons, including concerns about losing control over program resources. (Other program areas summarized in table 1 include general science, space, and technology; high performance computing; national laboratories; research and development facilities; small business innovation research; border inspections; drug control; investigative authority; terrorism and drug trafficking; federal land management; international environmental programs; hazardous waste cleanup; and water quality.) Notwithstanding the performance problems suggested by this work, a common theme emerges: the evident fragmentation and overlap are the result of an adaptive and responsive federal government. As new needs were identified, the common response has been a proliferation of responsibilities and roles among federal departments and agencies, perhaps targeted on a newly identified clientele (e.g., at-risk children), or involving a new program delivery approach (e.g., credit programs in addition to grants), or, in the worst case, merely layered onto existing systems in response to programs that have failed or performed poorly. However, as noted in a recent House Government Reform and Oversight Committee report, "a certain amount of redundancy is understandable and can be beneficial if it occurs by design as part of a management strategy to foster competition, provide better service delivery to customer groups, or provide emergency backup." Several of our products provide examples of these types of federal environments. In some situations, redundancy may be seen as inherently necessary due to the nature of the federal effort.
For example, because of security requirements, the Department of Energy's (DOE) processes for planning, funding, and evaluating nuclear weapons development work came to rely on competition among multiple weapons laboratories as a means of ensuring quality. In other cases, the involvement of multiple federal agencies may reflect the breadth of activities associated with a given federal mission. For example, countries formerly part of the Soviet Union have received assistance through over 200 federal programs, some as part of a multiagency approach established by law in 1992. Assistance provided by 23 federal departments and agencies has included food aid, private sector development, emergency humanitarian assistance, disposition of weapons of mass destruction, and democratic reform. Similarly, numerous federal agencies are involved in providing disaster assistance. The Federal Response Plan prepared by the Federal Emergency Management Agency identifies 27 federal agencies as service providers following any type of disaster or emergency that requires a federal response. In some program areas, inefficiencies may be difficult to address because of other, overriding goals. For example, the decentralized structure of federal statistical agencies has been cited as inefficient and contributing to data quality problems. However, the potential advantages of consolidation must be weighed against other concerns, such as the potential for abuse and breaches of confidentiality that could occur when so much information about individuals and businesses is concentrated in one agency. Nevertheless, whether seen as the cause of unfocused and confusing program structures or as a necessary consequence of federal approaches in a specific program area, the fragmentation and overlap described by our work inevitably leads to consideration of reorganization and restructuring.
In testimony before the Senate Committee on Governmental Affairs, the Comptroller General noted that experiences in this and foreign countries suggested several basic principles associated with any reorganization assessment. The most important lesson gleaned from these experiences was that any reorganization demands a coordinated approach within and across agency lines, focused on specific, identifiable goals. With its emphasis on defining agency missions, goals and objectives, and strategies to achieve those goals and objectives, and its requirement for involvement of the Congress and other agency and external stakeholders, the Results Act provides a statute-based environment to begin such an assessment. The Results Act will present the Congress and the administration with a new opportunity to address mission fragmentation and program overlap. As we noted in our recent assessment of the status of Results Act implementation, the act's emphasis on results implies that federal programs contributing to the same or similar outcomes should be closely coordinated, consolidated, or streamlined, as appropriate, to ensure that goals are consistent and that program efforts are mutually reinforcing. To implement the act, agencies will need to undertake three key steps: define mission and desired outcomes, measure performance, and use performance information. Each of these steps offers opportunities for the Congress and the administration to intervene in ways that could address mission fragmentation.
For example, as missions and desired outcomes are determined, instances of fragmentation and overlap can be identified and appropriate responses can be defined; as performance measures are developed, the extent to which agency goals are complementary and the need for common performance measures to allow for cross-agency evaluations can be considered; and as continued budget pressures prompt decisionmakers to weigh trade-offs inherent in resource allocation and restructuring decisions, the Results Act can provide the framework to integrate and compare performance of related programs to better inform choices among competing budgetary claims. Perhaps the most important element of the Results Act, at least with respect to the challenge of mission fragmentation and program overlap, is that it creates a framework that enables and expects congressional and other stakeholder consultation in agency strategic planning. This should create the environment needed to look across the activities of individual programs within specific agencies and toward the goals and objectives that the federal government is trying to achieve. The consultation process should present an important opportunity for congressional committees and executive branch agencies to mutually address the extent and consequences of fragmented and overlapping agency missions and poorly targeted programs. In many areas, our previous work has shown that emphasizing missions is the best means to cut across organizational boundaries and identify fragmentation. By emphasizing the intended outcomes of related federal programs, our work has identified legislative changes needed to clarify the Congress’ intent and expectations or to address changing conditions that have arisen since initial statutory requirements were established. 
Examples include the following: In the area of rural development, we reported in 1994 that the patchwork of uncoordinated, narrowly focused programs was an inefficient surrogate for a single federal policy. At the time of our review, a federal interagency group had been established to address service delivery problems, but it could take only limited action because it lacked the authority to make changes in the programs. We suggested that the Congress consider establishing an interagency executive committee with a mandate to report on alternatives to the current fragmented environment, including establishing measurable program goals. Following on our work examining federal export promotion programs, the Congress tasked an interagency working group, the Trade Promotion Coordinating Committee, with establishing governmentwide priorities for promotion programs and proposing an annual unified federal budget reflecting those priorities. Our work had shown a lack of information on what federal export promotion programs were achieving, whether federal resources for export promotion were being used as effectively as possible, and obstacles to accessing the programs due to the fragmentation of needed services among several agencies. Healthy People 2000 is a national strategy for improving the health of the American people. Started in 1979, Healthy People is a series of outcome-based public health objectives developed and updated each decade by the U.S. Public Health Service in consultation with other federal agencies, state governments, and national organizations. Currently, three broad goals are supported by 300 objectives that address 22 priority areas. Over time, the Congress has required that Healthy People objectives be incorporated into other federal programs as a means to ensure that goals and objectives are coordinated to meet federal needs. 
The opportunity for congressional involvement in agency strategic planning could present challenges given the complexity of current committee jurisdictions. To address this, bipartisan teams in the House of Representatives have been established to coordinate and facilitate committee consultations with executive branch agencies. We have supported and will continue to actively support the House’s departmental staff teams as they review and consult on agencies’ draft strategic plans. For example, at the request of the Chairmen of the House Committees on Government Reform and Oversight, Appropriations, and the Budget, we recently developed a set of key questions to be used by the staff teams during their reviews. These questions dealt with identifying relationships among agencies’ strategic plans, determining similar or related efforts across agencies, and noting the extent of interagency coordination. The apparent challenge of integrating performance expectations for crosscutting programs with congressional oversight processes and executive management structures should also be aided by an additional Results Act requirement: the governmentwide performance plan. The act requires the Office of Management and Budget (OMB) to present a governmentwide performance plan, based on agencies’ annual performance plans, with the President’s Budget; the first plan is required to be issued in February 1998 with the fiscal year 1999 budget. The Congress intended that this plan present a “single cohesive picture of the annual performance goals for the fiscal year.” While the precise format is left to the discretion of the OMB Director, the plan is expected to be organized around budget functions, thus providing a mission-based, cross-agency perspective. This approach should facilitate identifying crosscutting programs while also supporting integration with the concurrent resolution on the budget—an important congressional oversight tool that also uses budget functions. 
The Results Act requires agencies to develop annual plans with suitable performance measures in order to reinforce the connection between the long-term strategic goals outlined in strategic plans and the day-to-day activities of managers and staff. To the extent that federal efforts are fragmented across agency lines, developing crosscutting performance measures through interagency coordination could ease implementation burdens while strengthening efforts to develop best practices. Complementary and, where appropriate, common performance measures could permit comparisons of related programs’ results and the tools used to achieve those results. Both the need for and the potential benefits arising from efforts to build a crosscutting perspective into outcome-oriented performance measurement development can be drawn from our previous work. However, the persistent theme from this body of work is that although results-based performance information would help federal managers improve their programs, little information is collected. Our work on employment training programs found that many federal agencies did not know if they were really helping people find jobs. For example, in a study of programs targeting the economically disadvantaged, we found that most agencies did not collect information on whether participants found jobs; or if they did, whether the jobs were related to the training provided; and if so, what wages the participants earned. Without this information, program administrators could not determine if they were preparing participants for local labor market opportunities, whether employment resulted from participation in employment training, or if participants would most likely have found the same types of jobs on their own. The challenge of performance measurement development is increased when there are multiple nonfederal entities, in addition to multiple federal agencies, involved in a program area. 
For example, our work on ecosystem management noted that data needed to test the concept were often noncomparable and insufficient and that a governmentwide approach would require unparalleled interagency and federal/nonfederal coordination. Even where efforts are made to develop common performance information across overlapping programs, the information developed can still differ from program to program, hampering crosscutting comparisons. For example, our 1996 review of three agencies whose programs provide economic development assistance found that each cited a “performance ratio”—computed as a comparison of total dollars invested in a project to the dollars invested by the federal agency—as one measure of how they were meeting their goals. However, each agency defined total dollars invested differently and calculated the ratio for only a portion of its programs. While determining the outcomes of economic development programs certainly presents significant challenges, the use of different methods to calculate apparently similar performance indicators would in any case preclude comparison of the programs. Our work also suggests that sustained congressional involvement, in some cases spanning many years, will be required even where a legislated coordinating mechanism exists. For example, the Office of National Drug Control Policy (ONDCP) was established in 1988 after several previous legislative efforts were unsuccessful in causing development of a comprehensive national drug strategy. ONDCP is responsible for developing and coordinating implementation of a drug control strategy among the now more than 50 federal agencies involved in this program area. Recently, ONDCP began developing national performance measures to be collected in addition to individual agency performance data to help determine whether or how well counternarcotics efforts were contributing to the goals of the national strategy. 
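The "performance ratio" discussed above can be illustrated with a small sketch. The agency definitions and dollar figures below are hypothetical, invented solely to show why differently defined "total dollars" make apparently similar ratios non-comparable; they are not drawn from the programs GAO reviewed.

```python
# Illustrative sketch (not GAO data): the "performance ratio" compares
# total dollars invested in a project to the dollars invested by the agency.

def performance_ratio(total_invested: float, agency_invested: float) -> float:
    """Total project dollars per agency dollar invested."""
    return total_invested / agency_invested

# One hypothetical project: $2M from the reporting agency,
# $3M from other public sources, $4M in private co-investment.
agency, other_public, private = 2_000_000, 3_000_000, 4_000_000

# "Agency A" counts all co-investment as total dollars invested.
ratio_a = performance_ratio(agency + other_public + private, agency)  # 4.5

# "Agency B" counts only public dollars as total dollars invested.
ratio_b = performance_ratio(agency + other_public, agency)  # 2.5

print(f"Same project, two definitions: {ratio_a}:1 versus {ratio_b}:1")
```

Because each definition of "total dollars" is internally defensible, both ratios look like valid performance measures in isolation; only a common definition would permit the cross-program comparison the report calls for.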
Consistent with the intent of the Results Act, we recommended that ONDCP complete a long-term plan with meaningful performance measures and multiyear funding needs linked to the goals and objectives of the strategy. In February 1997, ONDCP proposed a 10-year strategy and is making progress toward developing performance targets and measures for each of its goals. Lastly, the Congress has a vital role regarding performance measurement development in addition to its consultative role with respect to agency strategic plans. This role can be particularly important in areas of uncoordinated and fragmented missions. For example, assessing the outcomes of science-related programs can be extremely difficult because a wide range of factors determine if and how a particular research and development project will result in commercial or other benefits, and the challenge of this type of assessment is heightened by the involvement of multiple federal agencies. Recently, the Research Roundtable, a consortium of federal research and development agencies, has been considering the extent to which its member agencies can and should adopt a common approach to measuring performance. The Roundtable is one of about 25 interagency groups, many of which were recently formed on an ad hoc basis to discuss common concerns in crosscutting issues, including goal setting and performance measurement. The Congress could work with these types of interagency coordinating groups to ensure that congressional data needs are met within any common performance measurement model. Moreover, this consultation will also reinforce earlier strategic planning consultations intended to clarify and harmonize missions. For the Results Act to achieve its objective of improved federal performance and accountability, the performance information made available must be used. Of course, different users will have different needs. 
Agency managers should use performance information to ensure that programs meet intended goals, to assess the efficiency of processes, and to promote continuous improvement. The Congress needs information on whether and in what respect a program is working well to support its oversight of agencies and their appropriations. In the specific area of fragmented and overlapping activities, performance information can help identify performance variations and redundancies and can lay the foundation for improved coordination, program consolidations, or the elimination of unneeded programs. However, developing useful performance information in an environment of fragmented missions presents unique demands, partly because of the number of federal decisionmakers involved. For example, federal employment training programs are not only spread across multiple departments and agencies but are also subject to multiple congressional authorization, oversight, and appropriations jurisdictions. In fact, for the major departments and agencies providing employment training programs, seven different appropriations subcommittees currently review and determine funding levels. Ideally, the consultation requirements associated with strategic plan development can help address these concerns. In particular, the House departmental teams, composed of representatives from relevant House authorizing committees as well as the appropriations, budget, and oversight committees, were specifically established to help coordinate committee consultations and simplify the provision of congressional views on agency strategic plans. These actions should help promote clarity and consistency of congressional information needs, thus setting the stage for subsequent congressional interest in collected performance information. 
But the performance measurement challenge of fragmented missions—that is, concentrating attention on redundancies or performance differences across agencies, in addition to performance gaps within a single agency—will present unique difficulties for both the executive branch and the Congress. Past efforts to deal with crosscutting federal activities suggest that even within the statutory framework of the Results Act, success will take time and will require sustained attention in both the executive branch and the Congress. At this very early stage of Results Act implementation, it is clear that much work remains to be done. In June 1997 testimony before a joint hearing of the Senate Appropriations and Governmental Affairs Committees, the Director of OMB acknowledged, “(A)gencies understandably have first focused on their own programs, and are only beginning to look at enhancing interagency coordination for programs or activities that are crosscutting in nature.” Our reviews of draft agency strategic plans, requested by House Leadership to assist the congressional consultation process, confirmed that agencies are only beginning to consider the challenges of fragmentation and overlap. Nearly all of the draft plans lacked evidence of interagency coordination, and some of the plans—including those from some agencies that operate complex programs where interagency coordination is clearly required—lacked any discussion of the need to coordinate with other agencies on crosscutting issues. For example, the ability of the Department of Health and Human Services to achieve its goal of self-sufficiency and parental responsibility for welfare recipients is likely to depend on employment, training, and education programs administered by the Departments of Labor and Education; yet, the draft plan makes no mention of the roles of these other agencies. 
Even if an agency’s draft plan recognized the need to coordinate with others, there was generally little information about what strategies would be pursued to address mission fragmentation and program overlap. For example, although the draft plans for the Departments of Justice and Veterans Affairs contained a general goal to improve coordination among agencies involved in related functions, no specific strategies to achieve this goal were discussed. These developments serve to emphasize a fundamental issue: the need for specific institutions and processes to sustain and nurture a focus on mission fragmentation and program overlap. The very nature of this issue presents special challenges for both the executive branch and the Congress. In the executive branch, the sheer number of departments and agencies, many of which are “holding” organizations for widely diverse subordinate bureaus, administrations, and services, will present a significant impediment. The Results Act establishes mechanisms to deal with this environment: strategic plans, emphasizing long-term goals and objectives in consultation with the Congress and external stakeholders, and the governmentwide performance plan, presenting “a single cohesive picture of the federal government’s annual performance goals.” However, notwithstanding consultation requirements and the iterative nature of strategic planning, such plans will likely focus internally, especially if there are no persistent, external, cross-agency integrating efforts. The recent growth of ad hoc interagency coordinating groups is an encouraging development, but sustained impetus from OMB will likely be needed to ensure that agency plans address fragmentation concerns. 
The governmentwide performance plan, prepared by OMB based on agency performance plans, offers perhaps the best opportunity for continued attention to coordination and integration issues within the executive branch, but it remains an untested approach whose relationship to the Congress, as discussed below, is unclear. The departmental staff teams established in the House of Representatives have provided a valuable means to coordinate congressional consultations, but mission fragmentation and program overlap will continue to present challenges to the traditional committee structures and processes. Moreover, the governmentwide performance plan raises a series of questions for the Congress, including the following: How can the Congress most appropriately respond to the performance goals specified in the plan? How can the Congress express its perspectives and priorities on governmentwide performance goals, especially with respect to areas of fragmentation and overlap? How can the Congress best stimulate development of common performance measures within fragmented mission areas and programs, especially for those that cut across jurisdictions of specific committees? These questions suggest continuing challenges for the Congress as it seeks to address crosscutting performance issues in the context of its current institutions and processes. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the House Minority Leader and the Ranking Minority Members of your Committees; the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations, Committee on the Budget, and Committee on Governmental Affairs; and other interested Members of the Congress. We will also send copies to the Director, Office of Management and Budget, and will make copies available to others upon request. 
The major contributors to this letter were Michael J. Curro, Assistant Director, and Linda F. Baker, Senior Evaluator. Please contact me at (202) 512-9573 if you have any questions. This appendix lists principal GAO products regarding mission fragmentation and program overlap. Included are (1) products that provide general commentary on the subject and related issues; (2) products that pertain to mission fragmentation at a single department or agency; and (3) products that examine fragmentation within a particular mission or program area. Managing for Results: The Statutory Framework for Improving Federal Management and Effectiveness (GAO/T-GGD/AIMD-97-144, June 24, 1997) Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997) The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997) Budget Issues: Fiscal Year 1996 Spending by Budget Function (GAO/AIMD-97-95, May 13, 1997) Performance Budgeting: Past Initiatives Offer Insights for GPRA Implementation (GAO/AIMD-97-46, Mar. 27, 1997) Measuring Performance: Strengths and Limitations of Research Indicators (GAO/RCED-97-91, Mar. 21, 1997) Managing for Results: Enhancing the Usefulness of GPRA Consultations Between the Executive Branch and Congress (GAO/T-GGD-97-56, Mar. 10, 1997) Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997) Managing for Results: Key Steps and Challenges in Implementing GPRA in Science Agencies (GAO/T-GGD/RCED-96-214, July 10, 1996) Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996) Managing for Results: Achieving GPRA’s Objectives Requires Strong Congressional Role (GAO/T-GGD-96-79, Mar. 6, 1996) Budget Account Structure: A Descriptive Overview (GAO/AIMD-95-179, Sept. 
18, 1995) Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995) Government Restructuring: Identifying Potential Duplication in Federal Missions and Approaches (GAO/T-AIMD-95-161, June 7, 1995) Program Consolidation: Budgetary Implications and Other Issues (GAO/T-AIMD-95-145, May 23, 1995) Government Reorganization: Issues and Principles (GAO/T-GGD/AIMD-95-166, May 17, 1995) The Department of Housing and Urban Development: Information on Its Role, Programs, and Issues (GAO/RCED-97-173R, July 21, 1997) GAO presented information on the Department of Housing and Urban Development’s (HUD) role, organization, and resources and a description of its major programs, their condition, and related issues. For example, in addition to HUD, five other federal departments, two independent agencies, and three government-sponsored enterprises—as well as private investors, public housing authorities, and nonprofit groups—contribute to meeting our nation’s housing needs. Housing and Urban Development: Potential Implications of Legislation Proposing to Dismantle HUD (GAO/RCED-97-36, Feb. 21, 1997) GAO discussed the breadth of HUD’s responsibilities in housing assistance, community development, housing finance, and related regulatory functions in the context of a legislative proposal to dismantle the Department. The report summarized the potential impact of the proposal on HUD’s customers and the capacity of states and other federal agencies to assume functions proposed in the bill. GAO also discussed the evolution of HUD’s missions, noting that when created in 1965, it captured most federal housing and community development functions whose focus was primarily urban; HUD was not given responsibility for certain client-specific programs (e.g., veterans housing), for programs affecting rural areas, or for oversight of tax policies that affect housing or of financial institutions that participate in the nation’s mortgage markets. 
Government Reorganization: Observations on the Department of Commerce (GAO/T-GGD/RCED/AIMD-95-248, July 25, 1995) The missions and functions of the Department of Commerce have been among the most diverse of the cabinet departments, with its components responsible for such functions as expanding U.S. exports, developing innovative technologies, gathering and disseminating statistical data, measuring and fostering economic growth, granting patents and trademarks, promoting minority entrepreneurship, predicting the weather, and serving as an environmental steward. GAO noted that developing a strategic plan will be particularly challenging for Commerce because the Department does not have exclusive federal responsibility for any of these themes. Environmental Protection: Current Environmental Challenges Require New Approaches (GAO/T-RCED-95-190, May 17, 1995) The Environmental Protection Agency (EPA) has not been able to target its resources as efficiently as possible to the nation’s highest environmental priorities because it does not have an overarching legislative mission and its environmental responsibilities have not been integrated. Over the years, the Congress has responded to a series of environmental threats with individual laws that tended to assign pollution control responsibilities according to environmental medium (such as air or water) and often prescribed implementing requirements and mandated time frames for their completion. Department of Energy: Need to Reevaluate Its Role and Missions (GAO/T-RCED-95-85, Jan. 18, 1995) Created to deal with the energy crisis of the 1970s, Department of Energy’s (DOE) mission and priorities have changed over time, with new missions in weapons production and now environmental cleanup emerging. This testimony suggests a set of questions that could be used to clarify DOE’s mission, a necessary step to addressing its long-standing management problems. 
Department of Education: Opportunities to Realize Savings (GAO/T-HEHS-95-56, Jan. 18, 1995) GAO discussed (1) a need to reexamine programs previously suggested by Education for elimination because they duplicated other programs, had already achieved their purposes, or were more appropriately funded through nonfederal sources and (2) programs related to employment training that overlapped with each other and other programs outside Education. Food Safety: Changes Needed to Minimize Unsafe Chemicals in Food (GAO/RCED-94-192, Sept. 26, 1994) GAO identified fundamental weaknesses in the federal programs that monitor chemicals in food. Because the problems associated with the current fragmented federal system cannot be solved by individual agencies’ efforts, GAO recommended various actions that the Congress should take, including creating a single agency to carry out a cohesive set of food safety laws. Food Safety: A Unified, Risk-Based System Needed to Enhance Food Safety (GAO/T-RCED-94-71, Nov. 4, 1993) Efforts made in response to many recommendations to improve food safety had fallen short because the agencies continued to operate under different regulatory approaches contained in their basic laws. GAO suggested that a single food safety agency may be needed to effectively resolve long-standing problems, deal with emerging food safety issues, and ensure a safe food supply. Bank Oversight Structure: U.S. and Foreign Experience May Offer Lessons for Modernizing U.S. Structure (GAO/GGD-97-23, Nov. 20, 1996) In response to proposals to consolidate U.S. bank regulatory agencies, GAO examined how other countries structure and carry out their bank regulation and central bank activities. In contrast to foreign systems, the U.S. bank oversight system was relatively complex, with four different federal agencies having the same basic oversight responsibilities for those banks under their respective purview. 
Prior work showed that these agencies often differed on how laws should be interpreted, implemented, and enforced; how banks should be examined; and how to respond to troubled institutions. GAO also noted that differentiating oversight responsibilities by type of financial institution can result in overlap and a lack of accountability. Financial Market Regulation: Benefits and Risks of Merging SEC and CFTC (GAO/T-GGD-95-153, May 3, 1995) GAO commented on legislation which sought to improve the effectiveness and the efficiency of financial services regulation by merging the Securities and Exchange Commission and the Commodity Futures Trading Commission, the two agencies that regulate U.S. domestic equity and futures markets. Although a logical step to consider as part of continuing modernization efforts, the Congress must ultimately decide whether the potential benefits of a merger outweigh the risks. Bank Regulation: Consolidation of the Regulatory Agencies (GAO/T-GGD-94-106, Mar. 4, 1994) GAO supported in principle consolidating regulatory activities of the various agencies involved, endorsing a partial consolidation pending clarification of the role of the Federal Reserve. Bank And Thrift Regulation: Concerns About Credit Availability and Regulatory Burden (GAO/T-GGD-93-10, Mar. 17, 1993) The current regulatory system of four separate agencies evolved over decades of legislative efforts to address specific problems, resulting in a fragmented system that may no longer be capable of handling the complexities of today’s banking and thrift industries. However, further analyses of the root causes of regulatory burden would be needed so that the burden could be eased without adversely affecting safety and soundness and consumer protection goals. 
HUD: Inventory of Self-Sufficiency and Economic Opportunity Programs (GAO/RCED-97-191R, July 28, 1997) GAO inventoried and discussed the programmatic and funding linkages among 23 HUD self-sufficiency and economic opportunity programs that target tenants of public and assisted housing or low- and moderate-income residents in certain geographic areas. Community Development: Challenges Face Comprehensive Approaches to Address Needs of Distressed Neighborhoods (GAO/T-RCED-95-262, Aug. 3, 1995) The fragmentation of federal programs among at least 12 federal departments and agencies imposes a burden on distressed urban communities seeking assistance. Historically, there has been little coordination among the agencies, which have been protective of their own resources and separate organizational missions. Community Development: Comprehensive Approaches and Local Flexibility Issues (GAO/T-RCED-96-53, Dec. 5, 1995) GAO summarized its work on comprehensive approaches, noting that the many federal programs involved, considered individually, make sense but together often work against their intended purposes. Community Development: Comprehensive Approaches Address Multiple Needs but Are Challenging to Implement (GAO/RCED/HEHS-95-69, Feb. 8, 1995) Comprehensive approaches to helping distressed neighborhoods face many challenges. One such challenge is that community organizations have to piece together a complex web of funding from private and public sources, with coordination among the many federal agencies involved having been limited. Economic Development: Limited Information Exists on the Impact of Assistance Provided by Three Agencies (GAO/RCED-96-103, Apr. 
3, 1996) The limited information available on the impact of economic development assistance provided by three programs—the Appalachian Regional Commission, the Department of Commerce’s Economic Development Administration, and the Tennessee Valley Authority—did not establish a strong causal linkage between a positive effect and agency assistance. As one measure of how an agency’s programs met their goals, each of the three agencies cited a 3-to-1 “performance ratio,” computed as a comparison of total dollars invested in a project with dollars invested by the agency. However, each agency defined “total dollars” differently and calculated the ratio for only a portion of its programs. Economic Development Programs (GAO/RCED-95-251R, July 28, 1995) This report lists and provides budgetary information on 342 economic development programs described in the 1994 Catalog of Federal Domestic Assistance. Chemical Weapons Stockpile: Changes Needed in the Management of the Emergency Preparedness Program (GAO/NSIAD-97-91, June 11, 1997) GAO found that efforts to improve management of the chemical stockpile emergency preparedness program have been frustrated by continued disagreement between the Army and the Federal Emergency Management Agency (FEMA) over their roles and responsibilities. Because these disagreements risk the future effectiveness of the program, GAO recommended that the agencies work together to resolve differences or, alternatively, implement congressional direction to eliminate FEMA’s role in the program. Chemical Weapons Stockpile: Emergency Preparedness in Alabama Is Hampered by Management Weaknesses (GAO/NSIAD-96-150, July 23, 1996) The Army’s chemical stockpile emergency preparedness program in Alabama has been hampered by management weaknesses at the federal level and inadequate action by state and local agencies. 
Management weaknesses at the federal level include fragmented and unclear roles and responsibilities and a lack of teamwork in the budget process. GAO found these weaknesses contribute to time-consuming negotiations and delays in implementing projects critical to emergency preparedness. Disaster Management: Improving the Nation’s Response to Catastrophic Disasters (GAO/RCED-93-186, July 23, 1993) Following on two hurricanes in 1992, GAO summarized its analyses, conclusions, and recommendations concerning federal disaster management. GAO concluded that the federal strategy—encompassing 26 different agencies—does not promote adequate preparedness when there is advance warning of a disaster. Government-Sponsored Enterprises: Advantages and Disadvantages of Creating a Single Housing GSE Regulator (GAO/GGD-97-139, July 9, 1997) GAO reported that our work continued to indicate that the three housing GSE regulators—HUD, the Office of Federal Housing Enterprise Oversight, and the Federal Housing Finance Board—would be more effective if combined and authorized to oversee both safety and soundness and mission compliance. Although the GSEs operate differently, the risks they manage and their missions are similar. GAO noted that a combined independent regulatory agency should be better positioned to achieve the autonomy and prominence necessary to oversee the large and influential housing GSEs, which include the Federal National Mortgage Association, the Federal Home Loan Mortgage Corporation, and the Federal Home Loan Bank System. Homeownership: The Federal Housing Administration’s Role in Helping People Obtain Home Mortgages (GAO/RCED-96-123, Aug. 13, 1996) GAO identified several federal agencies and other entities which shared the basic mission of assisting households who may be underserved by the private market; however, none reached as many households as the Federal Housing Administration. 
Each of the programs differed in several key dimensions, including loan limits, allowable debt-to-income ratios, and the involvement of direct federal funding. Rural Development: Steps Towards Realizing the Potential of Telecommunications Technologies (GAO/RCED-96-155, June 14, 1996) As of December 1995, at least 28 federal programs administered by 15 federal agencies provided funds that were either specifically designated for telecommunications projects in rural areas or could be used for that purpose. Rural development experts and public officials suggested various needed changes to federal telecommunications programs, including making the multiple programs easier to identify and use. Rural Development: Patchwork of Federal Water and Sewer Programs Is Difficult to Use (GAO/RCED-95-160BR, Apr. 13, 1995) Seventeen programs administered by eight federal agencies are designed specifically for, or may be used by, rural areas to construct or improve water and wastewater facilities. The programs had common objectives but different eligibility criteria. The complexity and number of programs hampered the ability of rural areas to utilize them. Rural Development: Patchwork of Federal Programs Needs to Be Reappraised (GAO/RCED-94-165, July 28, 1994) The web of federal policies, programs, and regulations accompanying federal funding for rural development makes service delivery inefficient, according to local and regional officials. Moreover, the federal interagency group established to address some service delivery problems can take only limited action due to its restricted authority. Rural Development: Federal Programs That Focus on Rural America and Its Economic Development (GAO/RCED-89-56BR, Jan. 19, 1989) Using data from the Bureau of the Census and the Catalog of Federal Domestic Assistance, GAO identified 88 federal rural development programs. Early Childhood Programs: Multiple Programs and Overlapping Target Groups (GAO/HEHS-95-4FS, Oct. 
31, 1994) In fiscal years 1992 and 1993, there were over 90 early childhood programs in 11 federal agencies and 20 offices. This “system” of multiple programs with firm eligibility cutoffs could lead to disruptions in services from even slight changes in a child’s family status. While multiple programs targeted disadvantaged preschool-aged children, GAO noted that most such children did not participate in any preschool program. Department of Labor: Challenges in Ensuring Workforce Development and Worker Protection (GAO/T-HEHS-97-85, Mar. 6, 1997) The Department of Labor has taken some action to address fragmentation issues described by GAO, but these actions have not been enough to solve the problems. Passage of recent welfare reform legislation puts even greater demands on an employment training system that appears unprepared to respond. People With Disabilities: Federal Programs Could Work Together More Efficiently to Promote Employment (GAO/HEHS-96-126, Sept. 3, 1996) Federal assistance to people with disabilities is diffuse, involving 130 programs in 19 agencies. Often services are not coordinated between programs, and people with disabilities may receive duplicate services or face service gaps. Multiple Teacher Training Programs: Information on Budgets, Services, and Target Groups (GAO/HEHS-95-71FS, Feb. 22, 1995) In fiscal year 1993, at least 86 teacher training programs in 9 federal agencies funded similar types of services. Multiple Employment Training Programs: Information Crosswalk on 163 Employment Training Programs (GAO/HEHS-95-85FS, Feb. 14, 1995) GAO provided a crosswalk between employment training programs and their fiscal year 1995 appropriation, program purposes, authorizing legislation, budget accounts, target groups, and type of assistance provided. Multiple Employment Training Programs: Major Overhaul Needed to Create a More Efficient, Customer-Driven System (GAO/T-HEHS-95-70, Feb. 
6, 1995) At least 163 programs—administered by 15 federal departments and agencies, with about $20 billion in fiscal year 1995 funding—provide employment training assistance to a wide variety of client groups. The current fragmented system suffers from problems arising from a multitude of narrowly focused programs that often compete for clients and funds. Separate administrative structures raise questions about the programs’ efficiency; the system confuses those seeking assistance and frustrates employers and administrators. Multiple Employment Training Programs: Basic Program Data Often Missing (GAO/T-HEHS-94-239, Sept. 28, 1994) Federal agencies tended to focus their assessment efforts on inputs—dollars spent and participants served. Only about one-half of the programs surveyed collected data on what happened to participants after they received program services, and only about one-quarter collected data on wages earned. Multiple Employment Training Programs: Overlap Among Programs Raises Questions About Efficiency (GAO/HEHS-94-193, July 11, 1994) Of the 38 programs in GAO’s analysis, 30 were determined to be overlapping. That is, they shared common goals, had comparable clients, provided similar services, and used parallel delivery mechanisms and administrative structures with at least one other program. Multiple Employment Training Programs: Conflicting Requirements Underscore Need for Change (GAO/T-HEHS-94-120, Mar. 10, 1994) Despite decades of efforts to better coordinate employment training programs, conflicting eligibility requirements and differences in annual operating cycles hamper the provision of needed services. Multiple Employment Training Programs: Most Federal Agencies Do Not Know If Their Programs Are Working Effectively (GAO/HEHS-94-88, Mar. 
2, 1994) Federal agencies closely monitor their expenditure of billions of dollars for employment training assistance for the economically disadvantaged, but most agencies do not collect information on participant outcomes or conduct studies of program effectiveness—both of which are needed to know how well programs are helping participants enter or reenter the workforce. Multiple Employment Training Programs: Overlapping Programs Can Add Unnecessary Administrative Costs (GAO/HEHS-94-80, Jan. 28, 1994) GAO’s review of nine programs targeting the economically disadvantaged showed those programs had similar goals, often served the same categories of people, and provided many of the same services using separate but parallel delivery structures. Multiple Employment Training Programs: Conflicting Requirements Hamper Delivery of Services (GAO/HEHS-94-78, Jan. 28, 1994) Despite decades of efforts to better coordinate employment training programs, conflicting eligibility requirements and differences in annual operating cycles are hampering the provision of needed services. For example, nine programs targeting the economically disadvantaged use several different standards for measuring income level, defining family or household, and determining what is included in income; 16 programs that target youth have four different operating cycles. Multiple Employment Programs: National Employment Training Strategy Needed (GAO/T-HRD-93-27, June 18, 1993) Federal, state, and local officials have struggled with the problems created by a fragmented system of employment training programs, with several states launching coordination efforts at the local level. Despite the elimination of some programs, the total number has continued to grow. Multiple Employment Programs (GAO/HRD-92-39R, July 24, 1992) In fiscal year 1991, 14 federal departments or independent agencies administered 125 federal employment training programs. 
Most of the programs and the majority of the funding were for programs administered by either the Department of Education or the Department of Labor. Department of Education: Information on Consolidation Opportunities and Student Aid (GAO/T-HEHS-95-130, Apr. 6, 1995) GAO described efforts by the Department of Education to consolidate its programs and noted instances of potential overlap with programs administered by other federal agencies. High Performance Computing and Communications: New Program Direction Would Benefit From a More Focused Effort (GAO/AIMD-95-6, Nov. 4, 1994) Much valuable research has been accomplished within the context of the High Performance Computing and Communications program, a coordinated effort among nine federal agencies to accelerate the availability and utilization of the next generation of high performance computers and networks. GAO stated that a more focused management approach could better ensure that new program goals regarding the national information infrastructure are met. DOE’s National Laboratories: Adopting New Missions and Managing Effectively Pose Significant Challenges (GAO/T-RCED-94-113, Feb. 3, 1994) GAO called for DOE to take a more strategic focus to managing and evaluating its laboratories, noting that with the collapse of the Soviet Union, the missions of DOE’s laboratories needed clarification. These labs—originally created to develop nuclear weapons—faced the prospect of limited future funding at the same time they were under pressure to address current national priorities, such as improving economic competitiveness and cleaning up the environment. Federal R&D Laboratories (GAO/RCED/NSIAD-96-78R, Feb. 29, 1996) For fiscal year 1995, 17 federal departments and agencies identified 515 federal research and development laboratories, including those operated by contractors. 
While the Department of Agriculture reported the largest number of laboratories (185), laboratories in the Department of Defense (DOD), DOE, the Department of Health and Human Services (HHS), and the National Aeronautics and Space Administration (NASA) accounted for 88 percent of the funding. Federal Research: Interim Assessment of the Small Business Innovation Research and Technology Transfer Programs (GAO/T-RCED-96-93, Mar. 6, 1996) GAO found that 11 federal agencies participate in the Small Business Innovation Research (SBIR) program, which requires agencies with substantial amounts of R&D spending to award a certain number of grants, contracts, or cooperative agreements to small businesses to encourage experimental, developmental, or research work. Each agency manages its own program, but the Small Business Administration issues policy directives and annual reports for the program. GAO identified instances in which companies received funding for the same proposals multiple times before agencies became aware of the duplication. Federal Research: Preliminary Information on the Small Business Technology Transfer Program (GAO/RCED-96-19, Jan. 24, 1996) GAO identified five agencies that participate in the Small Business Technology Transfer (STTR) program, which requires agencies with substantial amounts of R&D spending to award a certain number of grants, contracts, or cooperative agreements to small businesses that agree to collaborate with a nonprofit research institution to encourage experimental, developmental, or research work. The five STTR agencies also participate in the similar SBIR program. GAO concluded that similarities between the two programs raise questions about the need for the STTR program. Statistical Agencies: Consolidation and Quality Issues (GAO/T-GGD-97-78, Apr. 
9, 1997) Of the 70 federal agencies engaged in statistical activities, 11 are considered the principal statistical agencies, with 2 Commerce agencies—the Bureau of the Census and the Bureau of Economic Analysis—together with the Department of Labor’s Bureau of Labor Statistics, accounting for about $825 million of a total $1.2 billion in fiscal year 1997. This decentralized system contributes to inefficiency, a lack of national priorities for allocation of resources, a burden on data users and providers, and restrictions on the exchange of data among statistical agencies. Centralization appeared to address these types of problems, but potential disadvantages could include diminished responsiveness to the needs of former parent departments and objections to the concentration of data in a single agency. Statistical Agencies: A Comparison of U.S. and Canadian Statistical Systems (GAO/GGD-96-142, Aug. 1, 1996) U.S. and Canadian statistical systems are characterized by different organizational approaches and legal frameworks. The U.S. system is highly decentralized; 11 agencies collect, analyze, and produce statistics as their primary mission. A number of laws, policies, or regulations, some of which apply only to a specific agency, govern the collection, use, and confidentiality of statistical information. Each agency has its own separate budget; in some cases, to protect the confidentiality of data providers, laws allow only the agency collecting specific data to have access to them. In Canada, a single agency, operating under a single law, produces and disseminates virtually all broadly used official government statistics. Federal Statistics: Principal Statistical Agencies’ Missions and Funding (GAO/GGD-96-107, July 1, 1996) OMB considers any agency spending at least $500,000 in a fiscal year for statistical activities to be part of the federal statistical system. In fiscal year 1995, 72 agencies met this threshold. 
Eleven of these agencies collect, analyze, and produce statistics as their primary mission, and these 11 agencies received over $1 billion in current appropriations in both fiscal years 1994 and 1995. Long-Term Care: Demography, Dollars, and Dissatisfaction Drive Reform (GAO/T-HEHS-94-140, Apr. 12, 1994) GAO noted that at the core of the considerable dissatisfaction with the long-term care system is a belief that services from a fragmented delivery system are difficult to access. Services for the Elderly: Longstanding Transportation Problems Need More Federal Attention (GAO/HRD-91-117, Aug. 29, 1991) GAO reported that fragmentation of special transportation serving the elderly was a major, long-standing barrier limiting the effectiveness of federal resources. Experts contacted attributed fragmentation to multiple funding sources, differences between social service and transportation providers, and the costs of coordination. Administration on Aging: More Federal Action Needed to Promote Service Coordination for the Elderly (GAO/HRD-91-45, Apr. 23, 1991) Officials and others contacted agreed that shared responsibility between multiple state and local agencies frequently resulted in fragmented service delivery. The Administration on Aging, this report stated, did not keep pace in the 1980s with growing coordination needs. Improving the efficiency and quality of services through stronger coordination will continue to be important in the 1990s as an aging population increases the demand for home and community-based services. Substance Abuse and Violence Prevention: Multiple Youth Programs Raise Questions of Efficiency and Effectiveness (GAO/T-HEHS-97-166, June 24, 1997) GAO identified 70 programs in 13 federal departments and agencies in 1995—in addition to state, county, and local government and private programs—which could be used to provide substance abuse and/or violence prevention services for youths. 
Previous GAO work raised questions about the efficiency and effectiveness of this overlapping system, which also creates difficulties for those seeking to access the most appropriate services and funding sources. Insufficient information exists on the accomplishments of the federal programs. Substance Abuse and Mental Health: Reauthorization Issues Facing the Substance Abuse and Mental Health Services Administration (GAO/T-HEHS-97-135, May 22, 1997) GAO noted that given the number of federal agencies with related responsibilities in the area of substance abuse and mental health services, SAMHSA has a particular challenge as well as an opportunity to coordinate activities and promote the development of effective linkages. Drug and Alcohol Abuse: Billions Spent Annually for Treatment and Prevention Activities (GAO/HEHS-97-12, Oct. 8, 1996) Federal funding for substance abuse treatment and prevention increased by $1.6 billion from fiscal years 1990 through 1994. Federal agencies involved also increased from 12 to 16. Three departments accounted for most of the federal funds available for substance abuse treatment and prevention—HHS, Education, and Veterans Affairs. Drug Use Among Youth: No Simple Answers to Guide Prevention (GAO/HRD-94-24, Dec. 29, 1993) GAO identified 19 federal prevention programs listed in the Catalog of Federal Domestic Assistance devoted exclusively to substance abuse prevention and analyzed these programs in terms of risk factors addressed. Nuclear Health and Safety: Consensus on Acceptable Radiation Risk to the Public Is Lacking (GAO/RCED-94-190, Sept. 19, 1994) Federal agencies have set different limits on human exposure to radiation, in part because the agencies have not agreed on calculation methods and have different radiation protection strategies. These differences raise questions about the precision, credibility, and overall effectiveness of federal radiation standards and guidelines in protecting public health. 
GAO also noted that historically, interagency coordination efforts, often prompted by congressional interest and concerns, have been ineffective. Telemedicine: Federal Strategy Is Needed to Guide Investments (GAO/NSIAD/HEHS-97-67, Feb. 14, 1997) From fiscal years 1994 to 1996, nine federal departments and independent agencies invested at least $646 million in telemedicine projects, with DOD the largest federal investor. Opportunities to share lessons learned have been lost due to the lack of a governmentwide strategy to ensure that maximum benefits are gained from the numerous federal telemedicine efforts. Efforts of the Joint Working Group on Telemedicine to develop a federal inventory—a critical starting point for coordination—have been hampered by definitional issues and inconsistent data. Child Care: Narrow Subsidy Programs Create Problems for Mothers Trying to Work (GAO/T-HEHS-95-69, Jan. 31, 1995) Although child care subsidies can have a dramatic effect on drawing low-income mothers into the workforce, the fragmented nature of child care funding—with entitlements to some client categories, time limits on others, and activity limits on still others—produces unintended gaps in services, which limit the ability of low-income families to become self-sufficient. Welfare Programs: Opportunities to Consolidate and Increase Program Efficiencies (GAO/HEHS-95-139, May 31, 1995) GAO discussed low-income families’ participation in multiple welfare programs; examined program inefficiencies, such as program overlap and fragmentation; and identified issues to consider in deciding whether and to what extent to consolidate welfare programs. Program areas discussed include employment training, food assistance, and early childhood programs. The report observes that little is known about the effectiveness of many welfare programs. At-Risk and Delinquent Youth: Multiple Federal Programs Raise Efficiency Questions (GAO/HEHS-96-34, Mar. 
6, 1996) GAO identified 131 federal programs serving at-risk or delinquent youth with total estimated appropriations for fiscal year 1995 of more than $4 billion. Many programs provided multiple services and had multiple target groups, raising questions about the overall efficiency of federal efforts. Multiple Youth Programs (GAO/HEHS-95-60R, Jan. 19, 1995) For fiscal year 1995, eight federal agencies administered at least 46 programs earmarked for youth development. This report lists each program, together with one-page overviews of program authority, objectives, and target groups. Promoting Democracy: Foreign Affairs and Defense Agencies Funds and Activities—1991 to 1993 (GAO/NSIAD-94-83, Jan. 4, 1994) GAO developed an inventory of U.S. government-funded programs aimed at democratic development. Because there is no governmentwide democracy program and no common definition of what constitutes such a program, the inventory was based on what agencies considered to be their support of democratic processes. Exchange Programs: Inventory of International Educational, Cultural, and Training Programs (GAO/NSIAD-93-157BR, June 23, 1993) GAO inventoried 16 federal agencies with about 75 programs funding international educational, cultural, and training exchange programs. Foreign Affairs: Perspectives on Foreign Affairs Programs and Structures (GAO/NSIAD-97-6, Nov. 8, 1996) This report summarizes the views of participants at a GAO-sponsored 1996 conference on foreign affairs issues. Among other issues, participants discussed a need for policymakers to understand how various U.S. agencies are operating overseas and whether coordination mechanisms need to be strengthened. State Department: Options for Addressing Possible Budget Reductions (GAO/NSIAD-96-124, Aug. 
29, 1996) Among options to address budget reductions, GAO discussed lessening the degree of overlap within the structure of State’s bureaus and other agencies, noting that some decisions could necessitate an interagency forum or might require legislative approval. Former Soviet Union: Information on U.S. Bilateral Program Funding (GAO/NSIAD-96-37, Dec. 15, 1995) GAO summarized financial information on U.S. bilateral programs seeking to help the newly independent states of the former Soviet Union transition to democratic societies with market economies. From fiscal years 1990 through 1994, 23 departments and independent agencies implemented 215 programs in the former Soviet Union, with 3 agencies implementing the majority of noncredit programs. National Export Strategy (GAO/NSIAD-96-132R, Mar. 26, 1996) The Trade Promotion Coordinating Committee (TPCC) is an interagency coordinating group legislatively mandated to establish governmentwide priorities for federal export promotion activities and propose an annual unified federal budget reflecting those priorities. While TPCC has made efforts to develop interagency performance measures, it has yet to create measures sufficiently refined to influence budget allocation decisions. Farm Bill Export Options (GAO/GGD-96-39R, Dec. 15, 1995) GAO identified options for improving agricultural export assistance programs within the Department of Agriculture, including improving coordination among, restructuring, and abolishing some federal export promotion programs. Commerce’s Trade Functions (GAO/GGD-95-195R, June 26, 1995) GAO commented on how federal trade activities might be consolidated if the Department of Commerce were abolished. Commerce plays a significant role in several international trade functions, including trade policy-making and negotiating, export promotion, trade regulation, and trade data collection and analysis. GAO listed other agencies that are involved in performing these and other trade functions. 
Export Promotion: Initial Assessment of Governmentwide Strategic Plan (GAO/T-GGD-93-48, Sept. 29, 1993) TPCC’s initial effort to develop a governmentwide strategic plan for federal export promotion programs presented a status report on progress to date. TPCC did not, however, reach consensus on priorities, nor did TPCC create a unified budget proposal for federal trade promotion programs, as required under TPCC’s legislative mandate. Export Promotion: Improving Small Businesses’ Access to Federal Programs (GAO/T-GGD-93-22, Apr. 28, 1993) GAO endorsed in principle a network of “one-stop shops” to improve the service delivery of export promotion programs. Under the current fragmented system, contacting multiple offices can leave companies confused as to what services are available and may discourage some from seeking assistance. Export Promotion: Governmentwide Strategy Needed for Federal Programs (GAO/T-GGD-93-7, Mar. 15, 1993) While significant funds are devoted to export promotion programs, these are not allocated on the basis of any governmentwide strategy or set of priorities. Consequently, taxpayers do not have reasonable assurance that their money is being effectively used to emphasize sectors or programs with the highest potential return. The Export Enhancement Act of 1992 incorporated GAO’s recommendations for mandating the TPCC to devise a governmentwide strategic plan and propose an annual unified federal budget for export promotion. Export Promotion: Federal Approach Is Fragmented (GAO/T-GGD-92-68, Aug. 10, 1992) In fiscal year 1991, 10 federal agencies offered export promotion programs, which spent about $2.7 billion. This system is characterized by funding imbalances and program inefficiencies. GAO recommended that the Secretary of Commerce, as chair of the TPCC, work with member agencies to develop a strategic plan and ensure that the budget requests for export promotion programs are consistent with priorities. Export Promotion: Overall U.S. 
Strategy Needed (GAO/T-GGD-92-40, May 20, 1992) Ten federal agencies offer export promotion services, in an often inefficient and sometimes confusing manner. This testimony describes specific instances of fragmentation and its consequences to the U.S. business community and taxpayers. Export Promotion: U.S. Programs Lack Coherence (GAO/T-GGD-92-19, Mar. 4, 1992) The lack of a governmentwide strategy for a system of export promotion programs implies that much more might be achieved with existing resources if they were allocated according to national priorities and administered by a different agency structure. TPCC has had some modest successes in coordinating federal export promotion efforts, but the government cannot devise a coherent export promotion strategy one agency at a time. Customs Service and INS: Dual Management Structure for Border Inspections Should Be Ended (GAO/GGD-93-111, June 30, 1993) Long-standing coordination problems between the two agencies responsible for primary inspections at land border points of entry could best be resolved by ending the dual management structure. GAO presented several options for change to prepare the government to meet the broader challenges posed by changing international business competition and increasing international migration flows. Drug Control: Reauthorization of the Office of National Drug Control Policy (GAO/T-GGD-97-97, May 1, 1997) Given the complexity of issues and the fragmentation of national drug control strategy among more than 50 agencies, GAO endorsed the continued need for a central planning agency, such as the Office of National Drug Control Policy (ONDCP), to coordinate the nation’s drug control efforts. ONDCP has recently begun a new effort to develop national drug control performance measures, relying on working groups consisting of representatives from federal drug control agencies and state, local, and private organizations. 
ONDCP and operational agency data should be considered together because results achieved by one agency in reducing the use of drugs may be offset by less favorable results by another agency. Drug Control: Observations on Elements of the Federal Drug Control Strategy (GAO/GGD-97-12, Mar. 14, 1997) This report provides information on ONDCP’s development of national-level measures of drug control performance and assesses the U.S. Coast Guard’s performance measures for its antidrug activities in the context of the Results Act. Drug Control: Long-Standing Problems Hinder U.S. International Efforts (GAO/NSIAD-97-75, Feb. 27, 1997) GAO endorsed ONDCP’s efforts to prepare a long-term strategic plan and suggested an approach to planning and budgeting for drug control similar to that used in DOD. Drug Control: Reauthorization of the Office of National Drug Control Policy (GAO/T-GGD-94-7, Oct. 5, 1993) Given the persistent severity of the drug problem and the large number of federal, state, and local agencies working on the problem, GAO saw a continuing need for a central planning agency to provide leadership and coordination. GAO recommended that the Congress reauthorize ONDCP for an additional finite period of time and suggested that ONDCP be directed to develop additional performance measures to assess progress in reducing drug use and to incorporate the measures into annual national drug control strategies. Drug Control: Coordination of Intelligence Activities (GAO/GGD-93-83BR, Apr. 2, 1993) GAO described instances of duplication and overlap in the analysis and reporting of drug intelligence data, listing federal centers involved in these activities. The report noted that ONDCP, charged with managing the nation’s war on drugs, establishes priorities and encourages agency cooperation but does not have the authority to direct agency intelligence activities. Drug Control: Inadequate Guidance Results in Duplicate Intelligence Production Efforts (GAO/NSIAD-92-153, Apr. 
14, 1992) GAO cited areas of duplication and overlap and recommended that DOD develop guidance for DOD organizations involved in antidrug efforts. Controlling Drug Abuse: A Status Report (GAO/GGD-88-39, Mar. 1, 1988) GAO provided an overview of the drug problem and the federal response. The report noted that information about which antidrug programs worked best was lacking and that fragmented and uncoordinated antidrug policies and programs remained obstacles to the success of federal efforts. Federal Drug Interdiction Efforts Need Strong Central Oversight (GAO/GGD-83-52, June 13, 1983) At the time of this review, authority and responsibility for federal drug interdiction efforts were split among three agencies in three executive departments, each with different programs, goals, and priorities. Very little information was available that could be used as a basis for evaluating program results. Legislation passed in 1972 and 1976 recognized that fragmentation of federal efforts was a problem and required the President to develop a comprehensive national drug strategy and appoint a drug abuse policy coordinator. However, existing strategies had not defined agencies’ roles, and the drug abuse policy coordinator lacked authority to set priorities in federal drug efforts. Federal Law Enforcement: Information on Certain Agencies’ Criminal Investigative Personnel and Salary Costs (GAO/T-GGD-96-38, Nov. 15, 1995) Federal Law Enforcement: Investigative Authority and Personnel at 13 Agencies (GAO/GGD-96-154, Sept. 30, 1996) Federal Law Enforcement: Investigative Authority and Personnel at 32 Organizations (GAO/GGD-97-93, July 22, 1997) In this series, GAO reported on the jurisdictional overlaps among organizations authorized to investigate suspected criminal violations of federal law. GAO noted that the growth of federal law enforcement activities has been evolutionary, with additional organizations established in response to new laws and expanding jurisdictions. 
In the September 1996 report, GAO provided information on 13 federal organizations that employed 700 or more law enforcement investigative personnel; in the July 1997 report, 32 additional federal organizations, including 20 inspectors general offices, employing more than 25 but fewer than 700 personnel were profiled. Collectively, these organizations employed almost 50,000 investigative personnel as of September 30, 1996. Drug Trafficking: Responsibilities for Developing Narcotics Detection Technologies (GAO/T-NSIAD-97-192, June 25, 1997) Four agencies—ONDCP, Customs, DOD, and OMB—are primarily responsible for coordinating or developing narcotics detection technologies. ONDCP and Customs have differing views on the need for various detection technologies—for example, the specific types of technologies needed along the southwest border. GAO believes these differing views should be resolved as ONDCP and Customs work with other agencies in preparing a long-term technology development plan. Terrorism and Drug Trafficking: Responsibilities for Developing Explosives and Narcotics Detection Technologies (GAO/NSIAD-97-95, Apr. 15, 1997) Four agencies—the Federal Aviation Administration (FAA), the National Security Council, the Department of Transportation (DOT), and OMB—are responsible for overseeing or developing explosives detection technologies, while other agencies—DOD, ONDCP, Customs, and OMB—are primarily responsible for coordinating or developing narcotics detection technologies. GAO noted that these agencies have several joint efforts to strengthen development of explosives and narcotics detection technologies but have not yet agreed to formal understandings on how to establish standards for explosives detection systems, profiling and targeting systems, and the deployment of canine teams at airports. In addition, the agencies have not agreed on how to resolve issues related to a joint-use strategy and liability. 
Joint technology development is important because similar technologies are used to detect explosives and narcotics. GAO recommended that the Secretaries of Transportation and the Treasury establish a memorandum of understanding on how FAA, Customs, the Bureau of Alcohol, Tobacco and Firearms, and other agencies are to work together to address issues surrounding the development of these technologies. GAO also suggested that the Congress consider directing the Secretaries of Transportation and the Treasury to provide an annual report on all the government’s efforts to develop and field explosives and narcotics detection technology. Land Management Agencies: Major Activities at Selected Units Are Not Common Across Agencies (GAO/RCED-97-141, June 26, 1997) At six land management agencies, little commonality existed among the 31 different mission-related activities—including cultural and natural resource management, habitat conservation, and rangeland management—identified by GAO. Visitor services, maintenance, and construction were the most common major activities, being performed at units of three or more of the six agencies, but most agency resources were devoted to unique activities related to their specific missions. Forest Service Decision-making: A Framework for Improving Performance (GAO/RCED-97-71, Apr. 29, 1997) In this report examining the Forest Service’s decision-making process, GAO discussed a variety of internal and external causes of inefficiency and ineffectiveness, including unresolved interagency issues. For example, although authorized to plan along administrative boundaries, such as those defining national forests and parks, the agencies are required to analyze environmental concerns along the boundaries of natural systems, which can lead to duplicative environmental analyses, increased costs, and less effective land management decision-making. 
GAO also noted that land management and regulatory agencies do not work together to address issues that transcend their boundaries and jurisdictions and that environmental and socioeconomic data gathered by the agencies are often not comparable and have large gaps. Federal Land Management: Streamlining and Reorganization Issues (GAO/T-RCED-96-209, June 27, 1996) GAO’s work at four land management agencies—the National Park Service, the Bureau of Land Management, and the Fish and Wildlife Service within the Department of the Interior, and the Forest Service within the Department of Agriculture—indicated that streamlining the existing structure and reorganizing it are not mutually exclusive. However, such efforts will require a coordinated approach within and across agency lines to avoid creating unintended consequences for the future. Ecosystem Management: Additional Actions Needed to Adequately Test a Promising Approach (GAO/RCED-94-111, Aug. 16, 1994) GAO described barriers to the planned governmentwide ecosystem management concept, including the fact that data needed for ecosystem management, which are collected independently by various agencies for different purposes, are often not comparable and insufficient. A governmentwide approach to ecosystem management would require unparalleled coordination among federal agencies as well as consensus-building among federal and nonfederal parties. Forestry Functions: Unresolved Issues Affect Forest Service and Bureau of Land Management Organizations in Western Oregon (GAO/RCED-94-124, May 17, 1994) Summarizing efforts at the Forest Service and the Bureau of Land Management to rethink their organizational structures and relationships, GAO suggested that an agency-by-agency approach to downsizing and restructuring may not have the potential to achieve efficiencies that could be derived through a collaborative federal approach to land management. International Environment: U.S. 
Funding of Environmental Programs and Activities (GAO/RCED-96-234, Sept. 30, 1996) At least five federal agencies spent nearly $1 billion from 1993 through 1995 in support of 12 international environmental agreements. These agencies exhibited significant differences in both the amount of their spending and in the purposes for which the money was spent. Federal Facilities: Consistent Relative Risk Evaluations Needed for Prioritizing Cleanups (GAO/RCED-96-150, June 7, 1996) EPA has designated 154 sites, involving facilities operated by at least five federal departments, as priorities warranting further study and possible cleanup. However, EPA’s listing does not fully and completely identify the most contaminated facilities because, among other reasons, (1) some federal agencies have not finished identifying the universe of their contaminated sites or completed the preliminary assessment of the extent of contamination, and (2) EPA has not developed evaluation priorities because of the poor quality of data received from other federal agencies. Water Quality: A Catalogue of Related Federal Programs (GAO/RCED-96-173, June 19, 1996) GAO identified 72 federal programs and other initiatives in eight departments and agencies that assist states, municipalities, individuals, and others in their efforts to improve and/or protect water quality from various pollution threats. 
Pursuant to a congressional request, GAO described the challenge of multiple and overlapping federal programs within the framework of the Government Performance and Results Act, focusing on specific ways in which the Results Act can focus attention on these management challenges and help to develop strategies to harmonize federal responses. GAO noted that: (1) GAO's work has documented the widespread existence of fragmentation and overlap from both the broad perspective of federal missions and from the more specific viewpoint of individual federal programs; (2) GAO's work has shown that as the federal government has responded over time to new needs and problems, many federal agencies have been given responsibilities for addressing the same or similar national issues; but GAO's work also suggests that some issues will necessarily involve more than one federal agency or more than one approach; (3) taken as a whole, this body of work indicates that fragmentation and overlap will present a particular and persistent challenge to the successful implementation of the Results Act; (4) but at the same time, the Results Act should offer a new and structured framework to address crosscutting issues; (5) each of its key stages--defining missions and desired outcomes, measuring performance, and using performance information--offers a new opportunity to address fragmentation and overlap; (6) for example, the Results Act is intended to foster a dialogue on strategic goals involving the Congress as well as agency and external stakeholders; (7) this dialogue should help to identify agencies and programs addressing similar missions and associated performance implications; (8) the act's emphasis on results-based performance measures should lead to more explicit discussions of contributions and accomplishments within crosscutting programs and encourage related programs to develop common performance measures; (9) if the Results Act is successfully implemented, performance information 
should become available to clarify the consequences of fragmentation and the implications of alternative policy and service delivery options, which, in turn, can affect future decisions concerning department and agency missions and the allocation of resources among those missions; (10) emphasizing missions, goals, and objectives, as envisioned by the Results Act, should facilitate a broader recognition of the nature and extent of fragmentation and overlap; and (11) however, past efforts to deal with crosscutting federal activities and recent developments in both the executive branch and the Congress underscore the need for specific institutions and processes to sustain and nurture a focus on these issues.
The FSM, the Marshall Islands, and Palau are among the smallest countries in the world. In 2008, the three FAS had a combined resident population of approximately 179,000—104,000 in the FSM, 54,000 in the Marshall Islands, and 21,000 in Palau. In 1947, the United States entered into a trusteeship with the United Nations and became the administering authority of the FSM, the Marshall Islands, and Palau. The four states of the FSM voted in 1978 to become an independent nation, and the Marshall Islands established a constitutional government and declared itself a republic in 1979. Both the FSM and the Marshall Islands remained subject to the authority of the United States under the trusteeship agreement until 1986, when a Compact of Free Association went into effect between the United States and the two nations. In 1994, Palau also entered a Compact of Free Association with the United States and became a sovereign state. Under the compacts, FAS citizens are exempt from meeting the visa and labor certification requirements of the Immigration and Nationality Act (INA) as amended. The migration provisions of the compacts allow compact migrants to enter the United States (including all U.S. states, territories, and possessions) and to lawfully work and establish residence indefinitely. In addition, under the compacts, the United States provided economic assistance, and access to certain federal services and programs, among other things. Also under the compacts, the United States has a responsibility for the defense of the FAS, and the compacts provide the United States with exclusive military use rights in these countries. A further compact-related agreement with the Marshall Islands secured the United States access to the U.S. military facilities on Kwajalein Atoll, which are used for missile testing and space tracking activities. 
In the 1986 compacts’ enabling legislation, Congress stated that it was not its intent to cause any adverse consequences for United States territories and commonwealths and the state of Hawaii. Congress further declared that it would act sympathetically and expeditiously to redress any adverse consequences and authorized compensation for these areas that might experience increased demands on their educational and social services by compact migrants from the Marshall Islands and the FSM. The legislation also required the President to report and make recommendations annually to the Congress regarding adverse consequences resulting from the compact and provide statistics on compact migration. In November 2000, Congress made the submission of annual impact reports optional and shifted the responsibility for preparing compact impact reports from the President, with Interior as the responsible agency, to the governors of Hawaii and the territories. In December 2003, Congress approved the amended compacts with the FSM and the Marshall Islands and took steps in the amended compacts’ enabling legislation to address compact migrant impact in U.S. areas. The legislation restated Congress’s intent not to cause any adverse consequences for the areas defined as affected jurisdictions—Guam, Hawaii, the CNMI, and American Samoa. The act authorized and appropriated $30 million for each fiscal year from 2004 to 2023 for grants to the affected jurisdictions, to aid in defraying costs incurred by these jurisdictions as a result of increased demand for health, educational, social, or public safety services, or for infrastructure related to such services, due to the residence of compact migrants in their jurisdiction. Interior’s Office of Insular Affairs (OIA) reviews the affected jurisdictions’ annual proposals for the use of the funds and provides them to affected jurisdictions as grants. 
Grants are to be used only for health, educational, social, or public safety services, or infrastructure related to such services due to the residence of compact migrants. Under the amended compacts’ enabling legislation, the affected jurisdictions are to receive their portion of the $30 million per year through 2023 in proportion to the number of compact migrants living there, as determined by an enumeration to be undertaken by Interior and supervised by Census or another organization at least every 5 years beginning in fiscal year 2003. The act permits Interior to use up to $300,000 of the compact impact funds, adjusted for inflation, for each enumeration. The legislation defines the population to be enumerated as persons, or those persons’ children under the age of 18, who pursuant to the compacts are admitted to, or resident in, an affected jurisdiction as of the date of the most recently published enumeration. In contrast to the original compacts’ enabling legislation, the amended compacts’ enabling legislation permits, but does not require, affected jurisdictions to report on compact migrant impact. If Interior receives such reports from the affected jurisdictions, it must submit reports to Congress that include, among other things, the Governor’s comments and Administration’s analysis of any such impacts. Under the initial compacts with the FSM and the Marshall Islands, the United States provided $2.1 billion in economic assistance to these governments. Under the amended compacts, the United States will provide an estimated combined total of $3.6 billion in economic assistance, much of it in a form known as “sector grants,” in annually decreasing amounts from 2004 through 2023. 
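The statutory allocation rule described above—dividing the $30 million per year among the affected jurisdictions in proportion to their enumerated compact migrant populations—can be sketched as follows. The jurisdiction names follow the report; the migrant counts are illustrative only, not actual enumeration results.

```python
def allocate_impact_grants(migrant_counts, total_funds=30_000_000):
    """Divide compact impact funds among affected jurisdictions in
    proportion to their enumerated compact migrant populations, as the
    amended compacts' enabling legislation requires."""
    total_migrants = sum(migrant_counts.values())
    return {
        jurisdiction: total_funds * count / total_migrants
        for jurisdiction, count in migrant_counts.items()
    }

# Illustrative counts only -- not actual enumeration results.
counts = {"Guam": 18_000, "Hawaii": 12_000, "CNMI": 2_000}
grants = allocate_impact_grants(counts)
# Each jurisdiction's grant share equals its share of enumerated migrants.
```

Because the allocation is purely proportional, any systematic undercount in one jurisdiction relative to the others shifts grant money away from it, which is why the fairness of the enumeration method matters to the affected jurisdictions.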
The amended compacts require that the sector grants be targeted to sectors such as education, health care, the environment, public sector capacity building, private sector development, and public infrastructure, or for other sectors as mutually agreed, with priority given to education and health. The amended compacts also established two management committees, the U.S.-Federated States of Micronesia Joint Economic Management Committee and the U.S.-Republic of the Marshall Islands Joint Economic Management and Financial Accountability Committee. Each committee has two FAS representatives and three U.S. representatives from, respectively, Interior, the Department of State, and the Department of Health and Human Services, with the Interior representative serving as chair. These committees review and approve compact grant allocations and performance objectives for the upcoming year on an annual basis and may attach conditions to the grants. Figure 1 shows the locations of the FAS and the affected jurisdictions. Census has gathered data through multiple efforts that can be used to describe aspects of the FAS migration to U.S. areas. Data are currently available through Census’s decennial censuses, which cover all U.S. areas, and its American Community Survey (ACS), which covers the 50 states, the District of Columbia, and Puerto Rico. In addition, under agreements with Interior, Census has conducted special enumerations of compact migrants in affected jurisdictions, such as in 2003 and the 2008 enumeration required by the amended compacts’ enabling legislation. The next such required enumeration must occur by 2013. The decennial census is an enumeration of the U.S. population that is constitutionally required every 10 years. The 2010 decennial census data were gathered as of a specific day: April 1, 2010. The 2010 decennial census in the 50 states is a 10-question survey that gathers limited demographic data such as sex, age, and race. 
The 2010 decennial census race question provided multiple choices for respondents to identify the race of each member of the household, with one choice being “Other Pacific Islander.” Respondents could identify multiple races for each individual and could further identify themselves by writing in a specific race. The decennial census in the 50 states does not collect respondents’ place of birth or year of entry, both of which are needed to identify those who arrived during the period of the compacts and are defined as compact migrants by the amended compacts’ enabling legislation.  Unlike the census in the 50 states, the decennial census in the insular areas—including Guam and the CNMI—is a detailed survey that collects a variety of demographic and economic information including respondents’ place of birth and year of entry, both of which are needed to identify compact migrants. Begun in 2005, the ACS uses a series of monthly samples to produce data for the same small areas of the United States (census tracts and block groups) used in the decennial census and formerly surveyed via the decennial census long form. To conduct the ACS, Census mails survey forms to selected households and, if a household does not respond, follows up by telephone and sometimes in person. Census then uses the information obtained from the surveys to estimate results for the entire population of larger areas that have a determined level of statistical precision. The ACS collects a variety of demographic and economic information on an ongoing basis, including data such as place of birth and year of entry that can be used to identify compact migrants. The ACS reports estimates from individual or multiple years of data rather than point-in-time counts, such as the decennial census provides. Interior has conducted four sets of enumerations of compact migrants in affected jurisdictions. 
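The identification rule described above—classifying a respondent as a compact migrant by place of birth and year of U.S. entry relative to the compacts' effective dates—can be sketched as a simple classifier. The effective dates (1986 for the FSM and the Marshall Islands, 1994 for Palau) come from the report; the field names are illustrative, and the sketch simplifies the statutory definition, which also covers such migrants' children under age 18.

```python
# Compact effective dates per the report: FSM and the Marshall Islands
# in 1986, Palau in 1994.
COMPACT_EFFECTIVE_YEAR = {
    "Federated States of Micronesia": 1986,
    "Marshall Islands": 1986,
    "Palau": 1994,
}

def is_compact_migrant(place_of_birth, year_of_entry):
    """True if place of birth and year of U.S. entry indicate arrival
    under a compact. Simplified: the statutory definition also includes
    such migrants' children under age 18."""
    effective = COMPACT_EFFECTIVE_YEAR.get(place_of_birth)
    return effective is not None and year_of_entry >= effective
```

This is why the 50-state decennial census form cannot identify compact migrants directly: it collects neither place of birth nor year of entry, the two inputs the classification needs.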
The 1993, 1998, and 2003 surveys used the “snowball” technique; Census, working under an interagency agreement with Interior, employed a two-pronged approach in 2008. In the snowball technique used by Census for the 2003 survey and by prior surveys, trained workers who spoke the FAS languages asked the respondent in compact migrant households for referrals to other compact migrant households until they had surveyed every identified compact migrant. The surveys provided a count of compact migrants and demographic information such as employment, occupation, education, and reasons for migration. The snowball technique is a nonprobability method. The two-pronged approach used one approach in Guam and the CNMI and another in Hawaii. In Guam and the CNMI, Census designed a block sample probability survey and collected only the data needed to establish whether the respondents and their children could be classified as compact migrants. Census used the survey results to produce an estimate for each affected jurisdiction’s population. In Hawaii, Census tabulated the data from the 2005, 2006, and 2007 ACS needed to identify migrants and their children. In the CNMI, Census surveyed only the island of Saipan. Because few migrants and approximately 10 percent of the total CNMI population live on the CNMI’s other inhabited islands, Census estimated the migrant population on the other islands using data from the 2000 decennial census. Census conducted its work under interagency agreements with Interior for both enumerations. Census and Interior officials did not retain a record of the cost of the 2003 snowball enumeration, but in 2011 the director of the 2003 effort estimated the cost at $400,000 to $500,000, including Census headquarters and field costs but excluding the cost of a final report. In 2008, the two-pronged approach cost approximately $1.3 million, including headquarters and field costs and the cost of final reporting. 
The combined data from Census’ 2005-2009 ACS and the 2008 required enumerations in Guam and the CNMI estimated approximately 56,000 compact migrants—nearly a quarter of all FAS citizens—living in U.S. areas, with the largest populations in Guam and Hawaii. An estimated 57.6 percent of all compact migrants lived in affected jurisdictions: 32.5 percent in Guam, 21.4 percent in Hawaii, and 3.7 percent in the CNMI. According to ACS data, nine mainland states had estimated compact migrant populations of more than 1,000. (See fig. 2.) In comments on a draft of this report, the government of Arkansas stated that it had serious doubts about the accuracy of the ACS estimate for Arkansas shown in figure 2, particularly in comparison to the higher count implied by 2010 decennial Census data and school enrollment data from Springdale, Arkansas. See appendix IV for a discussion of the differences between these data sources, and see appendix II for the varying estimates in Arkansas. On the basis of these combined data, we estimate that approximately 68 percent of compact migrants were from the FSM, 23 percent were from the Marshall Islands, and 9 percent were from Palau. According to these estimates, although the FSM produced the highest number of migrants, Marshallese predominated in Arizona, Arkansas, California, and Washington. See appendix III for Census’s 2005-2009 ACS estimates of compact migrants by U.S. area and FAS of origin. Census has also published the 2010 decennial census counts of Pacific Islanders, with respondents identifying themselves by race, for the 50 U.S. states. Decennial census data include published state-level information on the Marshallese population and will provide counts of ethnicities from the FSM and Palauans in the future. According to these data, there are more than 1,000 Marshallese identified by race in five states, with the largest number in Hawaii and Arkansas. 
See appendix IV for information on the Marshallese population gathered through the 2010 decennial census in the 50 states. Surveys that Interior conducted in affected jurisdictions from 1993 through 2008 show growth in the compact migrant populations in Guam and Hawaii. In the CNMI, the compact migrant population declined between 2003 and 2008, mirroring a general decline in the CNMI population. (See fig. 3.) From 2003 through 2008, compact migrants as a percentage of the total population grew in Guam and Hawaii. In 2008, compact migrants represented approximately 12 percent of the total population in Guam and 1 percent of the total population of Hawaii. (See fig. 4.) Census’s 2003 survey of compact migrants in the affected jurisdictions found that most migrated to the affected jurisdictions for employment or to accompany migrating relatives. Employment was the most common reason for migration in Guam and the CNMI, followed by accompanying relatives. In Hawaii, slightly more migrants identified accompanying relatives than employment as their reason for migration. In Guam and the CNMI, less than 1 percent of migrants cited medical reasons for migration; in Hawaii, 10 percent cited medical reasons. Census’s 2008 survey did not ask about reasons for migration. However, during our interviews in 2011, compact migrants and officials from FAS embassies and consulates identified employment opportunities, educational opportunities, accompanying relatives, and access to health care as reasons for migration, similar to the findings from Census’s previous surveys. The two-pronged 2008 enumeration, on which the current allocation of compact impact grants is based, had certain strengths and limitations. 
Strengths of the 2008 enumeration included its reliance on available data in Hawaii, lowering the enumeration’s cost, and its use of a probability method in all jurisdictions, which allows Census to statistically calculate the quality of the data and report margins of error for the enumerations in all three jurisdictions. Limitations of Census’s approach for the 2008 enumeration included its use of two different methods and its use of data from two different time periods, both of which affect the perceived fairness and usefulness of the enumerations. In addition, data resulting from the 2008 approach have limited comparability with data from the prior surveys and include limited demographic information, limiting the usefulness of the 2008 data for purposes other than the required enumeration.
- Use of two different methods. The data produced by block sampling in Guam and the CNMI and the ACS tabulations in Hawaii that were used for the 2008 enumeration are not fully comparable among the affected jurisdictions.
- Use of data from different time periods. Because the 2008 enumeration used data from two different time periods—2008 for Guam and the CNMI and 2005 to 2007 for Hawaii—the enumerations for the respective jurisdictions do not reflect the continuing migration to Hawaii after 2007 but do reflect such migration to Guam and the CNMI. The effect of the earlier time frame is to undercount the compact migrants in Hawaii relative to the counts in Guam and the CNMI.
- Limited comparability with prior enumeration. The shift in approaches from prior enumerations to 2008 limits Interior’s and affected jurisdictions’ ability to draw inferences from trends in the data. Some differences in the counts for those years may be attributable to the change in methodology rather than changes in the populations. However, using the same methodology at different points in time does not guarantee comparability of data across time.
- Limited collection of demographic data. 
In 2008, Census did not collect information on characteristics of compact migrants in Guam and the CNMI beyond that required for the enumerations. Collected ACS demographic data on characteristics such as employment, income, and age distribution may not be statistically reliable for populations as small as that of compact migrants in Hawaii. The numbers of compact migrants found by the ACS in each year’s Hawaii survey are very small. For example, the 2006 ACS identified, by place of birth and arrival after the compact’s effective date, 55 persons from the FSM, 30 persons from the Marshall Islands, and 3 from Palau. The ACS for 2005-2007 identified a cumulative total of 295 compact migrants, from which Census estimated the total reported population. Affected jurisdictions expressed concerns about the potential accuracy and fairness of the 2008 enumeration approach. Guam, Hawaii, and CNMI officials expressed a preference for the snowball method that was used for the prior enumeration, stating that it was a more suitable approach to enumerating compact migrants. Further, though the snowball method used in the past had undercounted migrants, these officials were concerned that the 2008 approach would also miscount. Both Guam and CNMI officials stated that the compact migrant population changed addresses frequently, potentially affecting the sampling methodology and leading to a miscount. Hawaii expressed concern that the use of ACS data for the state from earlier years would not reflect recent migration and, in contrast to the special survey to be conducted in Guam and the CNMI, would eliminate Hawaii’s local input into the survey while permitting such input from Guam and the CNMI. However, Guam and the CNMI also stated that implementation of the survey was rushed and they had only limited opportunity to provide such input. In addition, Guam, Hawaii, and CNMI officials expressed concern that the approach chosen for 2008 would provide fewer demographic data. 
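The statistical-reliability concern above can be illustrated with a simplified calculation: when a rare characteristic is estimated from only a handful of sampled respondents, the relative margin of error is large. The ACS itself uses survey design weights and replicate-based variance estimation; the binomial approximation below is a simplified sketch, and the sample figures are made up for illustration.

```python
import math

def approx_margin_of_error(sample_hits, sample_size, z=1.645):
    """Approximate 90%-confidence margin of error for an estimated
    proportion via a simple binomial approximation. (The ACS actually
    uses design weights and replicate variance estimation.)"""
    p = sample_hits / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)
    return z * se

# Illustration: 88 compact migrants identified among 30,000 sampled
# respondents (hypothetical sample size).
p = 88 / 30_000
moe = approx_margin_of_error(88, 30_000)
relative_moe = moe / p  # large relative error for a rare characteristic
```

Roughly, the relative margin of error scales with one over the square root of the number of sampled migrants, so estimates built on a few dozen sampled cases carry uncertainty on the order of 15 to 20 percent of the estimate itself.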
With the University of Hawaii taking the lead, the Hawaii Governor’s office prepared an unsolicited proposal to Interior to conduct a snowball survey in all three affected jurisdictions. Responding to the affected jurisdictions’ concerns, Census officials stated that the snowball method was statistically insufficient and was unlikely to meet statistical survey criteria established by OMB in 2006. These criteria require agencies initiating a new statistical survey to document the precision required of the estimates (e.g., the size of differences that need to be detected). Interior offered to adopt Hawaii’s proposal if all affected jurisdictions agreed to it; however, Guam and the CNMI did not agree to the proposal. In addition, Interior cited Census’s independence as an advantage of its conducting the enumerations. Officials in all three affected jurisdictions, however, remained dissatisfied with the 2008 enumeration approach. (See app. V for attributes of the 2008 approach compared with the approach used for the prior enumeration.) Although Interior has not yet selected an approach for its 2013 enumeration of compact migrants in the affected jurisdictions, Interior and Census officials are discussing a preliminary approach that would have strengths and limitations similar to those we found in the 2008 approach. As of July 2011, according to Census officials, no agreement was in place for Census to conduct this work. However, according to both Interior and Census officials, if Interior employs Census for the 2013 enumeration, Census would again deploy a two-pronged approach, using the 2010 decennial census results for Guam and the CNMI and the ACS for Hawaii. Interior has not determined the cost of the preliminary approach or weighed its strengths and limitations. Our analysis shows that the strength of the preliminary 2013 approach would be its low cost; because it would draw solely from existing Census data, it would require no new data collection. 
However, it would have limitations similar to those we found in the 2008 approach, compromising both its fairness as a basis for distributing compact impact funds and the usefulness of the data it produces.
- Use of two different methods. Using the counts provided through the full enumeration contained in the 2010 census in Guam and the CNMI produces single numbers for these jurisdictions. In Hawaii, use of the ACS would provide an estimated total based on a sample with a calculated level of precision.
- Use of data from different time periods. The preliminary approach would use data from April 1, 2010, for Guam and the CNMI and from multiple ACS monthly samples at different points in time for Hawaii.
- Limited comparability with prior data. The change in enumeration method for Guam and the CNMI would limit the comparability of the 2008 and 2013 enumerations.
- Limited collection of demographic data. Detailed demographic data could be produced for compact migrants in Guam and the CNMI, because the 2010 decennial census in those locations collected such data. However, as in 2008, demographic data from the ACS in Hawaii could lack statistical reliability because of the small number of migrants included in the ACS sample.
In addition, the 2010 census and the ACS may provide different coverage of the compact migrant populations. The 2010 census collection followed a widespread campaign by Census and community groups to encourage participation, while the ACS collection efforts are not accompanied by this level of outreach. For 2004 through 2010, the affected jurisdictions’ reports to Interior show more than $1 billion in costs for services related to compact migrants. During that period, Guam’s reported costs increased by nearly 111 percent, and Hawaii’s costs increased by approximately 108 percent. The CNMI’s reported costs decreased by approximately 53 percent, reflecting the decline in the CNMI compact migrant population. 
Figure 5 shows compact impact costs reported by the affected jurisdictions for 1996 through 2010. For more details, see appendix VI. The affected jurisdictions reported impact costs for educational, health, public safety, and social services. Education accounted for the largest share of reported expenses in all three jurisdictions, and health care costs accounted for the second-largest share overall. (See table 1.) Our analysis of data in affected jurisdictions’ impact reports for 2004 through 2010 found that, reflecting the growing numbers of compact migrants, annual costs for educational services across all jurisdictions increased from approximately $46 million to $89 million, or by 93 percent. Annual costs for health services across all jurisdictions increased from approximately $33 million to $54 million, or by 66 percent. The affected jurisdictions’ impact reports, numerous studies, federal and state officials, and officials from affected jurisdictions have identified several other factors, in addition to growing migrant populations, that contribute to the cost of providing public services to compact migrants.
- Educational services. Compact migrant school children generally lag academically owing to (1) poor-quality schools in the FAS; (2) limited language skills and experience with a school environment; and (3) difficulties in involving parents in their children’s education, due to language barriers. Various officials from affected jurisdictions, Interior, and service providers said these factors increase the resources required to provide educational services to compact migrants relative to other students.
- Health services. FAS citizens have high rates of obesity; diabetes; hypertension; cardiovascular disease; and communicable diseases such as tuberculosis, Hansen’s disease, and sexually transmitted diseases. 
The Department of the Interior’s Inspector General has reported on inadequate health care systems in the FAS, which can lead to the prevalence of these health issues among FAS citizens. These health factors also lead some FAS citizens to migrate in order to gain access to the U.S. health care system. In U.S. areas, compact migrants’ low household incomes may lead many migrants to rely on public health services.
- Social services. Like many other migrant populations, compact migrants often face challenges related to homelessness, reliance on public housing, and crowded living conditions.
Various officials in Guam and Hawaii also cited compact migrants’ limited eligibility for a number of federal programs, particularly Medicaid, as a key contributor to the cost of compact migration borne by the affected jurisdictions. Table 2 shows compact migrants’ eligibility status for selected federal benefit programs. In some cases, affected jurisdictions have provided services for compact migrants at local expense that are similar to those available to U.S. citizens. For example, Guam, Hawaii, and the CNMI provide funding for medical services that, prior to 1996, were available through Medicaid to low-income non-U.S.-citizen compact migrants. U.S.-born children of compact migrants are eligible for the benefits available to them as U.S. citizens. We identified a number of weaknesses in affected jurisdictions’ reporting of compact impacts to Interior from 2004 through 2010 related to accuracy, adequacy of documentation, and comprehensiveness. Examples of such weaknesses include the following (see appendix VI for more details).
- Definition of compact migrants. For several impact reports that we examined, the reporting local government agencies (state and territorial agencies in the affected jurisdictions) did not define compact migrants according to the criteria in the amended compacts’ enabling legislation when calculating service costs. 
For instance, some agencies defined and counted compact migrants using the proxy measures of ethnicity, language, or citizenship rather than the definition in the amended compacts’ enabling legislation. Using ethnicity or language as a proxy measure could lead to overstating costs, since neither measure would exclude individuals who came to the jurisdiction prior to the compact, while using citizenship as a proxy measure could lead to understating costs, since it would exclude U.S.-born children of compact migrants.
• Federal funding. States and territories receive federal funding for specific programs that offsets a portion of the costs of providing services to compact migrants. However, two of the three affected jurisdictions’ public school systems and health agencies did not account for these offsets in their impact reporting, thus overstating reported compact impact costs.
• Revenue. Multiple local government agencies that receive revenues, such as user fees, associated with services provided to compact migrants did not consider them in their compact impact reports, thus overstating reported costs.
• Capital costs. Many local government agencies did not include capital costs in their impact reporting. Capital costs entail, for example, providing additional classrooms to accommodate an increase in students or additional health care facilities. In cases where compact migration has resulted in the expansion of facilities, agencies understated compact migrant impact by omitting these costs.
• Per-person costs. A number of local government agencies used an average per-person service cost for the jurisdiction rather than specific costs associated with providing services to compact migrants. Using the average cost may either overstate or understate the true cost of service provision; for example, Hawaii reported in 2008 that several costly diseases are overrepresented within the compact migrant population.
In some cases—for example, the provision of health services—the service cost for each compact migrant could be determined. However, a number of agencies apply a range of approaches, such as using the simple average cost (e.g., cost per student) or factoring in higher costs if additional or more costly services are used.
• Discretionary costs. Some compact impact costs local government agencies reported were for benefits or services provided at the discretion of the affected jurisdiction.
• Data reliability. One local government agency used compact migrant data that a subsequent compilation of its impact reporting found to be in error, causing an overstatement of total reported costs.
A number of local government agencies did not disclose their methodology, including any assumptions, definitions, and other key elements, for developing impact costs, making it difficult to evaluate reported costs. For years when the affected jurisdictions submitted impact reports to Interior, not all local government agencies in those jurisdictions included all compact impact costs. The scope of reporting also differed among the affected jurisdictions; for example, one jurisdiction did not report costs related to police services. Guidelines that Interior developed in 1994 for compact impact reporting for Guam and the CNMI do not adequately address certain concepts key to reliable estimates of impact costs. Developed in response to a 1993 recommendation by the Interior Inspector General, the guidelines suggest that impact costs in Guam and the CNMI should, among other concepts, (1) exclude FAS citizens who were present prior to the compacts, (2) specify omitted federal program costs, and (3) be developed using appropriate methodologies.
However, the 1994 guidelines do not address certain concepts, such as calculating revenue received from providing services to compact migrants; including capital costs; and ensuring that data are reliable and reporting is consistent. Several Hawaii and CNMI officials from the reporting local government agencies we met with, as well as Interior officials, were not aware of the 1994 guidelines and had not used them. Officials at the Guam Bureau of Statistics and Plans, which was in possession of the guidelines, said that the bureau attempts to adhere to them when preparing compact impact cost estimates. The bureau does not provide these guidelines in its annual letter to the agencies when requesting compact impact costs, since the agencies do not submit their reports to Interior directly. The bureau said it applies the guidelines to the data it receives from the agencies prior to submitting the final report to Interior. However, we found some cases where the bureau and local Guam agencies did not follow the guidelines. Interior’s reporting to Congress on compact impacts reported by the affected jurisdictions has been limited. The amended compacts’ enabling legislation requires Interior, if it receives compact impact comments from the Governor of an affected jurisdiction by February 1, to submit a report with specific required elements on compact impact to Congress no later than May 1 of that year. As of August 2011, Interior had submitted one required report to Congress in 2010 but had not submitted any reports in 2004 through 2009, although at least one affected jurisdiction had reported compact impacts to Interior in each of those years. Although Interior officials stated they were preparing their 2011 congressional report, as of August 2011 it had not been submitted.
Interior’s 2010 report did not address all elements required by the amended compacts’ enabling legislation:
• Interior’s report lacked information from the Guam and Hawaii governors’ compact impact reports regarding increasing compact migrant costs, the types of services being used, and the associated costs for each local government agency.
• Interior’s report did not analyze the impact cost information provided by the two governments. However, Interior noted that the affected jurisdictions’ compact impact reports do not calculate compact migrants’ contributions.
• Interior’s report did not state its views on recommendations for corrective action, such as Hawaii’s suggestion to authorize compact migrant eligibility for all federal assistance programs to reduce impact. However, Interior relayed requests from the governors of Guam and Hawaii for additional funds and provided a summary of Interior’s compact impact funding provided to Guam and Hawaii.
In August 2011, Interior reminded affected jurisdictions of their option to submit annual compact impact reports and identified a point of contact at Interior to which the reports may be submitted. Interior also noted that it is currently developing a process to ensure timely submissions to Congress. Compact migrants participate in local economies through their participation in the labor force, payment of taxes, consumption of local goods and services, and receipt of remittances. Previous compact migrant surveys estimated compact migrants’ participation in the labor force, but existing data on other compact migrant contributions such as tax revenues, local consumption, or remittances are not available or sufficiently reliable to quantify their effects. According to data from the 2003 Surveys of Micronesian Migrants, the majority of compact migrants participated in CNMI’s and Guam’s labor force and over 40 percent participated in Hawaii’s labor force.
However, compact migrants generally participate in the labor force at lower rates than the general population. The 2003 data also showed that compact migrants from the Marshall Islands generally had lower labor force participation rates than compact migrants from the FSM and Palau. Compact migrant workers generally work in low-skilled occupations. According to data from the 2003 survey, the majority of compact migrant workers work in the private sector as (1) operators, fabricators, and laborers; (2) service workers; and (3) technical, sales, and administrative support. Guam and Hawaii do not have more recent data on compact migrant workforce participation, but CNMI Department of Finance data show that, on average, compact migrants comprised 2.3 percent of the CNMI workforce from 2004 through 2009 and had income 14 percent higher than other workers. Persons born in the FAS may also serve in the U.S. armed services and, as of August 2011, 381 were serving on active duty. Compact migrants participate in local economies through taxation, but reliable data quantifying their effect are not available. Guam and Hawaii do not collect data on the ethnicity of taxpayers or other information that could be used to disaggregate the taxes paid by compact migrants from overall receipts. However, for Guam, our estimates show that compact migrant workers paid $971 less (68 percent less) per capita in taxes than other workers in 2009. Approximately 60 percent of this difference results from compact migrant workers’ being much less likely to be employed in Guam’s higher paying public sector. The remaining difference results from the higher number of exemptions that compact migrant workers could claim, on average, for family members when filing taxes. Alone among affected jurisdictions, the CNMI collects data on citizenship that could be used to identify the taxes paid by compact migrants. 
However, the data provided by the CNMI include only the amount of taxes withheld and not the amount ultimately paid. These data may overestimate the amount of taxes paid, since a portion of taxes withheld may be returned to the taxpayer. Compact migrants contribute to the local economy by consuming local goods and services and by spending, in the affected jurisdictions, remittances that they receive from their home islands. Their total consumption and economic effect may be reduced if they remit some of their income to their home islands. Data from 1998 suggest that compact migrants generally consume less of their income than does the general population; however, since that time, no data quantifying consumption by compact migrants have been published. Compact migrants we met with confirmed that they send remittances to their home islands; however, estimates and methodologies for remittances have many limitations and vary significantly across sources, calling their reliability into question. From fiscal years 2004 through 2010, the $30 million in annual compact impact grants, which Interior has awarded in accordance with the enumerations of compact migrants, have addressed a portion of each jurisdiction’s reported impact costs. Of the $210 million in impact grants, approximately $102 million was provided to Guam, $75 million to Hawaii, and $33 million to the CNMI (see fig. 6). In their compact impact reports to Interior, the governors of Guam and Hawaii have highlighted the gaps between their reported impact costs and the amounts of the compact impact grants, requesting that the federal government provide additional support. Interior has approved affected jurisdictions’ applications for compact impact grants to be used for general support of local budgets, for projects, and for specific departmental purchases in the areas of health, education, public safety, and social services.
• Guam.
The largest annual compact impact grants to Guam in fiscal years 2005 through 2010 supported public school construction and maintenance. Most other compact impact grants to Guam funded health and public safety purchases, such as the purchase or renovation of facilities, emergency vehicles, and medical supplies, among many others.
• Hawaii. All compact impact grants to Hawaii in fiscal years 2004 through 2010 were provided to its Department of Human Services to offset the cost of state-funded medical services.
• CNMI. Compact impact grants to the CNMI in fiscal years 2004 through 2010 supported the operations of several CNMI government departments, such as the departments of public health and public safety, and the public school system.
See appendix VII for a description of Interior’s grant reviews and a list of compact impact grants to the affected jurisdictions from fiscal years 2004 to 2011. Compact migrants confront complex challenges related to their unfamiliarity with local language and culture, limited job skills, and difficulty in accessing available services, according to various government officials, service providers, and compact migrants. Compact impact grants are generally not used to directly target these complex challenges. However, a report by the Hawaii Compacts of Free Association Taskforce released in 2008 recommended a review of the allocation and use of compact grants that the state received from Interior to determine whether there is a way to spend compact impact grants that would have a more effective long-term impact. Officials, providers, and migrants identified the following needs:
Language and cultural assistance. Guam education, health, and social service officials reported, among other challenging cultural gaps facing arriving migrants, the need for interpreters to assist patients and families. Hawaii health providers noted that language and cultural barriers compromise care delivery.
In addition, the Hawaii Taskforce report identified a need to develop translation and interpreter resources. A number of compact migrants in Guam and Hawaii identified language and cultural issues as a source of difficulty in using government services and identified a need for translators and language tutors.
Job training. The Governor of Guam noted that FSM migrants in Guam face challenges due to their lack of job skills and education. Various members of the compact migrant community in Hawaii also cited lack of job skills as a challenge and said that job training is needed to help migrants gain employment.
Access to basic services. Hawaii officials identified lack of coordination of services as a challenge. Various FAS officials noted that their citizens are at times frustrated in their attempts to obtain basic documents such as social security numbers and driver’s licenses from officials who are unaware of the compact provisions for compact migrants. In addition, several compact migrants in Hawaii noted that compact migrants are often unaware of available benefits.
To more directly address these needs, various government officials, service providers, and compact migrants suggested the establishment of centers offering multiple services to migrants. In Guam, the Center for Micronesian Empowerment provides culture, language, and job skills training, as well as help in finding employment, to both arriving and resident compact migrants. According to one of the center’s founders, the language and cultural training has reduced employee attrition in companies that hire the trainees, and migrants who receive job skills training are almost guaranteed to find employment. Guam officials also noted that Interior grants had previously funded another resource center for FAS citizens that supported migrant efforts to assimilate and provided outreach and services to newly arriving migrants.
• The Hawaii Taskforce report recommended the establishment of multipurpose cultural outreach service centers or mobile service delivery centers, among other options, to standardize service delivery processes and promote accessibility.
• A senior official at the Hawaii State Department of Health advocated a “one-stop” service center approach for migrants with medical and other government services, with staff who can assist with language and cultural issues. Kokua Kalihi Valley, a community nonprofit in Hawaii, includes elements of such an approach, providing health, social, and youth services, among others, to compact migrants.
• Several compact migrants in Hawaii suggested the establishment of a community center to help people adjust and acclimate—for example, by teaching them how to work with schools and access services.
Sector grants awarded in fiscal years 2004 through 2010 may have helped mitigate compact impact by supporting the health and education sectors and, in some instances, directly targeted issues related to compact impact in the affected jurisdictions. In 2001, we reported that targeting assistance to the health and education sectors in the FSM and the Marshall Islands might lessen compact migration and its impact in the affected jurisdictions. For example, better education systems in the FSM and the Marshall Islands might reduce the motivation to migrate and enable those who do migrate to better succeed in U.S. schools. Also, targeting health spending where health services are limited might reduce the number of citizens who travel to the United States seeking medical care. Further, programs aimed at improving the health status of FAS citizens might reduce the impact of migrating citizens on the U.S. health care system.
Under the amended compacts, the U.S.-Micronesia and U.S.-Marshall Islands joint management committees, chaired by Interior, annually review and approve sector grants that allocate funds primarily for education, health, and infrastructure. The amended compacts and related agreements outline the joint management committees’ responsibilities as including allocating sector grants and recommending ways to increase the effectiveness of sector grant assistance. Based on the joint management committees’ annual approval of sector grants, Interior has made available approximately $808 million in sector grant funds in fiscal years 2004 through 2010. (See table 3 for sector grant allocations approved for fiscal year 2011.) In allocating sector grants for fiscal years 2004 through 2010, the joint management committees did not formally address the needs of compact migrants or their impact on U.S. states and territories, according to Interior officials. In 2011, the committees formally placed compact impact on their annual meeting agendas; however, as of September 2011 they had not allocated 2012 sector grant funding to directly address issues that concern the compact migrants or the affected jurisdictions. The amended compacts indicate that the sector grants are to be used for sectors such as education, health care, the environment, public sector capacity building, and private sector development in the FSM and Marshall Islands but may be used for other sectors as mutually agreed, with priorities in the education and health care sectors. We found some examples of grants that directly address compact migrants’ needs in affected jurisdictions and thus respond to some of the affected jurisdictions’ concerns. 
• For fiscal years 2010 and 2011, three of the four states of the FSM agreed to use a total of approximately $842,000 of supplemental education grants to fund the Center for Micronesian Empowerment, which assists Micronesians in Guam with language and culture training, developing job skills, and finding employment.
• For fiscal years 2009 and 2010, Interior awarded approximately $3.4 million in health sector grants to the FSM and the Marshall Islands to address an outbreak of multidrug-resistant tuberculosis—a public health concern and a costly communicable disease that has occurred among migrants in the affected jurisdictions.
During our visits to the affected jurisdictions in February 2011, the Governor of Guam identified a need to use sector grants in the FSM to improve education and health services to reduce compact impact in Guam, noting that he and the President of the FSM had discussed ways to work together to improve assimilation of migrants from the FSM in Guam. FSM migrants in Guam identified a need for cultural education and job training at home before citizens migrate to the United States. In Hawaii, officials identified the need to address health and social issues in the FSM and the Marshall Islands to better prepare FAS citizens considering migration and reduce the need for migrants to seek health and social services in Hawaii. In addition, compact migrants in Hawaii suggested that U.S. grant funds currently going to the FSM and the Marshall Islands be used to establish a compact migrants’ cultural center in Hawaii. In May 2011, Members of Congress wrote to Interior and the Department of State asking that a portion of sector grants be used to fund a program to prepare FAS citizens for migration and to establish and operate dialysis treatment facilities in the FSM and the Marshall Islands so that patients will not seek treatment in the United States.
In their annual meetings held in August and September of 2011, both joint management committees formally placed compact impact and fiscal year 2012 sector grants on their agendas, but neither committee allocated sector grants that directly address compact migration. Although the compact migrant population represents a tiny fraction of migrants in the United States, it can have significant impacts on the U.S. communities where compact migrants reside. To help defray costs of providing services to compact migrants, Congress has appropriated compact impact funds that Interior allocates to the affected jurisdictions in proportion to the required periodic enumerations of compact migrants. However, developing a cost-effective enumeration approach that is fair, is accepted as credible by affected jurisdictions, and produces additional demographic data remains a challenge. Thorough consideration of the strengths, limitations, and costs of the preliminary approach for the 2013 enumeration, as well as the concerns of affected jurisdictions, would enhance Interior’s ability to select a credible and reliable approach. Although the affected jurisdictions have reported rising costs of addressing compact migrants’ needs for health, education, and social services, the jurisdictions’ estimates of these costs have weaknesses that affect their reliability. Moreover, Interior’s 1994 guidelines for reporting compact impact do not address certain concepts, such as defining compact migrants and calculating revenues, that are essential for reliable estimates of impact costs. Providing more rigorous guidelines to the affected jurisdictions that address concepts essential to producing reliable impact estimates and promoting their use for compact impact reports would increase the likelihood that Interior can provide reliable information on compact impacts to Congress.
Interior’s compact impact grants have generally been used for affected jurisdictions’ budget support, projects, and purchases in the areas of education, health, and public safety. Meanwhile, government officials, service providers, and compact migrants noted the complex challenges confronting both service providers and migrants and suggested approaches to directly address these challenges. For example, centers offering multiple services could address migrants’ needs for basic services as well as facilitate provision of services and improve migrants’ access. One affected jurisdiction also noted the need to review the allocation and uses of the grants to determine whether they could be spent in a way that would increase their long-term effectiveness. Given that compact impact grants only partially offset the affected jurisdictions’ reported rising impact costs, working with the affected jurisdictions to identify alternative uses of the grants could enable Interior to address compact impact more effectively. Available data suggest that about 56,000 citizens of the FSM, the Marshall Islands, and Palau—nearly a quarter of all FAS citizens—reside in the United States and its territories under provisions of the U.S. compacts with those countries. In Guam and Hawaii, officials have advocated the use of sector grants to reduce the impact of compact migration by improving education, health, and social services in the FSM and the Marshall Islands, and compact migrants cited the need for assistance in adapting to life after migration. The joint U.S.-FSM and U.S.-Marshall Islands committees’ allocations of sector grants since 2003 have supported the health and education sectors in the FSM and the Marshall Islands and may indirectly help to mitigate compact impact in the affected jurisdictions.
The committees have included compact impact on their recent agendas; however, they have not yet considered potential uses of the grants to directly address the issues that concern compact migrants or the affected jurisdictions. We recommend that the Secretary of the Interior take the following four actions:
• In order to select the most appropriate approach for its next enumeration of compact migrants, fully consider the strengths and limitations of its preliminary approach for 2013, weighing the cost of the approach with the need for data that will be fair as well as useful to the affected jurisdictions.
• In order to strengthen its ability to collect, evaluate, and transmit reliable information to Congress, disseminate guidelines to the affected jurisdictions that adequately address concepts essential to producing reliable impact estimates, and call for the affected jurisdictions to apply these guidelines when developing compact impact reports.
• In order to promote the most effective use of compact impact grants, work with the affected jurisdictions to evaluate the current use of grant funds and consider alternative uses of these grants to reduce compact impact.
• In order to help mitigate compact impact and better assist FSM and Marshall Islands citizens who migrate to the United States, work with the U.S.-FSM and U.S.-Marshall Islands joint management committees to consider uses of sector grants that would address the concerns of FSM and Marshallese migrants and the affected jurisdictions.
We provided a draft of this report to the Department of the Interior; the Department of State; the Census Bureau; and the governments of Guam, Hawaii, the CNMI, Arkansas, the FSM, Marshall Islands, and Palau for review. All except the Department of State provided written comments, which we have summarized below with our responses. See appendixes VIII through XVI for reproductions of the comments, along with our detailed responses.
Interior generally agreed with our findings and the recommendations that it fully consider the strengths and limitations of enumeration approaches and that it disseminate guidelines on impact estimates. However, Interior disagreed with our recommendation that it work with the affected jurisdictions to evaluate the use of compact impact grant funds and consider alternative uses. Interior stated that the amended compacts’ enabling legislation authorizes broad uses of compact impact grants and that it has chosen to respect the funding priorities of the governors. Further, Interior stated that it did not believe that practical gains can be made by proposing alternatives. We believe Interior should not rule out the possibility of practical gains through a consideration of alternate uses of the grant funds. During our review, government officials and service providers suggested alternative uses of compact impact funding that may more directly address compact impact, such as measures to reduce certain health costs through the provision of preventive care. The governors of Guam and the CNMI agreed with this recommendation, and the governor of Hawaii noted that ideas to increase long-term capacity or efficiency of resources could be of great benefit to the affected jurisdictions. We retain our recommendation for Interior to work with governors to evaluate their current use of funds and to consider alternative uses. Interior agreed with our recommendation that it work with the U.S.-FSM and U.S.-Marshall Islands joint management committees to consider uses of sector grants that would address the concerns of compact migrants and the affected jurisdictions, subject to the funds being used within the FAS. 
However, Interior stated that our draft report implied that compact sector grant funds should be shifted from providing assistance to the FAS governments to providing assistance to FAS citizens living in the affected jurisdictions, an action that Interior sees as inconsistent with the compacts and their enabling legislation. We agree with Interior that compact sector grants are to support the governments of the FSM and the Marshall Islands by providing grant assistance to be used in certain sectors such as education and health care, or for other sectors as mutually agreed. We expect that compact sector grant awards will be provided consistent with the terms of the compacts and the amended compacts’ enabling legislation; we do not intend to imply that funds should be shifted from FAS governments to FAS migrants. In response to Interior’s concern, we clarified that our findings and recommendation highlight the opportunity for the joint management committees to consider the use of sector grants to the FSM and Marshall Islands in ways that address the concerns of FAS citizens—whether they are in the FAS or in U.S. areas—and the concerns of the affected jurisdictions. The recommendation supports consideration of the use of sector grants in ways that respond to the concerns of FSM and Marshall Islands migrants and the affected jurisdictions. Census did not comment on our report’s recommendations but offered a number of largely technical comments on our findings, which we have addressed as appropriate. Census disagreed with our assessment of the limitations of the 2008 enumeration methodologies. However, our findings indicate that the 2008 Guam and CNMI surveys are not comparable with the ACS estimates for Hawaii in terms of their sampling methods and reporting period. In addition, as our report notes, migration is continuing, and Hawaii ACS data does not include nearly a year of additional migration that may be captured in the Guam and CNMI totals. 
Regarding its varying estimates of compact migrants in Arkansas, Census stated that the different estimates are not based on the same criteria and therefore should not be compared. We agree that the surveys have different bases for identification, and we identify several reasons for these differences in appendix IV of the report. We have also noted Arkansas’s and Hawaii’s observations about the accuracy and reliability of the ACS data. The government of Hawaii generally agreed with our recommendations and made several related observations. In particular, in response to our recommendation that Interior disseminate guidelines to the affected jurisdictions for estimating compact impact, the government of Hawaii said it would be willing to consider using such guidelines if they do not create undue burdens. Regarding our recommendation that Interior work with the affected jurisdictions to evaluate current uses of compact impact grants and consider alternative uses, the government of Hawaii noted that it had always used compact impact assistance for direct services to compact migrants, and said it had done so efficiently and effectively. However, Hawaii noted that ideas to increase long-term capacity or efficiency, or proposals to strengthen support infrastructure, could be of future benefit. The government of Hawaii stated that a portion of the sector grants to the FAS might be more effectively used to provide services to their compact migrants and suggested that affected jurisdictions provide input on the use of sector grants. The government of Guam agreed in principle to the four recommendations in our report. Regarding the required enumerations of compact migrants, the government of Guam stated that Interior’s decision not to use the enumerations to collect additional demographic data has resulted in the loss of valuable information. 
The government of Guam also welcomed legislative proposals for federal impact aid for education and restoration of Medicaid eligibility. Regarding our recommendation to consider uses of sector grants to address compact impact, the government of Guam cautioned that while such use may lessen impact on affected jurisdictions, diverting grant funds from use within the FAS must be carefully weighed. The government of Guam also stated that the report does not discuss some options available in the amended compacts' enabling legislation to address compact impact, including direct financial compensation to affected jurisdictions, nondiscriminatory limits on migration, and debt relief to offset previous costs. Our report notes the authorization of additional appropriations but does not address limits on migration. We added a note to the report to describe the debt relief provision but also note that it expired on February 28, 2005. The government of the CNMI generally agreed with our findings and recommendations and stated that Interior should consult with the CNMI on developing cost guidance based on Interior Inspector General, Office of Management and Budget, and GAO guidance. The government of the CNMI also recommended that Congress provide additional appropriations to redress the outstanding costs for services provided to compact migrants from past years to the present. The government of Arkansas generally agreed with our findings but expressed serious reservations about the ACS data shown in figure 2 of our report. The government of Arkansas asked that figure 2 show Census’s 2010 decennial census count based on race rather than the estimate of compact migrants based on ACS 2005-2009 data. We agree that there are differences between the counts and list some of the reasons for the differences in appendix IV. We have added additional text to the report body to present Arkansas’s concerns and more thoroughly describe the differences between the data sources.
The government of the FSM commented that weaknesses we identified in affected jurisdictions’ impact cost reporting, combined with the lack of information on the positive contributions of compact migrants, leaves the net impact unknown. The FSM asked that the service of its citizens in the U.S. armed forces be recognized in our report. In response, we obtained information on the number of FAS-born persons on active duty in the armed forces and have included it in the report. Further, the FSM expressed concern that disagreements regarding the compact migrant enumerations will continue and requested that parties involved in the 2013 enumeration reach an agreement on the best approach. The government of the Marshall Islands stated that a methodology should be developed to calculate net compact impact and requested that the contributions of Marshall Islands citizens in the U.S. armed forces be recognized in our report. The government of the Marshall Islands commented that it views the immigration privileges under the compact as a cornerstone of its free association with the United States and that any changes to them will lead to a deterioration in the relationship between the United States and the Marshall Islands. The government of the Marshall Islands also cited specific steps it has taken to address compact migrant impact, including establishing a task force working on and implementing a program to address communicable diseases, and producing a video for Marshallese that describes intending migrants’ rights, duties, and responsibilities while living in the United States. Regarding the recommendation that Interior work with the U.S.-FSM and U.S.-Marshall Islands joint management committees to consider uses of sector grants, the government of the Marshall Islands stated that the amended compact provides only for uses of the grants in the Marshall Islands. We clarified some statements and our recommendation in response to this observation. 
The government of the Marshall Islands further stated that the amended compacts’ enabling legislation authorized additional appropriations for grants to affected jurisdictions to offset impact and that it is the responsibility of Congress to compensate affected jurisdictions for any adverse impact. The government of Palau generally agreed with our findings. Palau also emphasized that positive compact impact should be determined and asked that the contributions of Palau’s citizens in the U.S. armed forces be recognized in our report. The government of Palau commented that our report does not adequately explore whether compact impact differs among FAS citizens. However, we found that not all local government agencies reported compact impact costs by FAS country, limiting our ability to perform such an analysis. Finally, the government of Palau stated that some persons who entered the United States after the date of the compacts may be lawfully present under authorities other than those of Section 141 of the compact and therefore would not count towards compact impact. We agree and have noted this in the report. We are sending copies of this report to the Secretary of the Interior, the Secretary of State, and the Director of the U.S. Census Bureau. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XVII. This report describes migration to U.S. 
areas from the Federated States of Micronesia (FSM), the Marshall Islands, and Palau under those countries’ compacts of free association with the United States; reviews approaches to enumerating these compact migrants; evaluates reporting of these migrants’ impact on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands (CNMI); and reviews Department of the Interior (Interior) grants related to compact migration. In addition, appendix II provides information on the growing Marshallese compact migrant population in Arkansas and its impact. To describe compact migration to U.S. areas, we reviewed survey data from 1993 through 2010. As part of this review, to approximate the dispersion of compact migrants, we arranged with the U.S. Census Bureau (Census) to purchase a special tabulation of multiyear American Community Survey (ACS) data gathered from 2005 through 2009. These state estimates represent migrants if they are present in sufficient numbers to be reportable; state estimates are unreportable when fewer than 50 people respond in each category of cross-tabulated data. Census also applies statistical disclosure avoidance techniques to the tabulated data to protect respondent confidentiality, such as suppressing the number and location of compact migrants. The new tabulation mirrors the one used by Census to estimate the number of compact migrants in Hawaii in 2008 using ACS data. To determine the trend of migrants as a percentage of the populations of affected jurisdictions and identify reasons for migration, we reviewed our previous report on compact migrant impact and analyzed the information presented in previous enumerations. To estimate the populations of affected jurisdictions, FSM, and the Marshall Islands in 2003 and 2008, we used the 1999 Marshall Islands census, an estimate from the Marshall Islands’ embassy to the U.S.
for the 2011 population, and 2000 and 2010 censuses for the FSM and the affected jurisdictions, assumed that the population changed at a constant rate, and interpolated the population counts for the years in between. The 1993 and 1998 estimates of affected jurisdiction population are existing Census estimates. To assess Interior and Census approaches to enumerating compact migrants in affected jurisdictions, we reviewed the requirement for the enumerations in the amended compacts’ enabling legislation. In addition, we interviewed Census and Interior officials and officials in affected jurisdictions who had contacted or worked with Census and Interior as they developed the 2003 and 2008 enumerations. We also reviewed affected jurisdictions’ written critiques of the enumerations. We reviewed Office of Management and Budget (OMB) survey criteria, and we compared the surveys to these criteria by reviewing the reported methodology of the 2003 survey and the supporting documents for the 2008 survey such as the enumerator’s manual, Census’s source and accuracy statement, and Census quality control review documents. We also conducted a literature review to identify existing studies of the uses and limitations of the various methods for enumerating populations such as compact migrants. To compile the compact impact costs reported by the governments of Guam, Hawaii, and the CNMI, we used the most recent data that they submitted to Interior for 1986 through 2010, Interior’s 2010 compact impact report to Congress, and data from our previous report. We then categorized the reported costs using the categories that the amended compacts’ enabling legislation defines as eligible for compact impact funding—education, health, public safety, and social services and infrastructure related to such services—to identify the main sources of compact impact reported by the affected jurisdictions. 
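The constant-rate population interpolation described earlier in this appendix can be sketched as follows. This is a minimal illustration only: the report does not specify whether the interpolation was linear or geometric, so this sketch assumes a constant annual growth rate (geometric interpolation), and the endpoint counts shown are hypothetical, not actual census figures.

```python
def interpolate_population(year0, pop0, year1, pop1, target_year):
    """Estimate a population between two census counts, assuming the
    population changed at a constant annual rate between them."""
    growth = (pop1 / pop0) ** (1 / (year1 - year0))  # annual growth factor
    return pop0 * growth ** (target_year - year0)

# Hypothetical endpoint counts for illustration: a jurisdiction counted
# at 100,000 in 2000 and 110,000 in 2010, interpolated to 2003.
est_2003 = interpolate_population(2000, 100_000, 2010, 110_000, 2003)
```

A linear version would simply divide the total change evenly across the intervening years; for the small growth rates involved here the two approaches give very similar intermediate estimates.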
For additional context, we reviewed the narrative of the reports submitted by affected jurisdictions and interviewed compact migrants and officials in affected jurisdictions. To identify the eligibility of compact migrants for selected federal programs that may help address the compact impact on affected jurisdictions, we reviewed existing legislation and discussed our findings with officials from affected jurisdictions and subject matter experts. To evaluate the affected jurisdictions’ estimates of compact impact costs, we compared the costs that the affected jurisdictions had reported to Interior since 2004 with cost estimation criteria that we developed based on OMB guidelines as well as our own guidance on cost-benefit analyses, a previous report on costs associated with illegal alien schoolchildren, and requirements in the amended compacts’ enabling legislation. We identified the methodologies used by local government agencies in the affected jurisdictions to develop their compact impact costs and determined their limitations by reviewing the compact impact reports; interviewing officials from many of the reporting agencies in affected jurisdictions; and collecting information from Guam’s and the CNMI’s single audit reports. Using our cost criteria, we developed questions and circulated them to affected jurisdictions’ reporting agencies, providing them an opportunity to further explain how they derived their estimates. Not all agencies responded to these questions; therefore, additional examples beyond the ones we have identified may exist. Table 8 includes Hawaii’s most recent reported compact impact costs, which were submitted to Interior in August 2011. However, our analysis of compact impact reporting does not include this information.
Officials from Hawaii’s Department of Human Services, Department of Health, and Department of Education said that their reporting methodologies had generally not changed since their last report, which was submitted in 2008 and which we included in our analysis. However, the Department of Education said that it excluded federal funds from its 2008 through 2011 compact impact costs and corrected its reporting error regarding the number of compact students for 2006 through 2008. In addition, the Department of Health said that the Tuberculosis Branch changed its methodology and the Family Health Services Division changed its presentation of the data to show that federal funds were excluded. To identify federal funding received by Guam and the CNMI for programs serving compact migrants, we analyzed single audit reports from 2005 through 2009 in the CNMI and from 2004 through 2008 in Guam. To assess Interior’s guidelines on compact impact reporting, we reviewed the requirements contained in the amended compacts’ enabling legislation, and we identified and reviewed Interior’s 1994 compact impact reporting guidelines and the Interior Office of Inspector General’s 1993 report that prompted the creation of these guidelines. We also interviewed officials from the affected jurisdictions and Interior regarding their use of these guidelines to develop cost estimates. To assess Interior’s compliance with the congressional reporting requirements of the amended compacts’ enabling legislation, we reviewed the legislation, met with Interior officials, and assessed Interior’s 2010 report to Congress against the specific elements required in the legislation. To describe compact migrants’ participation in local economies, we generally used data from the Micronesian surveys in 1997 and 2003 as reported in the 2008 report of the 2003 survey, supplementing these data where possible with additional information from local and national agencies and other literature.
To determine whether additional data on the compact migrants’ role in the economy exist, we contacted agencies from affected jurisdictions that address labor and taxation and reviewed reports and data sets. These sources of additional information and data include the following:

• To describe compact migrant health status and the health and education systems in the FAS, we reviewed and summarized published literature.

• To describe compact migrants’ contributions to the labor market in the CNMI, we analyzed data from the CNMI Department of Finance for 2001 through 2009, comparing the size of the compact migrant labor force to the size of the overall CNMI labor force and the income of the compact migrants to that of the general population.

• To compare the amount of taxes paid by compact migrants with the amount paid by the general population in Guam, we used data from the 2008 and 2009 Guam Annual Census of Establishments, the 2008 and 2009 Guam Current Employment Reports, the 2008 Guam Statistical Yearbook, and the 2008 migrant survey. These data allowed us to estimate under certain assumptions the number of compact migrants and others working in the private and public sectors, their average wages, and taxes paid. The method we used to prepare this estimate drew on a method first outlined by an official in the Guam Bureau of Statistics and Plans.

• To estimate the amount of remittances that migrants sent and received while in the U.S. areas, we analyzed data from the Inter-American Dialogue, the fiscal year 2008 Economic Reviews of the FSM and the Marshall Islands, and data reported in the 2008 report of the 2003 survey. Because of the limitations and significant variation in the estimates provided by these three sources, we determined that these data were not sufficiently reliable for our purposes.

• To determine the number of FAS-born persons serving in the U.S.
armed forces, we requested a special tabulation from the Department of Defense’s Active Duty Personnel Master and Reserve Components Common Personnel Data System. To review Interior’s compact impact grants, we reviewed our previous report on Interior grant management, the requirements of the amended compacts’ enabling legislation, and Interior’s 2010 Financial Assistance Manual. We assessed the management of the grants against the legislation and manual by reviewing Interior’s compact impact grant files for Guam, Hawaii, and the CNMI for fiscal years 2004 through 2011. In each grant file, we reviewed grant narratives and correspondence and collected the grant name, number, amount, description, status, remaining balance, and purpose, as well as any funding redirections or deobligations. To determine the extent to which compact sector grants may address compact impact, we interviewed compact migrants and Interior and affected jurisdiction officials and collected grant allocation data from Interior for compact sector grants. We then discussed the nature of sector grants and compact impacts in the affected jurisdictions with Interior officials to identify the amount and purpose of compact sector grants that could be linked to addressing compact migrant impact. To provide information on the migrant population and impact in Arkansas, we met with state and Springdale, Arkansas officials and with employers and migrants. We reviewed existing Census population reports and the Census tabulation of ACS data as well as existing Arkansas government reporting and published literature on Arkansas’s compact migrant impact. Although the amended compacts’ enabling legislation does not define Arkansas as an affected jurisdiction and the state government therefore does not submit reports to Interior, we compiled data available for 2004 through 2010 from the Arkansas Department of Health, Arkansas Department of Correction, and the Springdale School District. 
We then assessed the limitations of these data in the same manner as we assessed the data for affected jurisdictions. We conducted this performance audit from September 2010 through October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In response to congressional interest, this case study reviews existing enumerations of the compact migrant population of Arkansas and their impact. Arkansas’s compact migrant population is almost exclusively from the Marshall Islands and is concentrated in the rapidly growing northwest Arkansas counties of Benton and Washington (see fig. 7). The amended compacts’ enabling legislation does not define Arkansas as an affected jurisdiction; therefore, the state is not eligible to receive compact impact grants, and data on Arkansas’s migrant population and impact are limited in comparison to data for the affected jurisdictions. Tabulations of data from the 2005-2009 ACS show an estimated 1,150 (with a 90 percent confidence interval of 933 to 1,367) Marshallese compact migrants living in Arkansas. The 2010 decennial census reports that 4,324 persons in Arkansas responding to the race question on the 2010 form identified themselves as Marshallese. Additionally, in 2009 and 2010, the Springdale School District reported that 1,323 and 1,579 students, respectively, identified as Pacific Islanders when enrolling in Springdale schools. 
In comments on a draft of this report, the government of Arkansas stated that it had serious doubts about the count of Arkansas migrants using ACS data, and Census commented that the estimates described here are not based on the same criteria and should not be compared. See appendix IV of this report for a further discussion of the differences between ACS and 2010 decennial census data. Concerns in Arkansas regarding compact migrants are similar to those expressed by officials in affected jurisdictions. Arkansas government officials and service providers cited the following concerns: migrant students lagging academically behind their peers; low levels of family involvement in education; the prevalence of communicable and noncommunicable diseases; reluctance of migrants to use preventive health care; language barriers; cultural barriers; and crowded living conditions. To address some of these concerns, Northwest Arkansas local government and social service agencies have begun to offer services to the Marshallese community in recent years. For example:

• The Springdale School District has provided supplemental tutoring and employs two Marshallese translators.

• The Jones Center for Families, a nonprofit community service organization, employs a Marshallese Community Outreach Coordinator and has helped facilitate the activities of the Gaps in Services to Marshallese Task Force, a network of interested individuals headed by a retired Jones Center employee.

• The task force has used grants from the Centers for Disease Control and Prevention and the Arkansas Minority Health Commission to survey the health concerns of Marshallese and prepare an outreach booklet and DVD to aid Marshallese migrants in adapting to life in the state.

• The Washington County Department of Health dedicates four staff to its Marshallese Outreach Team and will add two more staff when it opens the Marshallese outreach clinic in 2011.
• Some local agencies noted that they work in cooperation with the Marshall Islands consulate in Springdale. Opened in 2008, this consulate is the only FAS consulate in the continental United States.

Data provided by Arkansas state officials for 2004 through 2010 identified approximately $51 million in costs for education, health, and public safety services to compact migrants (see table 4). Available data from Arkansas are not comparable with data from affected jurisdictions. Arkansas does not collect data on a number of costs reported by Guam, Hawaii, and the CNMI, particularly costs for social services. In addition, not all Arkansas state agencies compiled their cost data on compact impact annually, whereas the affected jurisdictions have generally compiled their data annually.

Education. The estimated education service costs are for the Springdale School District, where most Marshallese schoolchildren live. The estimate for Springdale is based on average per-pupil expenditures, similar to some of the affected jurisdictions’ cost estimates. However, these expenditures may overstate actual costs to the extent there is excess capacity in the schools to absorb a marginal increase in population. The estimates may also understate actual costs by not including higher-than-average costs for additional services to Marshallese, such as language education. Finally, student population data prior to 2009 are incomplete.

Health. Arkansas estimated costs for health services to compact migrants by compiling costs for the population identified as of Pacific Islander ethnicity; as a result, Arkansas’s estimates may overstate compact migrant health costs by including services to Pacific Islanders who are not compact migrants. However, the estimates do not include costs for the Arkansas Women, Infants and Children Program and were not complete for all years for tuberculosis and sexually transmitted disease treatment, potentially leading to an underestimate of total costs.
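The per-pupil approach described above for the Springdale education estimate reduces to simple arithmetic. The sketch below uses a hypothetical per-pupil expenditure figure (the report does not publish the district’s rate) and restates the over- and understatement caveats as comments:

```python
def per_pupil_cost_estimate(avg_per_pupil_expenditure, migrant_students):
    """Education impact estimated as average expenditure times enrollment.
    This can overstate costs when schools have excess capacity (marginal
    cost below average cost) and understate them when migrant students
    need above-average services such as language instruction."""
    return avg_per_pupil_expenditure * migrant_students

# The $9,000 per-pupil rate is hypothetical; the 1,579 students are the
# district's reported 2010 Pacific Islander enrollment.
estimate = per_pupil_cost_estimate(9_000, 1_579)
```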
Social services. Arkansas does not track the use of some state-funded services by ethnicity and therefore could not estimate the costs of providing these services to compact migrants. However, officials stated that compact migrants are eligible for programs such as the state’s Division of Developmental Disabilities Services; Division of Youth Services; and ARKids First, which provides health insurance to low-income U.S.-citizen children of compact migrants. Unlike affected jurisdictions, Arkansas does not provide Medicaid-equivalent services to noncitizen compact migrants. As in affected jurisdictions, Arkansas compact migrants contribute to the local economy through payment of taxes and participation in the labor market.

Taxes. Marshallese are subject to federal, state, and local taxes; however, Arkansas does not disaggregate tax revenue by ethnic category or citizenship, and there are no data on consumption and remittances.

Labor market. Marshallese fill a significant niche in the local poultry industry. According to employers in northwest Arkansas, Marshallese represent between 14 and 37.9 percent of the total workforce at some plants of major poultry producers, such as Tyson, Cargill, and George’s. Tyson officials in Springdale stated that they have begun referring some Marshallese job applicants to plants elsewhere in Arkansas and in Oklahoma. In addition, Tyson may begin recruiting workers in the Marshall Islands. According to Marshall Islands officials, Tyson representatives have visited the Marshall Islands.

Table 5 shows the estimated population of the compact migrants from each FAS in the U.S. states based on tabulations of Census’s 2005-2009 ACS. Taking into account sampling uncertainty, the table shows the lower-bound and upper-bound population interval that corresponds to a 90 percent confidence interval.
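The interval bounds just described follow directly from a point estimate and its margin of error. A minimal sketch using the Arkansas figures reported earlier in this appendix (an estimate of 1,150 Marshallese compact migrants, with 90 percent bounds of 933 and 1,367, implying a margin of error of 217):

```python
Z_90 = 1.645  # normal critical value for a 90 percent confidence interval

def confidence_bounds(estimate, margin_of_error):
    """Lower and upper bounds of a symmetric confidence interval."""
    return estimate - margin_of_error, estimate + margin_of_error

# Arkansas ACS figures from this report.
lower, upper = confidence_bounds(1_150, 217)
implied_standard_error = 217 / Z_90  # roughly 132
```

The same arithmetic applies to each state estimate in table 5; only the point estimate and margin of error change.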
Estimates of compact migrants from each FAS in Guam and the CNMI, using data from Census’s 2008 survey of compact migrants, are shown for comparison. The 2010 decennial census identified 22,434 Native Hawaiian or Pacific Islanders—alone or in combination with another race—who identified themselves specifically as Marshallese residing in the 50 U.S. states in 2010. The largest Marshallese populations were in Hawaii, Arkansas, and Washington, which together accounted for 62 percent of the total reported Marshallese residing in the 50 states. See table 6. As of September 2011, Census has not released a separate count of U.S. residents who identified themselves as one of the FSM ethnicities or Palauan, but plans to do so between December 2011 and April 2012. As of November 2011, Census race data for Guam and the CNMI had yet to be released. The number of Marshallese reported by the 2010 decennial census differs from estimates of Marshallese compact migrants derived from the Census 2005-2009 ACS. For example, the 2010 census reported 7,412 Marshallese in Hawaii and 4,324 in Arkansas, while the ACS tabulation estimates 3,535 and 1,150 Marshallese compact migrants in Hawaii and Arkansas, respectively. The census counts are not meant to be compared with ACS 5-year estimates of compact migrants, and several factors may explain the differences:

• The ACS and 2010 census figures use different definitions. The ACS compact migrant estimates include only those born in the Marshall Islands who arrived in the United States under the terms of the compact after 1986 and their children. The census counts are defined by the respondents’ reported race and not limited by the post-1986 time frame of the compact.

• The ACS and 2010 census figures have different time frames. The ACS estimates are based on data collected from 2005-2009 and do not include compact migrants to U.S. areas in 2010. In addition, the ACS estimate does not include births that are included in the 2010 counts.
• The ACS and 2010 census use different approaches. The 2010 census attempts to reach all persons in the United States, while the ACS is a sample of the population. The sampling method used by the ACS was not specifically designed to make estimates of a population as small as the compact migrants.

• The ACS and 2010 census have different levels of outreach. Census ran extensive public service announcements of the 2010 survey, and local governments and community groups encouraged participation. However, Census does not conduct a similar public campaign to encourage participation in the ACS.

Table 7 shows key attributes, related to survey design, coverage, nonresponse, measurement, and sampling error, for the 2003 and 2008 Census approaches to enumerating compact migrants.

Since 1986, affected jurisdictions have submitted to Interior compact impact reports that include descriptions of, and estimated costs for, education, health, public safety, and social services that local government agencies provided to compact migrants (see table 8 for costs reported for 1986 through 2010). However, assessed against best practices for cost estimation, the 2004-2010 estimates contain a number of limitations with regard to accuracy, adequate documentation, and comprehensiveness, affecting the reported costs’ credibility and preventing a precise calculation of total compact impact on the affected jurisdictions. Best practices and guidance state, among other things, that to be credible, cost estimations should be characterized by accuracy, adequate documentation, and comprehensiveness.

• Accuracy. Estimates should contain few errors and reflect actual costs.

• Adequate documentation. Cost estimates should include a detailed description of the derivation of the reported costs, such as the source data used, the calculations performed and their results, and the methodology used.
Cost estimates should be captured in such a way that the data can be traced back to, and verified against, their sources, so that estimates can be replicated and updated.

• Comprehensiveness. Estimates should be structured in sufficient detail to ensure that cost elements are neither omitted nor double counted and should include documentation of all assumptions.

We found a number of limitations affecting the credibility of cost estimates in the compact impact reports (2004-2010) that we reviewed. (See appendix I for a description of our methodology in evaluating the cost estimates.)

Definition of compact migrants. Several local government reporting agencies that responded to our inquiries did not define compact migrants according to the criteria in the amended compacts’ enabling legislation and, as a result, may have either overcounted or undercounted costs. The legislation defines the population to be enumerated as persons, or those persons’ children under the age of 18, who pursuant to the compacts are admitted to, or resident in, an affected jurisdiction as of the date of the most recently published enumeration. By counting compact migrants based on their ethnicity or language, agencies may have overcounted by including those present prior to the compacts; by identifying compact migrants by their citizenship, agencies may have undercounted, because they would have excluded compact migrants’ U.S.-born children under the age of 18. For example, school administrative data from each of the affected jurisdictions show a potential for overcounting by identifying compact migrant children by means of ethnicity (as in Guam and the CNMI) or the language spoken at home (as in Hawaii). According to the 2003 Census survey data, approximately 32 percent of FAS citizens identified in the CNMI, 10 percent of those identified in Guam, and 13 percent of those identified in Hawaii were not part of the defined impact population.
Therefore, the number of children used to estimate impacts may also be overstated.

Federal funding. Guam, Hawaii, and the CNMI, among other U.S. states and territories, receive federal funding for programs that compact migrants use; however, not all compact impact reports accounted for this stream of funding. For example, the Hawaii Department of Education reported as compact impact the cost of programs that federal funding had partially addressed. In addition, Guam’s and the CNMI’s single audit reports show that these jurisdictions have received federal revenue from various agencies such as the U.S. departments of Health and Human Services (HHS), Agriculture, and Homeland Security. To the extent that revenue for these programs is based on population counts or data on usage, the presence of, and use of services by, compact migrants lead to federal offsets. For example, in fiscal years 2004 through 2008, Guam received an annual average of $1,027,825 from HHS for the Consolidated Health Centers program, an amount based partly on the number of beneficiaries in Guam. Based on Guam’s resident and compact migrant populations in 2008, services to compact migrants accounted for a $112,942 share of that amount—equal to 16 percent of compact migrant impact costs reported by the Guam Bureau of Primary Care Services. However, in reporting impact costs, the Guam Bureau of Primary Care Services did not deduct the HHS funding that was used for compact migrants.

Revenue. Multiple local government agencies that receive fees as a result of providing services to compact migrants did not consider them in their compact impact reports. For example, the CNMI Department of Public Health and the Guam Department of Mental Health and Substance Abuse did not deduct payments received from compact migrants from their reported costs. This exclusion of revenue may cause an overstatement of the total impact reported.

Capital costs.
Many local government agencies, such as the CNMI Public School System, did not include capital costs in their annual compact impact reporting. This exclusion can cause an understatement of the total costs of providing services to compact migrants.

Per person costs. Many local government agencies estimated impact costs based on average, rather than specific, costs of providing services to compact migrants, possibly leading to under- or overestimations. For example, the CNMI Department of Public Health based the cost of providing health care services to compact migrants on the number served out of the total patient load instead of totaling each patient’s specific costs. However, other agencies more comprehensively accounted for costs by including additional compact impact expenses beyond the average costs. For example, the Hawaii Department of Education included language training costs in its reported per-pupil expenditures. Alternatively, the CNMI Public School System did not include special services such as language training or translation, which suggests an underestimation.

Discretionary costs. Some compact impact costs reported by local government agencies were for benefits or services provided at the discretion of the affected jurisdiction.

Data reliability. In one case, we found a discrepancy between the data reported and the data provided during this review. According to the Hawaii Department of Education, this discrepancy resulted from a system error that caused a double counting of Marshallese students over a 5-year period, which resulted in an overestimation of impact costs. For its 2011 impact report, the Department of Education said it excluded federal funds from its 2008 through 2011 compact impact costs and corrected the reporting error made regarding the number of compact students for 2006 through 2008.

Many local government agencies did not report their methodologies for estimating costs of providing services to compact migrants under the compacts.
For example, the Hawaii Department of Human Services did not provide a detailed description of how it derived its estimates. As a result, it is difficult to determine whether the reported figures are accurate. Further, some agency methodologies vary between affected jurisdictions. For example, Guam prorates police costs based on the percentage of compact migrants in the total population, and the CNMI prorates its police costs based on the percentage of total arrests that are FAS citizens. The Guam Bureau of Statistics and Plans said it documented its methodologies in its 1995 compact impact report and applied these approaches in calculating its compact impact costs. However, these methodologies were not discussed in Guam’s annual reports from 2004 to 2010 and were generally not used by the reporting agencies. Hawaii has not submitted annual compact impact reporting each year and is not required to do so, but for those years when affected jurisdictions submitted impact reports to Interior, not all local government agencies included all compact impact costs. However, some agency costs were reported in subsequent fiscal year reports. For example, Hawaii did not provide estimated costs to Interior in 2005 and 2006, although it included partial costs incurred in those years in its 2007 and 2008 reports. Without comprehensive data in the year they are submitted, the compact impact reports could understate Hawaii’s total costs. In addition, compact impact reporting has not been consistent across affected jurisdictions. For example, Guam and the CNMI included the cost of providing police services, while Hawaii did not. Interior distributed the compact impact grants to the affected jurisdictions in 2004 through 2010 as follows:  From fiscal year 2004 through 2009, based on the results of 2003 enumeration, Interior annually awarded approximately $14 million to Guam, $10.6 million to Hawaii, and $5.2 million to the CNMI. 
In fiscal year 2010, having recalculated the division of funds based on the results of the 2008 enumeration, Interior awarded approximately $16.8 million to Guam, $11.2 million to Hawaii, and $1.9 million to the CNMI. The amended compacts’ enabling legislation and Interior’s Office of Insular Affairs’ Financial Assistance Manual guide the administration and management of compact impact grants. An official at Interior said the agency uses the same grant management process for compact impact funds as it does its other grants. To implement these requirements, Interior has reviewed and at times questioned whether the proposed uses of compact impact grant funds were in keeping with the amended compacts’ enabling legislation. While the vast majority of Interior reviews resulted in approvals, Interior questioned some uses in Guam and the CNMI. Specifically: Interior initially viewed Guam’s fiscal 2010 request to fund a Community Pool Complex and Fitness Trail as only distantly connected to compact migrants. Ultimately, Interior accepted Guam’s justification that it could improve the health of migrants by reducing obesity and provide a healthy outlet for youth, thereby reducing public safety concerns. In fiscal year 2011, Interior approved the use of compact impact grant funds for Guam Memorial Hospital Authority (GMHA), but restricted their use to future purchases rather than paying past bills, contrary to previous Interior practice. According to an Interior official, this change was made in order to strengthen the link between migrant impacts and compact funds and to stop “bandaging” the chronic financial issues of GMHA. In fiscal year 2006, Interior denied a CNMI grant proposal to use $400,000 for the Marianas Visitors Authority and $500,000 for Financial Control/Economic Recovery Initiatives. An Interior official said the agency did not retain documentation of the specific nature of these grant requests or why it denied them. 
As of August 2011, Guam had approximately $14.2 million in compact impact grant funds available from fiscal years 2004 to 2011 that it had yet to draw down from Interior. Both Hawaii and the CNMI have fully drawn down prior fiscal year funds. See table 9 for a complete list of Interior compact impact grant awards by jurisdiction and fiscal year. 1. The Department of the Interior suggested that our report’s use of the terms “compensation” and “reimbursement” to describe compact impact funds could give the impression that these funds were intended to fully reimburse the affected jurisdictions for their added expenses when the amended compacts' enabling legislation states that compact impact grants are “to aid in defraying costs incurred by affected jurisdictions as a result of increased demands placed on health, educational, social, or public safety services or infrastructure related to such services due to the residence in affected jurisdictions” of compact migrants. We have modified the text to make our characterization of the act’s intent clearer. Also in response to this comment, as well as comments received from Guam, we have cited additional provisions of the amended compacts’ enabling legislation that authorize funds to address compact impact. As the affected jurisdictions may view the law as implying a reimbursement, we have kept such a characterization when it reflects the viewpoint of the affected jurisdiction. 2. The Department of the Interior stated that the record of the compact negotiations, the compact agreements, and the amended compacts’ implementing legislation do not support the use of sector grants to provide assistance to FAS expatriates living in affected jurisdictions. Interior stated that the compact provides sector grants to support FAS government activities in-country and that the few training programs taking place outside of the FAS are the result of the FAS governments’ choices in the use of sector grants. 
We agree with Interior that the sector grants listed in the compacts are to support the governments of the FSM and the Marshall Islands by providing grants in certain sectors such as health care and education; however, we note that the compacts allow grants to fund other sectors as mutually agreed and we have added this text to our description of compact economic assistance. Currently, limited FSM compact grant funds are being used to support worker training in Guam, addressing a concern of compact migrants in Guam as well as the Guam government. According to the Center for Micronesian Empowerment, approximately 45 percent of its trainees in Guam are compact migrants residing in Guam. 3. The Department of the Interior did not agree with our recommendation that it work with affected jurisdictions to evaluate the current use of compact impact grant funds and consider alternative uses. Interior observed that the amended compacts' enabling legislation authorizes broad uses and that it has chosen to respect the priorities of governors. Interior further stated that it does not believe there would be any practical gains from proposing alternative uses. We believe this position overlooks opportunities to respect the priorities of the governors while at the same time working with the governors to review their current use of funds and consider alternatives uses. In the course of our review we identified alternative grant uses for consideration. We found that government officials, service providers, and compact migrants in the affected jurisdictions identified a significant need for language and cultural assistance, job training, and improved access to basic services for compact migrants. The sources suggested that migrant needs could be addressed by, for example, establishing centers that offer such services. This may also help reduce some of the negative impact from compact migration. For example, more translators could result in more effective health treatment. 
Other alternative uses may also offer practical gains. For example, health experts have advocated for the adoption of primary health care as a more cost-effective strategy for providing health care in the Pacific Islands. This adoption would address the need for preventive care among Micronesians and Marshallese in Hawaii—studies have shown that the Marshallese in Hawaii do not generally seek preventive care and only seek professional health care when they experience a certain level of pain. 4. The Department of the Interior stated that it believes compact sector grants are limited to use within the FAS; however, Interior agreed that some activities may be related both to sector grant priorities and to programs that would better prepare migrants to live and work in the United States. Interior stated that our report implies that compact sector grant funds should be shifted from providing assistance to the FAS governments to providing assistance to FAS citizens living in affected jurisdictions, an action that Interior sees as inconsistent with the compacts and their enabling legislation. We expect that compact sector grant awards will be provided consistent with the terms of the compacts and the amended compacts’ enabling legislation; we do not intend to imply that funds should be shifted from the FAS governments to FAS migrants. In response to Interior’s concern, we clarified that our findings and recommendation highlight the opportunity for the joint management committees to consider the use of sector grants to the FSM and Marshall Islands in ways that address the concerns of FAS citizens—whether they are in the FAS or in U.S. areas—and the concerns of the affected jurisdictions. The recommendation supports consideration of the use of sector grants in ways that respond to the concerns of FSM and Marshall Islands migrants and the affected jurisdictions. 1. 
Census emphasized that the migrant survey content was purposely chosen to enumerate compact migrants while minimizing costs and maximizing respondent participation. As our report notes, however, collecting only those data needed to enumerate migrants limited the collection of data that stakeholders such as affected jurisdictions would have found useful. 2. Census stated that, in contrast to our report’s findings, the 2008 Guam and CNMI surveys were designed to produce estimates with a similar coefficient of variation to the ACS estimates for Hawaii. However, the 2008 Guam and CNMI estimates are point-in-time estimates while the ACS is a multiyear estimate. As Census guidance on interpreting the ACS multiyear estimates states, “The ACS estimates the average of a characteristic over the year or period years, as opposed to the characteristic at a point in time” and “When comparing estimates across geographies or subpopulations, users should compare the same period length for each estimate.” Our findings do not suggest that the 2008 Guam and CNMI surveys are not comparable with the ACS estimates for Hawaii in terms of reliability, but rather are not comparable in terms of their sampling methods and reporting period. We show that the estimates have similar relative errors, as presented in appendix V. If the reported precision for the Guam and CNMI surveys is accurately estimated, and includes proxy respondents, the estimates have similar levels of precision. 3. Census recommended that the Department of the Interior adopt the two-pronged approach described in our report for the 2013 enumeration of compact migrants and stated that the approach would provide cost-effective required estimates. We agree that the low cost is a strength of this approach. As our report notes, however, the 2013 two-pronged approach will have limitations such as using data from different time periods, limited comparability with prior data, and limited collection of demographic data. 4. 
Census referred to a footnote in our draft report that indicated that the homeless population was not represented in the 2008 surveys. We have moved this discussion to appendix V and included an assessment of the varying coverage of the homeless population of the 2003 snowball, ACS, and 2008 migrant survey in Guam and the CNMI. 5. Census disagreed with our statement that the effect of using an earlier time frame of data in Hawaii relative to Guam and the CNMI results in an undercount of compact migrants in Hawaii relative to Guam and the CNMI. However, as our report notes, migration is ongoing, with approximately 7,000 persons estimated to have left the FSM and the Marshall Islands in 2007 and 2008. Other available data also indicate that the migrant population is growing. Because the Hawaii data do not include 2008 and the Guam and CNMI data are from the closing months of 2008, nearly a year of additional migration is captured in the Guam and CNMI totals that is not included in the Hawaii ACS data. Alexander and Navarro (2003) show that even the upper bound of the ACS multiyear confidence interval, an amount that is greater than the estimate, can lag behind the actual value for a growing small population at a given point in time. 6. Census stated that the frequent changing of address by the compact migrant population, cited by Guam and CNMI officials as potentially leading to a miscount, would not produce bias in the surveys as designed. We agree with Census’ comment that inaccurate migrant counts in the sample design would lead to lack of efficiency and not bias. However, we note that researchers inside and outside the Census Bureau studying the foreign-born population agreed that an assumption of complete coverage of legal immigrants and temporary migrants in the 2000 Census was unreasonable indicating the potential for coverage error and bias of the estimates based on the 2000 Census sampling frame. 
Further, migrants, especially those who frequently change address, are not only hard to count in the census, but they also are less likely to participate in other surveys, indicating the potential for nonresponse bias. Census further stated that it gave both Guam and CNMI an opportunity to provide local information that might improve the accuracy but that neither produced this information before the design had to be finalized. However, we note that both Guam and CNMI officials stated that, from their perspective, implementation of the survey was rushed and they had only limited opportunity to provide such input. 7. Regarding our statement about the change in enumeration method limiting the comparability of the 2008 and 2013 enumerations, Census stated that both methods would provide a comparable count, which is what is required for the purpose of determining funding. We note that the estimates across jurisdictions, within an enumeration, are not comparable. We note that estimates across enumerations are not comparable due to changing methodology and the lack of use of a method that is statistically designed to measure change over time. 8. Census asserted that its varying estimates of compact migrants in Arkansas are not based on the same criteria and therefore should not be compared. We agree that the surveys have different bases for identification and have identified several reasons for these differences in appendix IV. However, we have also noted Arkansas’ concerns about the accuracy of the ACS data. 9. Census disagreed with our observation that the ACS was not designed to make estimates of a population as small as the compact migrants and notes that concluding that the ACS cannot provide reliable estimates depends on which definition of “reliable” is used. We note that the Census Bureau did not determine the level of precision, or reliability, necessary for these estimates to be used for funding. 
As we show in appendix III, while the ACS can be used to detect the presence of compact migrants, the smaller the population, the less reliable the estimates will be, as indicated by the wide confidence intervals. In some cases, the estimates are so unreliable that we suppressed them. 10. In response to our observation that Census did not provide an unweighted response rate, Census stated that it is not usual for the agency to publish unweighted response rates. Our review of OMB Standards and Guidelines for Statistical Surveys indicates that both unweighted and weighted response rates should be calculated and reported. However, an unweighted response rate was not provided in Census Survey Documentation that we received for review. 11. Census disagreed with our statement that the use of proxy respondents indicates an overstatement of the response rate and asserts that proxy responses are generally accepted in household surveys and included in the response rate. Proxy respondents are substitutions for the intended sample member, and Census acknowledged that proxy responses are a potential source of nonsampling error due to the proxy respondent’s potential lack of knowledge of the sample respondent’s information. OMB guidelines and standards call for the calculation of response rates without substitutions, as well as overall response rates that include substitutions. 12. Census stated that the issues of nonresponse bias, as well as collection strategy and content, that we highlight regarding the 2008 survey effort were also present in the 2003 survey but are not listed for the 2003 survey. For the 2003 survey, we had no documentation of the nonresponse bias or the collection strategy and content information related to personal interviewers leaving contact information. We have noted Census’s comments in the report. 13. Census stated that final weights were used when calculating the variance estimates. 
While we acknowledge that final weights, including a nonresponse adjustment factor, were used when calculating variances, Census documentation does not indicate that variance estimates properly accounted for the variability due to the nonresponse adjustment factor, such as through the use of replicate weight methodologies, and thus will likely result in an underestimate of the variance. 1. The government of Hawaii stated that while the report sets out relative strengths and weaknesses of the different enumeration methodologies, it does not highlight the inherent inadequacies of the American Community Survey (ACS) for enumeration of small discrete groups such as compact migrants. Throughout the report we note the limitations of the ACS. As our report notes, ACS data have limited statistical reliability for populations as small as compact migrants in Hawaii. We further note in appendix V that ACS estimates are not equivalent to point-in-time estimates and may be biased due to nonresponse and coverage error. 2. The government of Hawaii stated that the report does not point out the discrepancies between the Census estimates in each affected jurisdiction and the utilization data for services provided by agencies in each jurisdiction, such as the number of compact migrant students enrolled in school. We did explore the use of school data in particular as a basis for evaluating the enumeration findings; however, we found that the schools identified compact migrant students by language (as in Hawaii) or ethnicity (as in Guam and the CNMI). These definitions do not match that contained in the amended compacts' enabling legislation and could include the children of persons with FAS ethnicity who were present in Hawaii prior to the compacts or the children of persons who were born in U.S. areas. 
For this reason, although they are informative in a general way, the difference between the school data and the enumeration data could result from methodological differences as well as from any potential miscount in the enumeration. 3. The government of Hawaii stated that it agrees that a portion of the sector grants given to the FAS might more effectively be used to provide services to compact citizens living in the affected jurisdictions and that the affected jurisdictions should provide input into the uses of the grants. In response to other comments, we clarified our recommendation regarding compact sector grants to not imply that the use of sector grants to address migration concerns should be in the affected jurisdictions. Our recommendation is to highlight the nexus between sector grants and the issues that concern the FAS, compact migrants and affected jurisdictions. We also report that the Federated States of Micronesia has used sector grant funds for activities in the affected jurisdiction of Guam. compacts, it seems compact migrants are treated as if they are citizens in that they have access to all federal and local services unless specifically barred; therefore, reimbursement is justified for all services rendered because they are provided on a nondiscriminatory basis. We believe that our recommendation that Interior prepare adequate cost guidance will help in determining compact impact costs. 1. The Arkansas Department of Health expressed serious doubts about the count of Arkansas migrants based on American Community Survey data and asked that 2010 census data be used to create figure 2 rather than the ACS data. We agree that there are differences between the count based on the ACS data and that based on 2010 decennial census data and have listed some of the reasons for the differences in appendix IV. We have added text to the report to more explicitly describe this appendix, the differences between the data sets, and Arkansas’s concerns. 1. 
The government of the FSM stated that our findings on compact migrant participation in local economies are not conclusive as to the net impact of migrants. The amended compacts' enabling legislation does not require the inclusion of such data in affected jurisdictions’ impact reports and complete data on compact migrants’ contribution to local economies does not exist. We provided available information on labor market participation, taxes, consumption, and remittances. 2. The government of the FSM stated the importance of clearly defining a compact migrant for the purposes of enumeration and reporting compact impact. As our report notes, Interior interprets the legislation's definition of a qualified nonimmigrant—which generally refers to a compact migrant living in an affected jurisdiction—as including those migrants' children under the age of 18 who are born in the United States; therefore, some U.S. citizens are included in the count of migrants. We have used Interior’s definition for our estimates of compact migrants and costs. As our report also notes, a number of reporting local government agencies in affected jurisdictions do not use the definition of compact migrants in the amended compacts' enabling legislation, which affects the reliability of their reported compact impact costs. 3. The government of the FSM cited the fact that its citizens are eligible to serve, and have served, in the armed forces and asked that we include them in the count of migrants. We have obtained data from the Department of Defense on persons born in the FAS who are on active duty in the U.S. armed forces and have added this to the report. 4. The government of the FSM referred to a statement in the draft report that migrants pay into the Medicaid program but do not receive benefits. This statement was in error and has been deleted from the report. 1. The government of the Marshall Islands stated that not including Marshallese citizen contributions to the economies of U.S. 
areas is a serious flaw that undermines the credibility of claimed compact impact costs. The Marshall Islands recommended that a methodology be developed that determines “net” compact costs. The amended compacts' enabling legislation does not require the inclusion of such data in affected jurisdictions’ impact reports and complete data on compact migrants’ contribution to local economies does not exist. We provided available information on labor market participation, taxes, consumption, and remittances. 2. The government of the Marshall Islands asked that the participation of Marshall Islands citizens in the armed forces of the United States also be taken into account. We have obtained data from the Department of Defense on persons born in the FAS who are on active duty in the U.S. armed forces and have added this to the report. 3. The government of the Marshall Islands noted that a number of Marshallese migrants attending public schools in the United States are U.S. citizens and that, while they may legally be included for purposes of compact impact costs, it questioned the inclusion of U.S. citizens in determining impact costs. As our report notes, Interior interprets the legislation’s definition of qualified nonimmigrant—which generally refers to a compact migrant living in an affected jurisdiction—as including those migrants’ children under the age of 18 who are born in the United States; therefore, some U.S. citizens are included in the count of compact migrants. The Census ACS tabulation we obtained for our estimates used Interior’s definition of a compact migrant. The ACS interviews current residents— that is, those in the house on the day of the interview who have been staying there for more than 2 months, regardless of the individuals’ usual residence. 4. 
The government of the Marshall Islands stated that it would be helpful to provide government leaders and decision makers with comparative information on emigration, or migration rates from other Pacific Island nations. This analysis was not part of the scope of our audit. 5. The government of the Marshall Islands stated that some of the data for enumerations for Marshallese in the state of Hawaii may be erroneous and overstated since some Marshallese only transit through Hawaii for a short period before moving to the U.S. mainland to accept employment. We do not have data to verify this assertion. 6. The government of the Marshall Islands stated that although there is discussion in the report regarding U.S.-Marshall Islands joint management committee taking action to deal with compact impact costs in affected jurisdictions in the United States, sector grants were never designed or funded for that purpose. In response to the government of the Marshall Islands and other comments, we clarified our recommendation regarding compact sector grants to not imply that the use of sector grants to address migration concerns should be in the affected jurisdictions. We also note that, though we did not find instances of the Marshall Islands using grant funds for activities in affected jurisdictions, the FSM has done so in Guam. 1. The government of Palau noted that our draft report’s use of the term “Micronesian” to refer to citizens of the Federated States of Micronesia may be confusing, as Micronesian also has a larger meaning related to persons living on multiple Pacific islands. We have reviewed the report and now refer to the Federated States of Micronesia as the FSM. In keeping with a commonly used definition, we use the term “Micronesia” to refer to the three compact nations. 2. The government of Palau stated that our report does not adequately address the lack of information regarding "positive impact" from compact migration. 
The amended compacts' enabling legislation does not require the inclusion of such data in affected jurisdictions’ impact reports, and complete data on compact migrants’ contribution to local economies does not exist. We provided available information on labor market participation, taxes, consumption, and remittances. 3. The government of Palau noted that FAS citizens serve in the U.S. armed forces. We have obtained data from the Department of Defense on persons born in the FAS who are on active duty in the U.S. armed forces and have added this to the report. 4. The government of Palau stated that most agencies have included capital costs in their impact reporting, thus contributing to a gross overstatement of the costs associated with migrants. However, in our review, we did not find cases where agencies included such capital costs in their impact reporting. Such costs could be legitimate and addressed by future Interior guidelines. 5. The government of Palau stated that our report does not adequately explore the differences in the impact between the three FAS. Not all local government agencies reported compact impact costs by FAS country, and this assessment was not included in the scope of our review. Complete data on compact migrants’ contribution to local economies does not exist; however, we provided available information on labor market participation, taxes, consumption, and remittances. If Interior implements our recommendation to disseminate adequate guidance on compact impact reporting to affected jurisdictions, assessing impact by FAS country may be a topic for Interior to address. 6. The government of Palau stated that our estimate of Palauan migrants is an accurate estimate, but that many Palauans emigrated before the compacts came into effect and, while ethnically Palauan, should not be considered in calculating the impact of FAS emigration. 
However, the Census tabulation and survey we used in estimating the number of Palauan compact migrants only included those who arrived in U.S. areas after the date of the Palau compact. Those who arrived in the United States prior to that date, and their children, are not included in our estimate. 7. The government of Palau noted that ethnic Palauans in the United States may be U.S. citizens, permanent resident aliens (Green Card holders), or members of the U.S. Armed Forces and their dependents, as well as compact migrants. As our report notes, Interior interprets the legislation’s definition of qualified nonimmigrant—which generally refers to a compact migrant living in an affected jurisdiction—as including those migrants’ children under the age of 18 who are born in the United States; therefore, some U.S. citizens are included in the count of migrants. However, we agree with Palau’s comment that some persons who entered the United States after the date of the compacts may be lawfully present in U.S. areas under authorities other than those of Section 141 of the compacts and have noted this in the report. In addition to the contact named above, Emil Friberg, Jr., Assistant Director; Keesha Egebrecht; Fang He; Reid Lowe; Mary Moutsos; Michael Simon; Sonya Vartivarian; Adam Vogt; Greg Wilmoth; and Monique Williams made key contributions to this report. Michael Derr, Bob Lunsford, and Jena Sinkfield provided additional technical assistance.
U.S. compacts with the freely associated states (FAS)--the Federated States of Micronesia (FSM), the Marshall Islands, and Palau--permit FAS citizens to migrate to the United States and its territories (U.S. areas) without regard to visa and labor certification requirements. Thousands of FAS citizens have migrated to U.S. areas (compact migrants)--particularly to the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and Hawaii, which are defined as affected jurisdictions. In fiscal year 2004, Congress appropriated $30 million annually for 20 years to help defray affected jurisdictions' costs for migrant services (compact impact). Though not required, affected jurisdictions can report these costs to the Department of the Interior (Interior), which allocates the $30 million as impact grants in proportion to compact migrant enumerations required every 5 years. This report (1) describes compact migration, (2) reviews enumeration approaches, (3) evaluates impact reporting, and (4) reviews Interior grants related to compact impact. GAO reviewed U.S. agency data, recent enumerations, impact reports, and grants and it also interviewed officials, employers, and migrants in the affected jurisdictions. Combined data from the U.S. Census Bureau's (Census) 2005-2009 American Community Survey (ACS) and the required enumeration in 2008 estimate that a total of roughly 56,000 compact migrants from the FSM, the Marshall Islands, and Palau--nearly a quarter of all FAS citizens--were living in U.S. areas. Compact migrants resided throughout U.S. areas, with approximately 58 percent of all compact migrants living in the affected jurisdictions. According to the 2008 required enumeration, compact migrant populations continued to grow in Guam and Hawaii and were roughly 12 percent of the population of Guam and 1 percent of the population of Hawaii. 
Working under agreements with Interior, Census used a different approach for the most recent enumeration than for prior enumerations, employing two methods in 2008: (1) a one-time survey in Guam and the CNMI and (2) a tabulation of existing multiyear ACS data for Hawaii. The affected jurisdictions opposed the change in approach. The 2008 approach allowed for determining the precision of the estimates but did not yield comparable results across jurisdictions or detailed information on compact migrants. Interior and Census officials have a preliminary plan for the required 2013 enumeration, but Interior has not determined its cost or assessed its strengths and limitations. The methods used by affected jurisdictions to collect and report on compact impact have weaknesses that reduce their accuracy. For fiscal years 2004 through 2010, Hawaii, Guam, and the CNMI reported more than $1 billion in costs associated with providing education, health, and social services to compact migrants. However, some jurisdictions did not accurately define compact migrants, account for federal funding that supplemented local expenditures, or include revenue received from compact migrants. Although Interior is required to report to Congress any compact impacts that the affected jurisdictions report to Interior, it has not provided the affected jurisdictions with adequate guidance on estimating compact impact. Compact migrants participate in local economies through employment, taxation, and consumption, but data on these effects are limited. From fiscal years 2004 to 2010, Interior awarded approximately $210 million in compact impact grants to the affected jurisdictions, which used the funds primarily for budget support, projects, and purchases in the areas of education, health, and public safety. 
In Guam and Hawaii, government officials, service providers, and compact migrants discussed approaches to more directly address challenges related to migration by bridging language barriers, providing job training, and increasing access to services. The amended compacts also made available $808 million in sector grants for the FSM and the Marshall Islands from fiscal years 2004 to 2010. Sector grants are jointly allocated by the joint U.S.-FSM and U.S.-Marshall Islands management committees and have been used primarily in the FAS for health and education. Few sector grants directly address issues that concern compact migrants or the affected jurisdictions. The committees had not formally placed compact impact on their annual meeting agendas until 2011 and have not yet allocated any 2012 sector grant funds to directly address compact impact. GAO recommends that Interior assess the 2013 enumeration approach, disseminate adequate guidance on estimating compact impact, and encourage uses of grants that better address compact migrants' impact and needs. Interior generally agreed with the report but did not support the recommendation on grant uses.
The Biscuit Fire began in July 2002 as 5 separate fires in southwest Oregon in the Siskiyou National Forest, which was administratively joined with the Rogue River National Forest in 2004. The fire was one of 12 or 13 large fires that burned throughout the Pacific Northwest Region in 2002 due to severe drought conditions; in addition to the Biscuit Fire, fires burned in the Deschutes, Umpqua, Malheur, and other forests in the region. In Oregon, the Biscuit Fire burned mostly within the Siskiyou Forest, which encompasses more than 1 million acres of diverse, steep, and rugged landscape made up of the Klamath Mountains, the Coast Ranges, the 180,000-acre Kalmiopsis Wilderness, and many roadless areas. By September 2002, the fire was being controlled, and Forest Service staff were conducting Burned Area Emergency Response program projects to stabilize the most severely burned areas. By November 2002, the fire was declared controlled, and the Rogue River-Siskiyou National Forest staff were beginning their postfire recovery efforts. In evaluating conditions after the fire, the Rogue River-Siskiyou National Forest staff determined that some areas were not so severely burned as to warrant management action. However, in some instances, the forest staff identified areas that were severely burned and resources that would not recover as quickly as desired without forest intervention. The fire burned in a mosaic pattern, with about 30 percent of the area burned lightly, with little vegetation killed, and about 44 percent burned intensely, with more than 75 percent of vegetation killed; the remaining acreage burned with mixed intensity and mixed results (see fig. 1). 
Several laws and regulations affect the approach that the Forest Service generally takes in evaluating postfire recovery projects and activities: The National Forest Management Act of 1976 requires the Forest Service to, among other things, (1) develop a plan to manage the lands and resources of each national forest in coordination with the land management planning process of other federal agencies, states, and localities and (2) revise each plan at least every 15 years. Each forest plan—called a Land and Resource Management Plan—establishes how land areas within a forest may be used and governs individual projects or activities that occur within the forest. Individual projects or activities, such as building a road or harvesting timber, may take place only if they are consistent with the plan and after site-specific environmental review, which often includes public notice, comment, and administrative appeal. Under NEPA, agencies such as the Forest Service generally evaluate the likely effects of projects they propose using a relatively brief environmental assessment to determine if an EIS is needed. If the action would be likely to significantly affect the environment, a more detailed EIS is required. An agency may exclude categories of actions having no significant environmental impact—called categorical exclusions—from the requirement to prepare an EIS. One purpose of the EIS is to ensure that agencies have detailed information available to inform their decision making. Agencies such as the Forest Service give the public an opportunity to comment on draft environmental assessments and impact statements. In addition, the Forest Service has established procedures for administrative appeal of its decisions concerning projects and activities on National Forest System lands. As a general rule, once the administrative appeals process is complete, the public can challenge a decision about a particular project in federal court. 
In 2001, the Forest Service issued a rule for managing its inventoried roadless areas, which generally include areas without roads that are 5,000 acres or larger, or smaller areas contiguous to designated wilderness areas. This rule, which was intended to provide lasting protection for inventoried roadless areas within the National Forest System, generally prohibited road construction, road reconstruction, and timber harvesting. However, the U.S. District Court for the District of Wyoming found the rule unlawful and struck it down in 2003. The government did not appeal this decision and issued a new rule related to the roadless areas in 2005, which is also now in litigation. The new rule allows states to petition the Forest Service to issue regulations establishing management requirements for inventoried roadless areas within their states. The opportunity for submitting state petitions is available until November 13, 2006. Projects involving salvage harvests are governed by the Forest Service’s timber sales regulations and procedures. To sell timber, the forest staff identify the areas that they want to harvest—called sale units—identify the unit boundaries, and develop a timber sale contract that contains many standard provisions, such as limits on which trees can be harvested and requirements to prevent and control erosion. Sale units can be located along roads to allow access by logging trucks and equipment; logs are cut and hauled from the slopes by tractors or pulled by cables suspended above the ground. Sale units that are located farther away from roads—such as roadless areas—can be logged using helicopters. In such cases, loggers cut the trees and the logs are then flown out by helicopter. Timber sales are laid out by timber planners, and the sales are monitored by a timber sale administrator who visits the site to review contract provisions and harvest operations. 
A large fire such as the Biscuit Fire can cause major changes to a forest’s resources and planned program of work, such as the amount of timber to be sold and harvested, campgrounds and trails to be maintained, and areas of vegetation to be removed or reduced to help avoid future fires. The Siskiyou forest plan establishes goals and objectives for the desired future conditions of the forest that describe management of forest resources and activities such as timber, grazing, recreation, wilderness, and others. As with all land management activities, postfire recovery projects must be consistent with the forest plan. In the case of the Biscuit Fire, postfire recovery projects need to comply with the Siskiyou forest plan, which was approved in 1989. The projects also need to comply with the Northwest Forest Plan, a comprehensive document adopted in 1994 that amended several forest plans for the management of federal forest land in Washington, Oregon, and northern California. Old-growth forests are valued as habitat that includes large standing, dead, and down—fallen—trees in various stages of decay. The plan includes a combination of land allocations managed to protect and enhance habitat for late-successional and old-growth related species, while providing a sustainable level of timber sales, as well as standards and guidelines for the management of these land allocations. These standards and guidelines include requirements for retaining dead and decaying trees on the ground, as well as standing dead trees, called snags, that are essential habitat for many wildlife species. The standards and guidelines also impose restrictions on timber harvesting and road building in riparian areas—areas along streams, ponds, reservoirs, and wetlands—to limit the amount of sediment running into them. Postfire recovery projects are funded by various sources, principally appropriations and trust funds. 
The Forest Service conducts its rehabilitation and restoration activities through existing programs, including its forest management, watershed, recreation, wilderness, and construction programs, among others. To fund such activities, the agency uses appropriations from sources that include its National Forest System, capital improvement and maintenance, and wildland fire management accounts. In addition, the Forest Service uses the Knutson-Vandenberg (K-V) trust fund that collects receipts generated from timber sales to pay for reforestation and timber stand improvement in areas harvested for timber, as well as wildlife habitat and other improvements in sale areas. It also uses the Salvage Sale Fund, which collects receipts generated from salvage sales, to pay for future salvage sales. Other sources of funds, such as gifts, bequests, and partnerships, also fund postfire recovery projects. In developing the Biscuit Fire Recovery Project, the Rogue River-Siskiyou National Forest staff followed the Forest Service’s general approach for postfire recovery efforts, but several unique circumstances, combined, affected the time taken to develop the Project and the alternatives included in it. First, the size of the burned area—and subsequently the Project—complicated the environmental analysis and the time needed to complete and review it. For example, to assess resource conditions, such as identifying the extent of dead trees, the forest staff had to rely on remote sensing data that were difficult to interpret and time-consuming to verify. Changes in the remote sensing data throughout the development of the Project caused the salvage sale volumes in the different EIS alternatives to change. Second, before, during, and after the development of the Project and the EIS, the regulations and guidance governing activities that could occur in the inventoried roadless areas changed several times, in part due to litigation. 
Changes that allowed salvage harvest in the inventoried roadless areas directly affected the alternatives considered in the EIS and the time needed to develop them. Third, during development of the EIS, the forest staff were reorganized and downsized, although the effect on the EIS is difficult to quantify. According to the forest staff, the changes increased their workload and limited the amount of time they could devote to developing and implementing the Project. However, according to the Forest Supervisor and other managers, the forest had enough staff to develop and implement the various alternatives identified in the EIS. In the wake of the Biscuit Fire, the Rogue River-Siskiyou National Forest staff followed the Forest Service’s general approach to postfire recovery planning for large fires. The Forest Service does not have a national program directing postfire recovery efforts or nationwide guidance on the development of recovery projects after a fire. However, according to Forest Service officials, regions and forests that had experienced past large fires with severe damage to their resources followed a general approach of assessing the conditions of forest resources after the fire, identifying projects needed to rehabilitate and restore damaged resources and opportunities for salvage harvest, and following the steps documented in the Forest Service’s NEPA manual, which include implementing and monitoring the chosen project. Figure 2 shows the time line of events in the development of the Project compared with the Forest Service’s general approach. Generally, to determine management actions to recover a burned area, forest staff assess the postfire conditions and evaluate various actions that could help to achieve their forest plan’s desired conditions. 
For large fires and recovery projects specifically, as shown in figure 2, forest staff (1) assess the resources in the burned areas; (2) develop a proposed action to recover resources, which can include multiple activities; (3) issue a Notice of Intent (NOI) to prepare an EIS; (4) develop and analyze alternatives to the proposed action; (5) issue a draft EIS and solicit public comments on the draft; and (6) issue a final EIS and record of decision to make a formal decision about the project. At this point, the forest staff implement and monitor the project, although it may be appealed or subject to litigation. Some projects can be finished within a few years after the fire; others may be implemented years after the fire. In the case of the Biscuit Fire Recovery Project, the forest staff wrote a formal postfire assessment, published in January 2003, 3 months after the fire was declared controlled. The Biscuit postfire assessment was conducted by a team of forest resource specialists, with expertise in forestry, recreation, engineering, hydrology, soil science, and fish and wildlife. The team visited key areas burned by the fire to view and measure the effects of the fire and to determine how severe the effects were on different resources. They then identified potential work to repair damaged resources. During this assessment, the team also held multiple meetings to gather the public’s input on what to do to repair the damage caused by the fire. In January 2003, after the Biscuit postfire assessment was completed, forest officials began the NEPA process by identifying members of an interdisciplinary team made up of about 30 resource specialists from the Rogue River-Siskiyou National Forest and other units of the Forest Service. Over the next few months, the team developed the purpose and need for the recovery work and then developed a proposed action, or a set of activities to be conducted in the area. 
In March 2003, the forest staff published an NOI in the Federal Register announcing that it would prepare an EIS for the Biscuit Fire Recovery Project. In it, the forest staff identified the purpose and need for action in the Biscuit Fire area: recovery of potential economic value through salvage harvest; restoration of vegetation altered by the fire—in particular, reforestation; protection of late successional habitat from future fire and insect damage; protection from future fire through hazardous fuel reduction; and learning about postfire management activities. The Project originally proposed in the NOI included salvage harvest on about 7,000 acres of matrix lands, totaling 90 million board feet; fuel reduction on 16,000 acres including late-successional reserve lands; meadow habitat treatments; road closures and repair; and reforestation on about 30,000 acres. As shown in figure 2, from March through October 2003, the interdisciplinary team developed alternatives for the proposed action and analyzed their effects on the environment. According to forest and regional officials, the team sought to develop a range of alternatives that were reasonable, including a range of salvage options, fuel reduction alternatives, and other activities. According to the Department of Agriculture’s Office of General Counsel, the agency is given discretion in developing a reasonable range of alternatives but typically develops two or more alternative ways of meeting the purpose and need of the proposal—in addition to an alternative that considers no action. During the process of developing alternatives, the team also identified projects in the Biscuit Fire area that could be conducted under categorical exclusion, including repairing recreational trails and sites; road maintenance such as replacing culverts; reforestation of burned areas identified as plantations—areas managed for harvest; and salvage harvesting trees that posed a hazard along roads. 
The team and the forest staff documented these categorically excluded projects separately and conducted them in 2003 and 2004 as the EIS for the Biscuit Fire Recovery Project was being developed. In addition, the forest staff held “deck” sales in which they sold trees that had been cut by firefighters during suppression activities and piled up or “decked.” According to Forest Service officials, because the environmental effects of cutting the trees occurred during the firefighting, an emergency activity, and the hauling would have limited environmental effects, the deck sales were not subject to a NEPA analysis. The Rogue River-Siskiyou National Forest staff issued its draft EIS for the Biscuit Fire Recovery Project in November 2003, a year after the fire was controlled, and allowed public comment through January 2004, as shown in figure 2. Approximately 23,000 public comments were received, summarized, and incorporated into the final EIS, which was issued in June 2004. A month later, in July 2004, the forest staff issued three records of decision—one each for the inventoried roadless areas, the matrix areas outside inventoried roadless areas, and late-successional reserves outside inventoried roadless areas. According to Forest Service officials, the decision to issue three records of decision was made to separate the more controversial projects—specifically the salvage sales in the inventoried roadless areas—from the less controversial projects to allow the latter to move forward without appeal and litigation. With the issuance of the final EIS and records of decision, an emergency situation determination approved by the Pacific Northwest Regional Forester in June 2004 became effective for the salvage sales in the matrix and late-successional reserve areas. The determination stated that the government would lose approximately $3.3 million if the sales were delayed for the full 105-day appeal period. 
The decision did not apply to the inventoried roadless area sales because, according to agency officials, the forest staff were not ready to conduct these sales at the time of the decision. Although the region was the first in the country to define an emergency under the economic criteria in the Forest Service regulations, the Biscuit project was not the first recovery project to which the region applied this argument. Overall, the general approach to postfire recovery efforts does not have specific time frames associated with it. According to Pacific Northwest Region officials, the NEPA analyses conducted in the region can take from 1 to 3 years to complete. As figure 2 shows, development of the Biscuit Fire Recovery Project took about 1¾ years to complete after the fire was controlled, from November 2002 through July 2004. The records of decision were issued in July 2004, and the forest staff awarded the first of several salvage sales the same month. The emergency situation determination allowed the forest staff to begin implementing the Project immediately, without waiting up to 105 days for the appeal process to conclude. However, according to Forest Service officials, because the harvest season in this region typically ends in September, the purchasers did not have time to schedule the Biscuit Fire harvest into their workloads, and most of the salvage sale harvest occurred in 2005—3 years after the fire. This delay in the salvage harvest concerned all parties involved because of the additional loss of the commercial value of the trees. One of the key lessons identified in a regional evaluation after the 2002 fire season was that the identification of potential salvage sales should begin immediately after a fire. 
At the national level, in December 2004, an interregional committee published a strategy for postfire recovery, which identified challenges for managing postfire environments and proposed potential actions to improve the identification of salvage sales after large fires. According to Forest Service Washington Office officials, these actions have not yet been implemented because the agency has instead been focused on formulating broader restoration policy that encompasses postfire recovery actions. While the Rogue River-Siskiyou National Forest staff followed the general approach for postfire recovery on Forest Service lands, three unique circumstances affected the time taken to develop the Project EIS and the alternatives that were included in it. First, the size of the fire and proposed recovery activities increased the complexity of the analysis and review of the overall Project. Second, changes in the regulations and guidance for inventoried roadless areas that occurred during development of the Project caused alternatives to be added to the analysis and increased the time taken for the analysis. Third, the forest staff planned and implemented a major reorganization and downsizing during the development of the Project. Combined, these unique circumstances affected the time taken to develop the Project EIS, although it is difficult to distinguish the individual effect of each circumstance. In addition, the size of the fire and the changes to the management activities allowed in the inventoried roadless rules caused changes in the amount of timber considered for salvage sale in the Project alternatives and added two alternatives to the EIS. Figure 3 shows the events surrounding each unique circumstance compared with the events in the development of the Project. The first circumstance unique to the Biscuit Fire that affected development of the Project was the size of the area burned by the fire and, subsequently, the size of the area included in the Project. 
The size increased the complexity and amount of work needed to analyze and review resource conditions, Project alternatives, and potential impacts. While the fire burned almost 500,000 acres, the forest staff excluded the Kalmiopsis Wilderness from the postfire recovery work, leaving about 320,000 acres of nonwilderness area for evaluation. Normally, to assess the conditions of resources burned in a fire, forest staff conduct site visits, take measurements and samples of different resources and conditions, and identify potential rehabilitation and restoration activities. For large fires, they can use aerial photographs and satellite images. However, the Biscuit Fire was much larger than other fires that were considered large, causing the forest staff to conduct the postfire assessment and to use different sources of remote sensing data to assess the condition of forest resources. The size of the fire and Project also increased the attention and amount of review the Project received. The forest staff decided to conduct a postfire assessment of the Biscuit Fire because of the large area that had been burned and needed to be assessed to determine what recovery actions were needed. However, according to forest and regional officials, while the data gathered and analyzed during the assessment were useful in moving forward with recovery, writing the formal report added time to the process. Forest officials involved in the Biscuit postfire assessment stated that because the fire was so large, and access was limited due to the lack of roads and steep terrain, they could only conduct limited site visits to gather information on the condition of forest resources that had been burned and those that remained unburned. The assessment, according to the officials, was useful for the purposes of getting a head start on gathering data on these resource conditions, which were ultimately useful in the NEPA analysis. 
At the same time, forest and regional officials acknowledged that the assessment did not help them narrow the range of projects to be conducted and was time-consuming and expensive, causing several weeks of delay in the NEPA analysis. According to these officials, the postfire assessment—while useful in soliciting public comments about what should be done to recover the burned area—contained a wish list of projects that could be done regardless of funding sources and schedules. As such, the assessment may have set expectations too high about what could be practically accomplished, given funding and time. According to the Forest Supervisor, the postfire assessment should have focused on time-sensitive projects to facilitate the NEPA process. In response to the lessons learned from the 2002 fire season, the region will conduct postfire assessments separately from the assessment of salvage opportunities and will deploy a rapid assessment team to quickly identify salvage opportunities after a fire to prevent delay and decay of trees that can be harvested. The size of the burned area and the increased complexity of the assessment were also reflected in the need to use remote sensing data to adequately assess the resources in such a large area. Changes to the sources of data added time to the EIS development and affected the salvage harvest volumes being considered in different alternatives. Given the size of the burned area and Project area, the forest staff used aerial and remote sensing data, in addition to site visits to verify the data, to assist in the analysis of vegetation conditions, burned timber available for salvage, and wildlife habitat conditions. Overall, the data helped the staff in covering a large area but also required additional analysis work that added to the time needed to develop the EIS. The interdisciplinary team started using aerial photographs taken at the end of the fire, as shown in figure 3, to identify potential areas for salvage harvest. 
The team used these photographs to identify patches of dead trees that were a certain size and density; however, because the locations seen in the photographs were sometimes inaccurately identified and insufficiently detailed, the forest crews did not always find enough dead trees when they visited the sites. By June 2003, the wildlife staff on the team determined that satellite images taken of the burned area more clearly showed areas of dead timber than the aerial photographs. Because the team did not want to use two sets of data—the aerial photographs and the satellite images—the team selected the satellite images as the data set for the EIS analysis. Changing the underlying maps in the Geographic Information System, which the forest staff used to prepare maps for the EIS analysis, added time. In addition to adding time for analysis, the data changes had an effect on the EIS alternatives being considered by the team. For example, the maximum amount of timber estimated as available for salvage harvest decreased from about 1 billion board feet in the draft EIS issued in November 2003 to about 600 million board feet in the final EIS issued in June 2004, due to the use of more accurate satellite data, more field verification of data, and application of strict salvage guidelines for the late-successional reserves. Finally, the size of both the fire area and the Project resulted in additional review by Forest Service regional officials and Department of Agriculture officials, as well as increased attention by state officials. The additional review included two evaluations by the region’s Environmental Review Committee—a group responsible for examining more complicated EIS documents in the region for substantive concerns and to ensure compliance with Forest Service regulations. The Environmental Review Committee reviewed the EIS in February 2004 and again in April 2004 before its issuance. 
According to regional staff, the evaluations identified the need to revise the document, and these revisions required a few additional weeks to complete. In addition, the review included visits and several briefings for the Undersecretary and Deputy Undersecretary of Agriculture for Natural Resources and Environment and key state and tribal officials to apprise them of the status of the EIS (see fig. 3). According to the Undersecretary, large, controversial fires and recovery projects such as the Biscuit Fire Recovery Project elicit additional attention from department officials because of increased congressional interest. These briefings took some time, but according to the Forest Supervisor, did not affect the time needed to produce the EIS. The second circumstance unique to the Biscuit Fire that affected the development of the Project was the uncertainty of the regulations and guidance governing road building and salvage harvest activities in inventoried roadless areas, which affected the alternatives in the Project EIS and the time needed to analyze them. Figure 4 shows the inventoried roadless areas in the fire area. As can be seen from figure 3, the regulations and guidance governing activities in inventoried roadless areas changed several times. The first change occurred in December 2002. Regulations promulgated in 2001 would have limited road building and timber harvest in inventoried roadless areas; however, in May 2001, the U.S. District Court for the District of Idaho prohibited the Forest Service from implementing the regulations. Subsequently that year, to help provide guidance for addressing road and timber management activities until land and resource management plans are amended or revised, the Forest Service issued an interim directive that allowed some road building and timber harvest activities in the areas with the approval of the Chief of the Forest Service or a Regional Forester. 
In December 2002, immediately after the fire was controlled and as the forest staff developed the postfire assessment, the U.S. Court of Appeals for the Ninth Circuit reversed the Idaho district court’s decision, effectively reinstating the 2001 regulations. The plaintiffs petitioned the appellate court to rehear the case, which the court denied in April 2003. During this time, the interdisciplinary team was developing its proposed action and began developing its EIS alternatives. In April 2003, the team had identified seven alternatives, the largest of which included 386 million board feet of salvage harvest from matrix, late-successional reserve, and inventoried roadless areas. However, by May 2003, after the appellate court declined to rehear the plaintiffs’ case, the team narrowed the alternatives to five, the largest of which included 104 million board feet from matrix lands and fuel reduction work and did not include salvage harvest in the inventoried roadless areas. In July 2003, a convergence of events led the forest staff to develop two new alternatives with larger salvage harvest amounts, including amounts in the inventoried roadless areas. First, that month, the 2001 regulations were again enjoined, this time by the U.S. District Court for the District of Wyoming. Second, the Forest Service’s interim directive on inventoried roadless areas expired and was not reinstated until July 2004. During this time, forest supervisors were authorized to make road and timber management decisions within inventoried roadless areas consistent with the applicable land management plan. And third, an Oregon State University report identified 2 billion board feet as available for salvage harvest in the Biscuit Fire area, many times greater than the largest draft EIS estimate. 
According to Forest Service officials, the amounts differed because the purpose of the Oregon State University report was to identify all timber available for salvage regardless of legal or other restrictions on harvest. The district court’s decision came a week after the Oregon State University report and during the same week that the Forest Supervisor and Project leader visited Washington to brief Forest Service Washington Office staff, Oregon congressional delegation members, and Department of Agriculture officials on the five alternatives in the EIS—none of which included salvage harvest in the inventoried roadless areas. The forest officials providing the briefing received several comments about the need for more logging that would include harvest in the inventoried roadless areas. According to forest and regional officials, the failure to consider at least one alternative proposing salvage harvest within inventoried roadless areas might have made the EIS vulnerable to legal challenges on the ground that the Forest Service had not considered a reasonable range of alternatives. Despite pressure to complete the EIS quickly so that any salvage harvest could begin as soon as possible, forest and regional officials determined that an estimated 8-week delay to conduct the analysis of new alternatives would be acceptable. Between the end of July and October 2003, the interdisciplinary team developed two additional alternatives for the draft EIS that included about 1 billion board feet and about 500 million board feet of salvage harvest, respectively.

The third circumstance unique to the Biscuit Fire that affected the development of the Project was a reorganization and downsizing of the Rogue River-Siskiyou National Forest staff. Beginning in the 1990s—before and after the two forests were administratively combined—the Siskiyou and Rogue River National Forest workforce shrank as timber harvest amounts declined.
The forests’ annual operating budget dropped from $33.6 million in fiscal year 2001 to $25.1 million in fiscal year 2006. The number of staff also dropped, falling from 619 at the beginning of fiscal year 2002 to 400 at the start of fiscal year 2005. Beginning in January 2003, just as the forest staff issued its postfire assessment, the staff reorganized to address decreasing budgets and staff numbers. As shown in figure 3, the forest staff issued a strategic business plan in November 2003, just as the draft EIS was released and the two forests joined as one administrative unit. More than 150 positions were identified that could be officially abolished to achieve the reorganization option the Forest Supervisor selected. The forest staff had begun identifying positions to be abolished in August 2002, placing 35 positions on the Forest Service’s Workforce Reduction and Placement System list, which allows the affected employees to receive priority in moving to vacant positions elsewhere in the Forest Service. After its strategic business plan was issued, the forest staff began officially abolishing positions in June 2004; from that month through October 2004, 48 positions were abolished.

The effect of this downsizing and reorganization on the development of the EIS is difficult to quantify. According to forest staff involved with the interdisciplinary team that developed the EIS, they worked on both the EIS and Project in addition to their ongoing daily responsibilities. They contrasted this experience with a previous large fire on the forest’s lands—the Silver Fire in 1987—for which there was dedicated staff for the EIS and recovery project. However, according to the Forest Supervisor and other managers, the forest had enough staff to develop and implement the various alternatives identified in the EIS.
The Forest Supervisor stated that he directed staff to place priority on the Project and, according to the Regional Forester, additional staff were available to help the team, if needed.

As of December 2005, the forest staff had nearly completed 12 salvage sales in the matrix and late-successional reserve areas; however, incomplete sales information and a lack of comparable economic data make a comparison of the financial and economic results of the sales with the agency’s initial estimates difficult. For the sales conducted through 2005, purchasers harvested almost 60 million board feet, which is much less than the 367 million board feet proposed for sale in the EIS. Forest staff overestimated the timber available for harvest and, in addition, some timber decayed during the preparation of the EIS and salvage sales, further reducing the volume of available timber. For fiscal years 2003 through 2005, the Forest Service and other agencies spent about $5 million on the sales and related activities such as law enforcement. In return, the agency collected about $8.8 million from the sales. From these receipts, the Forest Service plans to spend an additional $5.7 million in the next several years to remove brush, reforest, and conduct other work in sale areas. In the EIS, the sale expenditures and receipts were estimated to be about $24 million and $19.6 million, respectively, and the salvage harvest was expected to generate about 6,900 local jobs and $240 million in regional economic activity. However, it is premature to compare the results through 2005 with the estimates because the Forest Service will generate additional expenditures, revenues, and potential economic activity from two sales in June and August 2006. Even if complete sale results were available, methodological differences and a lack of comparable economic data complicate the comparison of the salvage sale results and EIS estimates.
For example, the financial comparison is complicated by the fact that the EIS expenditure estimates are based on different activities than the reported expenditures through fiscal year 2005; adjustments can be made to allow a comparison, but they are complicated. Similarly, the economic comparison is complicated by the fact that the Forest Service does not report the economic results of sales. The analysis needed to report such data can be done, but according to Forest Service officials, the agency does not conduct this type of analysis because the primary reason for preparing EIS estimates is to compare the relative economic effects of salvage alternatives and not to provide a precise prediction of the outcomes of the sales.

As of December 2005, the Rogue River-Siskiyou National Forest staff had completed 12 salvage sales identified in the Biscuit Fire Recovery Project EIS and records of decision. After the EIS and records of decision were released in July 2004, the forest staff prepared and completed 12 sales totaling about 67 million board feet of timber on almost 3,700 acres of land in the matrix and late-successional reserve areas, as shown in figure 5. One sale occurred in 2004; the others occurred in 2005. Although several lawsuits were filed against the sales, they generally did not delay the implementation of the salvage sales in the matrix areas. A timber industry trade association and timber companies filed the first case against the Project alleging, among other things, that the Project violated the National Forest Management Act by failing to implement required reforestation activities. Environmental groups also filed lawsuits against the Project alleging, among other things, that the Forest Service (1) allowed unauthorized personnel to mark trees for harvest, (2) performed an inadequate NEPA analysis, and (3) lacked authority to issue the emergency situation determination.
Two court orders stemming from this collection of cases affected the timing of Project activities. First, the U.S. District Court for the District of Oregon issued a preliminary injunction on August 3, 2004, prohibiting certain salvage activities from proceeding because the sales contracts failed to require Forest Service personnel—rather than purchasers—to identify standing dead trees within the sale area that were not to be harvested for environmental reasons. The court lifted this injunction on August 20, 2004, after the agency amended the contracts. Second, the U.S. Court of Appeals for the Ninth Circuit issued an emergency stay prohibiting the late-successional reserve sales from proceeding pending resolution of an environmental group’s appeal of a district court ruling in favor of the Forest Service. The emergency order was in effect from September 7, 2004, through March 7, 2005. This period included the winter months during which sales activity can be impossible because of weather conditions and, when possible, may be restricted to limit the risk of spreading a particular fungus along wet roads. The forest staff provided a waiver to begin harvesting in March 2005 rather than June, the usual end of the restrictions on salvage harvest activities. Table 1 shows the volume of timber sold and harvested on the 12 sales as of December 2005. According to Forest Service staff, the majority of the timber volume harvested occurred in 2005. In general, the volume harvested was less than the volume sold because the sales were “scaled” sales that allowed the purchasers—with the concurrence of the timber sale administrator—to leave trees that did not have good timber and pay only for the timber removed from the sale units. In the case of the Horse sale, the harvested volume was greater than the sale volume because additional trees died after the sale contract was awarded but before the harvest was complete. 
According to a forest official, these trees posed a hazard to the loggers in the sale unit, so the timber sale administrator added them to the sale contract. Through 2005, the agency had sold nothing in the inventoried roadless areas but decided in spring 2006 that it would offer two sales—Mike’s Gulch and Blackberry—in these areas. In the records of decision, the forest staff had identified salvage harvest units in the inventoried roadless areas of the forest with a total of 194 million board feet available. In laying out salvage sales, the forest staff planned to offer about 38.1 million board feet in the two sales and determined that the remaining harvest units did not have enough merchantable timber left for sale. The forest staff selected the sale areas that had the best timber volume and would have the least effect on roadless and potential future wilderness values. Mike’s Gulch was advertised and sold in June 2006; the forest staff sold 261 acres with about 9.3 million board feet for about $300,000. In August 2006, the forest staff sold almost 7.9 million board feet on 274 acres in the Blackberry sale for almost $1.7 million.

In addition to the salvage sales that resulted from the Biscuit Fire Recovery Project EIS and records of decision, the forest staff completed eight salvage sales of timber using a categorical exclusion that did not require the preparation of an EIS. These sales involved trees that the forest staff identified as hazardous because they could fall on roads. In addition, the forest conducted six deck tree sales. The hazard and deck tree sales took place in 2003, while the development of the Biscuit Fire Recovery Project was ongoing. The deck sales were completed in 2003, while the hazardous trees were harvested primarily in 2004. Table 2 shows the individual sales and timber volumes harvested.
Although not all salvage sales planned in the EIS and records of decision are complete, the acres and amount of timber salvaged in the matrix and late-successional reserve areas were much less than anticipated by the forest staff in the EIS. In the records of decision, the forest staff estimated that they would sell about 367 million board feet of salvage timber, which would be removed from 18,939 acres. Through December 2005, 44 million board feet had been removed from 3,700 acres, and an additional 15 million board feet had been removed in the hazard and deck tree sales. In a March 2006 report, the forest staff identified the following two reasons that the amount sold was much less than they had estimated:

Overestimation: The original amount of timber available for harvest was overestimated for three reasons. First, the forest staff had difficulty applying the legal requirements in the Northwest Forest Plan to protect late-successional reserve habitat and riparian corridors. The staff had adjusted the timber volume estimates in the EIS to remove late-successional reserve habitat and riparian reserves; after the issuance of the EIS and records of decision, when the staff planned the sales, they discovered more riparian areas that needed protection and identified more trees that they needed to leave to meet habitat requirements. Second, the forest staff discovered that the hazard salvage sale volumes had not been removed from the EIS volumes. Third, the volume estimates based on remote sensing data were inaccurate—when the forest staff visited the sale sites and viewed the actual trees rather than photos or images, the trees were either alive or not large enough for sale.

Decay: The amount of timber that would be lost to decay was underestimated. Although the forest staff estimated decay rates accurately, the EIS estimate was based on one-third of the timber harvest occurring in 2004 rather than 2005, when most of the salvage harvesting actually occurred.
In planning the sales, the forest staff determined that more trees had decayed than they had estimated in the EIS. As a result, they dropped some sale units and acres because the trees no longer had commercial value or there were too few trees with remaining value to make the sale unit economical to harvest. In addition, the March 2006 report identified 8,174 acres in inventoried roadless areas that had not been harvested because of ongoing litigation. In April 2005, the Forest Service agreed with plaintiffs in one of the cases pending before the U.S. District Court for the District of Oregon not to harvest in the inventoried roadless areas until a new roadless rule had been finalized; the rule was finalized in May 2005. In August 2005, the state of Oregon and two other states—California and New Mexico—filed a lawsuit asserting that the Forest Service rescinded the 2001 roadless rule without carrying out the environmental analysis NEPA requires. Throughout 2005, the Forest Service held ongoing discussions with the Governor of Oregon to delay action on inventoried roadless area sales to await a decision on one of several lawsuits before the U.S. District Court for the District of Oregon challenging the adequacy of the EIS for the Biscuit Fire Recovery Project. According to Forest Service officials, they were trying to avoid further litigation concerning the roadless area sales. In February 2006, the district court rejected the challenge. In June 2006, after the forest staff auctioned the first inventoried roadless area sale—Mike’s Gulch—an environmental group challenged this sale in district court, alleging that the Forest Service violated NEPA by not preparing a supplemental EIS to review significant new information concerning adverse environmental effects of salvage logging within inventoried roadless areas. The court refused to issue a preliminary injunction against the sale, holding that the environmental group was unlikely to prevail.
In July 2006, the plaintiffs in the states’ roadless rule case moved for a temporary restraining order against the sale. After the Mike’s Gulch purchaser agreed not to start operations until August 4, 2006, the plaintiffs withdrew the motion. The purchaser began harvesting on August 7, 2006. The purchaser of the Blackberry sale began harvest on August 28, 2006.

From fiscal years 2003 through 2005, the Forest Service reported that it spent an estimated $4.6 million to plan, prepare, and administer the salvage sales in the Biscuit Fire Recovery Project, while other agencies spent an estimated $350,000. Forest Service expenditures include NEPA planning, salvage sale preparation, and administration for fiscal years 2003 through 2005, and indirect activities that support the Forest Products program—such as information technology, budget, financial, and public affairs activities. Other agencies’ expenditures were for activities related to Biscuit Fire salvage sales, including Department of Agriculture and Department of Justice attorneys’ legal services in litigation over the salvage sales through 2005. Table 3 shows the Forest Service’s and other agencies’ estimated expenditures on the Project salvage sales by fiscal year. Appendix I discusses the methodology used to estimate Forest Service expenditures. Because the Project’s salvage sales are not complete and work will continue through at least fiscal year 2006, additional expenditures for the salvage sales can be expected. Also, the forest staff plans to spend $5.7 million in the next several years to remove brush, reforest the sale areas, and repair and maintain roads. This figure is based on salvage sale receipts collected and deposited into the K-V Fund, Brush Disposal Fund, road maintenance account, and other accounts to pay for work in the Biscuit Fire salvage sale areas.
The Brush Disposal Fund is a permanent fund that holds deposits to pay for certain brush disposal work on all timber sales, including salvage sales. Forest Service staff complete brush disposal work using funds collected as an additional charge to the purchaser based on the amounts paid for the trees harvested. The funds are deposited in the Brush Disposal Fund, and the agency generally seeks to spend them within 3 years of the completion of the sale. The road maintenance account is a trust fund created with purchasers’ deposits for roadwork that is then conducted by the Forest Service.

In total, for the 12 salvage sales and 14 hazard and deck sales completed through 2005, the forest staff collected more than $8.8 million. Of this amount, about $3.7 million was collected from the Project’s salvage sales, while more than $5.1 million was collected from the sale of hazard and deck trees. Table 4 shows the revenues generated for the Project’s sales, as well as the hazard and deck tree sales. Of the total receipts collected, about $6.8 million was collected as revenue for the sales, and about $2.1 million was collected as deposits for brush disposal, road maintenance, and other work. From the $6.8 million, the forest staff deposited $3.7 million into the K-V Fund for reforestation and other rehabilitation work associated with the sale and the fire; most of the remaining funds were deposited into the Salvage Sale Fund to support future salvage sales in the region. Of the $2.1 million in deposits, about $1.2 million was deposited into the Brush Disposal Fund, $538,000 was deposited for road maintenance, and about $290,000 was deposited for other purposes that include contracts for companies that weigh and measure the harvested trees—called scaling contracts.
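The receipts figures above partition the same total two ways: by sale type and by receipt category. A minimal arithmetic sketch of that cross-check, using the rounded dollar figures as reported (so the two partitions agree only to within rounding error), might look like:

```python
# Cross-check of the reported Biscuit Fire sale receipts. All figures are
# rounded values from the report, in millions of dollars, so the totals
# agree only to within rounding error.

# Partition 1: by sale type.
project_sales = 3.7      # receipts from the 12 Project salvage sales
hazard_deck_sales = 5.1  # receipts from the 14 hazard and deck tree sales
total_collected = project_sales + hazard_deck_sales

# Partition 2: by receipt category.
sale_revenue = 6.8                # collected as revenue for the sales
deposits = 1.2 + 0.538 + 0.290   # brush disposal + road maintenance + other

# Both partitions should account for roughly the same $8.8 million total.
print(round(total_collected, 1))          # 8.8
print(round(sale_revenue + deposits, 1))  # 8.8
```

Both sums land at about $8.8 million, consistent with the total the report cites.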
While the Biscuit Fire Recovery Project contains estimates of the financial and economic results of the salvage sales for each proposed alternative, a comparison of the estimates with the results is difficult. First, the incomplete sales mean that financial and economic data for the salvage sales are also incomplete, which makes a comparison of the sales’ financial and economic results with the EIS estimates premature. Furthermore, even with complete sales data, the comparison of the estimates with the final sales’ results is complicated by methodological differences in the way the expenditure estimates and results are calculated and by a lack of comparable economic data.

The Biscuit Fire Recovery Project EIS estimated that the salvage sales planned under the alternative selected by the Forest Supervisor would cost about $24 million to prepare, administer, and reforest and would generate about $19.6 million in revenues for the government—about $13 million from sales receipts and $6.6 million from brush disposal deposits. These funds, according to the Project EIS, would be available to help pay for postfire recovery activities. In addition to financial revenues for the federal government, the EIS estimated the economic effects of the salvage sales for each alternative. The Project EIS estimated the direct and indirect economic effects of the sales in each alternative for five counties in southwest Oregon—Coos, Curry, Douglas, Jackson, and Josephine—and examined the economic sectors affected by the salvage sales, such as wood manufacturing, construction, and retail trade. The EIS estimated that the salvage logging in the selected alternative would generate about 6,900 local jobs and $240 million in regional economic activity related to the harvesting and processing of the timber. Because the Forest Service held two additional salvage sales for the Project in 2006, it is premature to compare the forest’s financial and economic results with the estimates in the EIS.
With additional sales, the Forest Service will have additional, unknown expenditures and revenues, so the total results across all sales cannot yet be compared with the estimated results. A comparison of the results through 2005 with the EIS estimates could be made if the estimates were available on a sale-by-sale basis; however, according to a Forest Service official, the EIS estimates are averaged across the sales and are reported as a total only, not separately for each sale. Unlike estimates for typical timber sales, which have well-defined units and volumes, the EIS estimates were necessarily formulated using several broad assumptions about the salvage sale units and the timber volume available in them, as well as harvesting methods and average purchaser costs. Because the forest staff ultimately changed sale units and recombined units into different sales, the units in the EIS estimate differ from those sold. According to a Forest Service official, these assumptions and average prices make the estimate less precise, but they had to be made because the size of the fire and the number of sales prevented the forest staff from making more precise estimates. Similarly, the economic estimates cannot be compared with the sale results because the appropriate regional data, such as jobs created by salvage sales, cannot be calculated until the sales are complete.

Although a comparison of the financial results of the Project’s salvage sales is premature because the sale results are incomplete, an examination of the volume and prices paid—both components of revenue—indicates that the EIS overestimated volume and underestimated prices received for potential sales. The amount of timber volume sold and removed from the 12 salvage sales was much less than the EIS estimated was available.
The EIS estimated that 173 million board feet of the total 367 million board feet, or 47 percent of the total timber volume estimated for sale, would be available in the matrix and late-successional reserve areas, while the remaining 194 million board feet would be available in the inventoried roadless areas. By the end of 2005, the forest staff had sold 67 million board feet from the matrix and late-successional reserves. With regard to price, the EIS estimated that the timber sales would generate receipts of $37 per thousand board feet. The actual price received for the 12 salvage sales averaged $47 per thousand board feet, while the actual price received for the hazard sales averaged $293 per thousand board feet and for the deck sales averaged $397 per thousand board feet. The difference in prices reflects some difference in quality, because the hazard and deck trees were removed a year or so earlier. It also reflects the fact that the hazard trees were near roads and the deck trees had already been logged, meaning purchasers had minimal or no logging costs.

Even when the salvage sales are complete and final data are available on sale expenditures, revenues, and economic results, certain methodological factors complicate the comparison of the sale results with the EIS estimates. Specifically, the Forest Service’s estimated expenditures and those estimated in the EIS were calculated for different purposes and, therefore, do not contain the same items. For example, the EIS estimates do not include expenditures on NEPA, indirect costs, or law enforcement and litigation, while the forest’s estimated expenditures for fiscal years 2003 through 2005 do include these expenditures. According to a Forest Service official, the purpose of the EIS is to compare alternatives and assess the differences among them; therefore, certain costs that are the same for each alternative, such as NEPA and indirect costs, are not included.
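As a rough sketch (not a calculation from the report's methodology), the EIS's total volume estimate and its per-unit price assumption can be multiplied to see whether the implied receipts line up with the roughly $13 million in sales receipts the EIS projected:

```python
# Rough consistency check of the EIS volume and price assumptions.
# Conversion: 1 million board feet = 1,000 thousand board feet (MBF).

eis_volume_mmbf = 367    # total volume estimated for sale, million board feet
eis_price_per_mbf = 37   # estimated receipts, dollars per thousand board feet

implied_receipts = eis_volume_mmbf * 1_000 * eis_price_per_mbf  # dollars
print(implied_receipts)  # 13579000, i.e., about $13.6 million
```

The implied figure is in line with the approximately $13 million in sales receipts projected for the selected alternative, suggesting the volume and price assumptions were internally consistent even though both diverged from the actual sale results.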
On the other hand, the expenditures reported by the forest staff for fiscal years 2003 through 2005 include those expenditures that can be allotted to salvage sales—such as NEPA expenditures—for the purpose of showing full expenditures related to the Biscuit Fire salvage sales. A comparison of these amounts would be complicated by the adjustments and assumptions that would need to be made to facilitate it. With regard to the economic analysis, even at the completion of the sales, the Forest Service does not conduct the type of analysis needed to report the actual economic results of the sales, which would allow a comparison with the estimates. The needed analysis would require the collection of appropriate economic data, as well as the formulation of appropriate economic models to clearly separate the effects of salvage sales on jobs and on the economy of the region from the effects of other concurrent regional and national factors. This retrospective analysis is difficult but could be done; however, according to a Forest Service official, the agency does not typically conduct the analysis needed to report these results because the primary reason for preparing EIS estimates is to compare the relative economic effects of salvage alternatives, not to provide a precise prediction of the results of the sales. However, given that the volume of timber sold through December 2005 is substantially less than the volume of sales assumed in the EIS for the selected alternative, we would expect the actual economic results of the sales to be less than the EIS estimate, all else being equal.

The Rogue River-Siskiyou National Forest staff have begun implementing other activities in the Project’s records of decision, but completing these activities depends on the extent of salvage sales, workload schedules, salvage sale revenues, and other funding.
In the Project’s records of decision, the forest staff included numerous activities to help burned areas recover, including postsale activities such as reforestation that would be conducted in salvage sale areas. Table 5 shows the key activities included in the Biscuit Fire Recovery Project records of decision and the amount of work planned and completed for each through December 2005. The forest staff have begun work on reforestation, brush disposal, and road maintenance, but the extent of this work depends, in large part, on the amount of salvage harvest activity that occurs. The forest staff have also begun work on fuel management zones and wildlife habitat activities—which are planned for both within and outside the salvage sale areas—but completing this work depends on uncertain schedules and funding sources. In addition to the activities in table 5, the records of decision for the Project proposed a large-scale study of postfire management activities such as salvage harvest and fuel management zones, as well as monitoring of the Project’s activities. The forest staff are still planning these activities, which are not yet funded.

Through December 2005, forest staff had begun work on brush disposal, reforestation, and road maintenance activities. These activities have funding sources because the Forest Service collects and deposits sale revenues for brush disposal and reforestation activities and because much of the road maintenance work is conducted by the sale purchaser. However, the amount of work that the forest planned to accomplish for each of these activities has changed as a result of the amount of timber sold and harvested in the Biscuit Fire salvage sales. For example, the amount of brush disposal work—an estimated 18,939 acres in the records of decision—will be reduced because fewer acres will be salvage harvested than originally planned.
As shown in table 5, the forest staff have accomplished 554 acres of brush disposal, also referred to as slash disposal or activity fuel treatment. After a salvage sale, forest staff are responsible for brush disposal, which usually entails burning piles or areas that are covered with vegetative debris from the sale, such as stumps, chunks of wood, broken tree tops, tree limbs and branches, rotten wood, or damaged brush resulting from salvage logging operations. In general, under the Biscuit Fire salvage sale contracts, the purchasers were required to create piles of such debris on the acres logged before the forest staff conducted their brush disposal work. While the forest staff had planned to accomplish 18,939 acres of brush disposal, they have revised the total amount needed to about 3,000 acres because the acres sold for salvage harvest were much less than anticipated—about 3,700 acres through December 2005. The forest staff do not need to conduct brush disposal if the anticipated salvage sales do not occur. In addition, the forest staff said that they will not conduct work on every acre of a salvage sale unit because, in some cases, the treatment is not needed. As of the end of December 2005, the forest staff had collected $826,000 in the Brush Disposal Fund for the Biscuit Fire salvage sales.

As of December 2005, the forest staff had planted 706 acres of trees. The Forest Service plants trees to help reforest areas that might not recover naturally after trees are removed by natural events, such as wildland fire, or by timber harvest. In the Project records of decision, the forest staff estimated that they would plant trees on about 30,000 acres, including 18,939 acres in the areas that would be salvage harvested and about 11,000 acres that had been burned but not harvested. On the harvested acres, the forest staff plan to conduct reforestation work after the salvage sales are closed and brush disposal is completed.
The estimated 30,000 acres of planting will be reduced because the forest staff will not need to plant acres that were planned for salvage but will not be harvested. In addition to the reforestation activity identified in the Project records of decision, the forest staff replanted 8,935 acres through 2005 under a categorical exclusion to restore plantations—areas to be managed for future timber harvest—destroyed by the Biscuit Fire. This work was funded from appropriated funds and reforestation trust funds. In general, planting work that occurs in salvage sale areas is funded from sale revenues collected and deposited into the K-V Fund, while planting outside of sale areas is funded through the forest’s appropriated funds for vegetation management. For sale area reforestation, the K-V plans identified about $4.6 million worth of work to plant the harvested areas. About $2.7 million was deposited into the K-V Fund for planting activities, although the plans are not yet final and, according to forest staff, funds can be shifted to projects needing them until the plans are final. The Forest Service retains these funds for use in the salvage sale area and generally uses them within 5 years after the sale is closed to complete reforestation. During the 5 years after a sale is completed, forest staff inspect the areas to determine the extent of growth of planted seedlings and naturally grown seedlings. In some cases, the Forest Service determines that sufficient numbers of trees have grown in the area naturally, and the planned reforestation work will not be needed. According to agency guidance, if this occurs before the sale is administratively closed, the K-V funds can be used to fund other activities planned for the sale area, such as wildlife habitat restoration. As of December 2005, 307 miles of the 559 miles of road maintenance had been completed. 
Road maintenance activities, which include blading, grading, and gravel replacement on Forest Service roads, were conducted by the purchasers as part of the salvage sale contracts. The 559 miles identified in the records of decision include all the roads in the forest’s road system; however, according to forest engineers, not all roads will receive treatment because only the roads used by purchasers while harvesting the Biscuit Fire salvage sales are maintained under contract. Furthermore, some roads may receive two or more treatments because roads used for two or more sales are maintained under each contract. In addition to the road maintenance planned for the Project, 176 miles of roads were maintained by the purchasers during and after the hazard and deck sales—some of them the same roads that were treated under the Project sales. In addition to performing this maintenance, the purchasers made deposits into a road maintenance account. The forest staff will use these deposits to pay for work, such as asphalt resurfacing, on roads used by multiple purchasers. The deposits were collected in addition to the price paid for the salvage sale and were based, in part, on the volume of timber harvested from each sale. As of December 2005, more than $360,000 had been deposited in the road maintenance account to be used to maintain roads in the future. As of the end of 2005, the forest staff had also begun fuel management and wildlife rehabilitation activities identified in the Biscuit Fire Recovery Project records of decision, but completing these activities will depend on the Forest Service funding and scheduling the work over many years (see table 5). As of June 2006, the forest staff had not specified funding sources or work schedules for completing these activities. As shown in table 5, by the end of 2005, the forest staff had completed almost 15 miles of fuel management zones. 
These fuel management zones are concentrated along roads and ridges, as well as the perimeter of the Biscuit Fire. They are areas where vegetation or fuels—trees and brush that act as fuel for wildland fires—have been reduced to help create a space where firefighters can be more successful in suppressing future fires. Maintaining them requires periodic efforts to burn or cut down brush and trees that grow in the areas. The Project’s records of decision show that the forest staff plan to maintain about 285 miles of these fuel management zones in the matrix, late-successional reserves, and inventoried roadless areas, as shown in figure 6. The forest staff do not have a schedule for developing fuel management zones and have not requested additional funds for the work. According to a forest official, most of the work to date has been incidental to salvage sale work in areas where salvage sales touched on identified fuel management zone areas. The official explained that creating and maintaining the fuel management zones identified in the records of decision must be done in addition to fuel reduction work needed in areas adjoining developed or urban areas, called the wildland-urban interface. The official stated that funding priorities for fuel reduction work are concentrated in the wildland-urban interface because this is where human life and high-value property are most at risk. The forest staff have identified numerous projects in this area that need to be completed, and the fuel management zone work would not have as high a priority for funding. By the end of 2005, the forest staff had accomplished 715 acres of seeding—scattering grass seeds in meadows to increase the amount of vegetation and enhance native grasses—to improve wildlife habitat. In addition to seeding, wildlife restoration work can involve removing trees and shrubs to reduce their encroachment into grasslands and meadows. 
Such work provides forage for grazing wildlife, including deer and elk, and provides habitat for birds such as the purple martin. In the Project records of decision, the forest wildlife staff planned to accomplish 6,800 acres of seeding and 700 acres of meadow encroachment work. As with fuel management zones, the forest staff have not scheduled the work or requested additional appropriated funds to accomplish it. While the staff included about $1.3 million of projects in K-V plans for the Biscuit Fire salvage sales, salvage sale revenues were sufficient to fund only about one-third of the planned work. Forest staff stated that it is still possible for K-V funds to become available to fund wildlife projects if the funds are not used for reforestation or planting work; however, if K-V funds are not available, the wildlife projects planned for the Biscuit Fire area will compete for funding with other wildlife projects outside the fire area. The Project records of decision include a large-scale adaptive management study of postfire activities, such as salvage harvest and prescribed burns, and monitoring of the progress and results of the Project. These activities will be implemented over many years and depend on the completion of other activities. The forest staff are still planning these activities, and completing them will depend on developing schedules and identifying funding sources. Although the staff have developed a tentative schedule for the monitoring program, they have not developed a schedule for the adaptive management study. The study includes some activities that are part of the forest’s regular work but also includes work that would be desirable if funding can be identified. Similarly, while some monitoring work was intended to be conducted as part of the forest’s regular program work, several of the monitoring items have been designated as desirable depending on funding sources. 
At the time of our review, the forest staff had just begun planning for the large-scale adaptive management study included in the Project. The study includes a management experiment to learn about and adapt different management actions in postfire vegetation across a broad landscape. The objectives of the study are to compare the results of different postfire management strategies designed to restore and protect habitat in late-successional reserves and for old-growth-related species. With the help of Forest Service researchers, a study plan was written to design the study, identify comparable areas of the forest in which to conduct different treatments, design the vegetation treatments, and identify the monitoring needed for the projects. The treatments include salvage and replanting, natural recovery, and prescribed burns, which will set the areas on different pathways of recovery that will be monitored for significant differences. Completion of the study depends on the completion of other Project activities. The treatments cannot be completed unless other activities—namely the salvage sales and fuel management zones—are completed. In addition, one of the treatments included in the study involves prescribed burning, but the forest staff have not yet issued a record of decision for the prescribed burning activities studied in the EIS. Completion also depends on activities being conducted in the areas chosen for the study. The EIS identified 12 areas of about 3,000 acres each as locations for the study. At the time of our review, because the acres sold for salvage harvest had been reduced, only about half of the study areas were available. According to the researchers who designed the work, the study is still viable, despite the reduction in areas subject to different treatments. Implementing the study depends on the forest staff scheduling the activities identified as needed and determining which forest program will conduct and fund the work. 
The Project EIS outlined the study’s activities and identified those that the forest staff could undertake in their normal workload, as well as additional activities that should be accomplished but were not funded. The Pacific Northwest Research Station paid for and conducted initial work in the area by gathering remote sensing data of the burned area to establish a baseline for future assessments of vegetation conditions and of how the three different treatments may affect the vegetation differently. While there is still time to set up the study, the Pacific Northwest Research Station recommended that a committee or board be established to ensure that the needed activities are conducted. The forest officials had not taken action on this recommendation at the time of our review. The Biscuit Fire Recovery Project records of decision identify a number of monitoring activities, with three purposes: (1) to ensure that all aspects of the Project are implemented as intended, (2) to determine whether certain critical activities have the desired effect, and (3) to allow changes to occur if activities are found to have been implemented incorrectly or to have undesired effects. The records of decision and the final EIS identify some of the monitoring activities as required to meet policy or standards, while the final EIS identifies other monitoring activities as desired, that is, monitoring that would provide important information for future projects and administrative studies. 
At the time of our review, the forest staff reported that they had conducted some of the monitoring associated with salvage sales from the records of decision. This monitoring included planting sites and site preparation; the number of snags and down trees retained on salvage sale sites; activities to mitigate the effect of noxious weeds; marking used during salvage sales to ensure compliance with harvest requirements and marking guides; and activities to mitigate threats to threatened and endangered species, including specific aspects of the activities identified for protecting those species. According to forest staff, this monitoring is carried out by timber sale administrators as they visit and inspect sale sites. Their findings are included in inspection reports that are part of the timber sale contract files. The administrators can also determine whether best management practices have been followed for the timber sales, which include actions to reduce soil erosion and runoff from sale areas. According to forest staff, these practices can be separate activities or they can be part of the design of the timber sale. For example, a best management practice can include designing a timber sale to use cable or helicopter logging rather than tractor logging to reduce soil disturbance and erosion. For the other monitoring identified in the records of decision, the forest staff have drafted a plan that states whether each activity is required to meet policy or standards, suggests the frequency with which monitoring should take place, and outlines monitoring parameters and techniques. For example, the plan identifies the need to monitor noxious weed treatments after 1 to 5 years and after 5 to 10 years by using field visits to examine treated sites and determine whether the treatments have removed populations of weeds. The plan does not, however, identify which forest staff will conduct the monitoring or which forest funds will be used to accomplish the work. 
The Project records of decision stated that monitoring results would be made available to the public. The unique nature of the Biscuit Fire and the significance of the Project activities underscore the importance of this information for showing the Congress and the public the extent of recovery work accomplished and remaining to be done. However, monitoring the status of the Project’s activities is not included in the monitoring plan. Further, the forest staff do not report annual accomplishments for the Biscuit Fire separately from their other program accomplishments. The activities in the Project are being implemented by the forest’s regular programs, including Forest Products, Natural Resources, and others. Although a forest monitoring report for 2004 includes activities conducted in the Biscuit Fire area, forest staff did not comprehensively report on the status of activities in the Project, such as salvage sales, reforestation, road maintenance, wildlife habitat rehabilitation, fuel management zones, and others. Without such information, the forest staff cannot report on the status and results of the Project, as described in the records of decision. During the hazard and salvage sales conducted in areas burned by the Biscuit Fire, the Rogue River-Siskiyou National Forest staff received and investigated numerous complaints of logging in areas where it should not have occurred. The forest staff confirmed three instances of improper logging and determined that two were the result of errors on the part of the forest staff and one was an error by the timber purchaser. The forest staff attributed most of the other alleged cases of improper logging to disagreements over the definition of a riparian area and, after further review, dismissed them. Forest Service officials acknowledge that the confirmed cases of improper logging were serious errors and have taken steps to prevent such occurrences on future salvage sales. 
The forest staff acknowledge that mistakes resulted in improper logging in two cases, one in the Babyfoot Lake Botanical Area adjacent to the Fiddler salvage sale—one of the 12 salvage sales in the Biscuit Fire Recovery Project—and another in the Kalmiopsis Wilderness Area adjacent to the Bald Bear hazard sale. In both cases, forest officials identified actions to improve the marking of boundaries for timber and salvage sales. Babyfoot Lake is a 350-acre area within the Siskiyou National Forest designated as a botanical area because it contains several rare species, such as Brewer’s spruce, a spruce that grows in southwest Oregon and northern California. Botanical areas are specific management areas, designated in forest plans, that require natural management and allow researchers to study plants in their natural state. As such, timber harvest should not occur in the area. However, during the Fiddler salvage sale, about 16 acres of the botanical area adjacent to the sale were harvested. This incursion was discovered by members of a local environmental group in August 2005. A total of 292 tree stumps were counted within the area. According to the District Ranger in whose area the incident occurred and who investigated it, a series of occurrences led to the improper logging. During the fall of 2003 and spring of 2004, the Fiddler sale was being planned on maps and on the ground. In December 2003, the timber officer responsible for the Fiddler sale left the forest staff, and from that time through January 2005, the position was filled by two detailees from different ranger districts and by the District Ranger. In the fall of 2003, the Forest Service staff used maps and a global positioning system to paint and flag the boundary of the Fiddler sale units, including a unit near Babyfoot Lake. During the winter, the timber staff discovered that the botanical area was included in the sale unit on the map. 
The boundary that should have followed a ridge top next to a road was instead drawn farther down the hill in the botanical area. The map was corrected, and the timber staff determined that they would need to repaint and remove flags from the unit boundaries in the spring, when the weather improved and they could visit the site. In the spring of 2004, the boundary of the Fiddler sale units was repainted by helicopter—a new technique that was being tested on the Biscuit Fire areas—following the corrected boundary from the map. However, no one removed the flags and paint from the incorrect boundary, leaving two boundaries marked on the sale unit. The timber sale administrator—the staff person responsible for monitoring the sale units during the salvage operations—did not notice this discrepancy while reviewing the sale units just before the sale. During harvest operations in 2004, the timber sale administrator and the purchaser followed the flags and painted trees, not the helicopter-painted boundary, which was the correct one. The District Ranger determined that this was a mistake on the part of the timber staff and that communication among the timber staff and oversight of the salvage sales were insufficient. She stated that the staff were working quickly to plan sales and to prepare for sales as soon as the records of decision with an emergency situation determination were signed. The sales were sold 2 weeks after the records of decision were signed. The District Ranger stated that several simple actions were needed to avoid similar problems in the future. In a report to the Forest Supervisor, she stated that future sales should ensure that botanical areas are marked on the sale map and flagged to distinguish them from the sale boundaries. She further suggested that timber sale procedures include a checklist of items—such as botanical areas—for timber sale administrators’ reviews. 
In November 2005, the Department of Agriculture’s Office of Inspector General confirmed the error on the part of the forest staff and stated that the proposed solutions sounded reasonable. According to forest timber staff, the staff used an updated checklist to review the layout of the Mike’s Gulch sale held in June 2006. The sale units did not contain a botanical area but bordered a research natural area that was to be marked. The District Ranger also asked for an assessment of actions that could be taken to mitigate the damage caused by the salvage cutting and had already implemented some actions. For example, the Forest Service did not burn the slash in the area, as it normally would after a salvage harvest, leaving the trees to decay naturally. As of June 2006, the assessment had been completed and several actions had been recommended. For example, one of the recommendations is to expand the boundaries of the botanical area to include several areas of live Brewer’s spruce outside the current boundary; agency officials say this action would require the preparation of an environmental analysis or EIS and perhaps an amendment to the Siskiyou forest plan. In 2003, the Forest Service sold hazardous trees along roads in the Biscuit Fire area. One of the sales—the Bald Bear sale—occurred along a road on the boundary of the Kalmiopsis Wilderness Area. Although timber harvest and mechanized activities such as the use of chain saws are not allowed in wilderness areas, about 16 trees within the Kalmiopsis Wilderness Area were cut during the hazard sale. The District Ranger who investigated this incident found the following sequence of events: The road in the Bald Bear sale runs along the boundary of the Kalmiopsis Wilderness Area; the boundary follows a ridgeline, but where the terrain flattens, the boundary runs along the road. The boundary signs were burned and difficult to see. 
The timber staff who marked the boundary for the sale called the forest staff to verify the boundary and were told it was on the ridge. The timber staff followed a line through the flat area, rather than the road, and included a portion of wilderness in the sale area. The timber officer did not confirm at the site that the boundary was accurate, which was important given its proximity to the Kalmiopsis Wilderness Area. An outside researcher informed forest staff about the boundary error. The timber sale administrator directed the purchaser not to cut the area until the boundary could be checked; however, when the administrator arrived at the site, the trees had already been cut. The District Ranger stated that the logging was a result of mistakes on the part of both the forest staff and the purchaser. Specifically, she noted that checking the boundary was the timber officer’s responsibility and acknowledged that the timber staff did not discuss the proximity of the Kalmiopsis Wilderness Area with the purchaser. Either of these activities might have identified the mismarked boundary. In addition, she said the purchaser failed to control its workforce after receiving notification of the mistake. The District Ranger asked the timber staff to identify actions to prevent this problem in the future. She noted that the regional staff had issued a letter in 2004, prior to the incident, emphasizing the need to better identify forest boundaries. According to forest timber staff, in marking the Mike’s Gulch sale in June 2006, the forest staff used surveyors to identify the forest’s boundaries with private lands and planned to have the surveyor mark the boundaries of the research natural area. The District Ranger stated that she had her staff prepare a range of options to mitigate the damage caused by the improper logging and, as of June 2006, had decided to leave the trees and stumps untouched because they are near the road and not part of the pristine environment. 
During the Wafer sale—another of the 12 salvage sales from the Project records of decision—the purchaser cut 120 live, or “green,” trees in error. The purchaser caught the mistake and brought it to the attention of the Forest Service timber sale administrator. The timber sale administrator halted the sale and put the purchaser in breach of contract. The purchaser stated that the cutting crew was inexperienced and, therefore, made the mistake. The forest’s contracting office required the purchaser to pay $200 per tree, or $24,000, in penalties, and the green trees were left in the forest. This incident of improper logging was investigated by a Forest Service law enforcement officer. According to the law enforcement official, because the purchaser reported the improper logging, it is not likely that the purchaser was attempting to steal the green trees. In addition, the forest staff had already taken action in response to the improper logging by putting the purchaser in breach of contract. The sale contract clearly stated that all green trees were to be protected. However, according to Forest Service officials, accidental harvest of green trees can sometimes occur in large salvage sale operations. While timber sale administrators inspect sales periodically, they neither inspect the cutting operations on a day-to-day basis nor control the purchaser’s operations. In addition to these three incidents, Rogue River-Siskiyou National Forest officials received numerous reports of improper logging from local environmental groups that monitored the salvage sale operations. According to a forest official, timber sale administrators and other forest staff investigated these claims. The majority of these claims involved logging in riparian reserves, which are 174-foot buffers on each side of a stream or waterway that protect riparian habitat and water quality. Forest officials stated that the agency’s definition of a riparian area differs from the definition used by the environmental groups. 
The Forest Service defines a riparian area as a channel with some evidence of sediment having been moved, while the environmental groups identify a riparian area as a depression in which water may flow. In reviewing these areas, forest staff said they identified one riparian area that had been salvage harvested and should not have been. However, it is difficult to know when a stream appeared because, according to forest staff, runoff from precipitation is much higher after logging, and new “streams” are created. Also, during wet years, more streams are created from the increased runoff. Another claim of improper logging involved salvage harvesting in a botanical area. The same environmental group that discovered the Babyfoot Lake harvest reported to the Forest Service that logging from the Steed sale overlapped into the Sourgame Botanical Area. The forest staff investigated this incident and determined that the environmental group had used the larger of two boundaries identified as alternatives in the EIS for the Siskiyou forest plan. The record of decision for the plan chose the smaller area as the botanical area. The Biscuit Fire Recovery Project generated considerable public interest and controversy, particularly over treatment of the postfire landscape. With the Project’s salvage sales nearly complete, it is apparent that much less timber was sold and removed than anticipated, reducing the need for such work as brush disposal and reforestation. It remains to be seen how much of the other recovery work—wildlife habitat rehabilitation, fuel management zones, monitoring, and the adaptive management study—will be accomplished given the lack of specific funding and schedules. As the Project’s activities are implemented over the next several years, accountability for their accomplishment rests with the Rogue River-Siskiyou National Forest staff. 
One of the Project activities with potentially significant results is the proposed large-scale adaptive management study, which offers an opportunity to gather scientific information with broad implications for recovery actions and postfire salvage harvest elsewhere on Forest Service lands. Successful implementation of the study and other Project activities will take commitment on the part of the forest staff to coordinate the work over several years. In light of the size and unique nature of the Biscuit Fire, and continuing public interest in the recovery of the area, it is important that the forest staff communicate the results of the Project to the Congress and the public. The forest staff—and the Forest Service—recognize the importance of providing information on the Project’s status and results to the public but do not report results in a way that makes the information readily available. Regular tracking and reporting of the status of the Project’s activities and results are needed. To help keep the Congress and the public informed on the status of the Biscuit Fire Recovery Project and the significant research work on the postfire effects of salvage and nonsalvage management actions, we recommend that the Chief of the Forest Service direct the Rogue River-Siskiyou National Forest Supervisor and the Pacific Northwest Regional Forester to provide an annual public report on the status of the activities included in the Project. The report should provide an update on the status of work accomplished and still planned for each of the activities in the Biscuit Fire Recovery Project EIS and records of decision: fuel treatments, prescribed burning, salvage harvest, vegetation and wildlife restoration, roads and water quality, and the large-scale study. The agency should produce such reports until the Project is substantially complete. We provided the Departments of Agriculture and Justice with a draft of this report for review and comment. 
The Forest Service provided written comments on behalf of the Department of Agriculture (see app. II). The Department of Justice had no comments on the draft report. In its comments, the Forest Service said that the report provided a good view of the process, events, and Project through December 2005. The agency generally agreed with our recommendation for the issuance of an annual update on the status of Biscuit Fire recovery activities but suggested that the time period for producing the report be limited to the next 3- to 5-year period. We stated in the recommendation that the reports should be produced annually until the Project is complete, which may take 5 years or longer given the nature of some of the recovery activities. For this reason, we hesitate to set a specific time limit but believe there is value in providing the agency with some discretion about when to discontinue the report. Therefore, we revised the recommendation to state that the reports should be provided until the Project’s activities are substantially complete. The Forest Service also stated that an explanation of the litigation, controversies, and protests that have occurred since December 2005 would give readers an understanding of the complexities of trying to manage fire projects. The report describes the status of sales through 2006, the emergency situation determination used to expedite the sales, the effects of litigation on the sales, and delays in the inventoried roadless area sales. We believe this discussion is sufficiently descriptive of these events and, therefore, did not make any changes to the report in response to this comment. The Forest Service also said that the report does not make it clear that the planning processes and appeals do greatly reduce the final timber harvest volumes. 
While the planning process was a factor in the time taken to develop the EIS, we did not evaluate the effects of the process on timber volumes because doing so was not one of the objectives of this report. Also, the report does not discuss the appeals process because the Forest Service used an emergency situation determination, which eliminated the appeals process for 11 salvage sales. Finally, the Forest Service provided several clarifications of technical information that we incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 18 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Agriculture, the Attorney General of the United States, the Chief of the Forest Service, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
Our objectives were to determine (1) how the development of the Biscuit Fire Recovery Project compared with the Forest Service’s general approach to postfire recovery; (2) the status of the Biscuit Fire Recovery Project salvage sales and how the reported financial and economic results of the sales compared with the Forest Service’s initial estimates; (3) the status of other activities identified in the Biscuit Fire Recovery Project; and (4) the extent and cause of improper logging within the Biscuit Fire Recovery Project, as reported by the Forest Service, and the changes the agency made to prevent such occurrences in the future. To determine how the development of the Biscuit Fire Recovery Project compared with the Forest Service’s approach to postfire recovery efforts, we developed information on the (1) general approach used by the Forest Service to assess postfire conditions and identify rehabilitation and restoration projects and (2) detailed process used by the Rogue River-Siskiyou National Forest to develop the Biscuit Fire Recovery Project. To develop information on the general approach, we first reviewed available Forest Service guidance and directives on postfire management and the National Environmental Policy Act (NEPA). Because there is no final guidance on postfire rehabilitation and restoration activities, we reviewed guidance for the Pacific Northwest Region and a draft national strategy developed by the Interregional Ecosystem Management Coordination Group to describe the general postfire recovery process. We also interviewed Forest Service officials at headquarters, the Pacific Northwest Region, and the Rogue River-Siskiyou National Forest about the general approach. To develop the details of Project development, we reviewed meeting minutes of the Project’s interdisciplinary team and a forest advisory group during the development of the Project and its environmental impact statement (EIS) in 2003 and 2004. 
We also interviewed forest and regional staff involved in the development and review of the Project and EIS. To facilitate the interviews, we developed a time line of key events, which we provided to officials before the interviews. We also interviewed the key decision makers in the process—the Forest Supervisor, Regional Forester, Deputy Chief for the National Forest System, and Undersecretary and Deputy Undersecretary of Agriculture for Natural Resources and Environment—to determine their roles in the process and in the final records of decision for the Project. To determine the status of the Project’s salvage sales, we obtained and analyzed information on the sales proposed in the Project’s records of decision. We gathered sale data from the Forest Service’s Automated Timber Sale Accounting System, including sale name, acres sold, volume harvested, receipts, and receipts disposition. We also gathered this information for sales held prior to the issuance of the Project EIS—sales of hazard trees and of trees cut from fire lines during the active fighting of the Biscuit Fire. We gathered this information as of December 2005 to ensure that we captured volume harvested and receipts paid for timber harvested in the fall of 2005 but for which the financial data were recorded a month or two later. To determine whether the timber receipts data were reliable for our purposes, we interviewed Forest Service financial officials about the Timber Sale Accounting System and its operations and controls over data and data reliability, and we reviewed the system documentation. Through this process, we determined that the data are reliable for reporting the status of the Biscuit Fire salvage sales and receipts. To gather information on the Forest Service’s expenditures on the Project’s salvage sales, we had to identify the activities and budget line items related to salvage sales because the Forest Service does not report financial data on a sale-by-sale basis. 
We gathered information for fiscal years 2003 through 2005 because this was the period during which the Forest Service conducted work to plan and implement the Project and its salvage sales and because 2005 is the last fiscal year for which complete financial data are available. To identify what activities are associated with salvage sales, we reviewed the Forest Service timber sale preparation handbook that describes what activities to include in the financial analysis of a timber sale. We also interviewed Forest Service personnel about what activities and expenditures should be included in a full accounting for a timber sale, including a salvage harvest sale. Finally, we obtained and reviewed previous Forest Service reports that referred to the total cost of its timber sale program and reviewed the activities and expenditures included in those estimates. We then worked with the financial staff of the Rogue River-Siskiyou National Forest to identify the expenditures for a range of activities included in these reports: NEPA planning, timber sale preparation, timber sale administration, reforestation activities, timber stand improvement activities, and forest indirect expenditures. Most of these expenditures came from two budget line items—one for appropriated timber funds and one for the Salvage Sale Fund. We also included an estimate of regional and Washington Office expenditures. Because the Forest Service does not account for the costs of timber sales, we had no basis to allocate regional and Washington Office expenditures and, as a result, used the forest’s assessment rate for regional and Washington Office costs for the Salvage Sale Fund. The rate, 5.2 percent, was charged to all Salvage Sale Fund plans by the forest staff in fiscal years 2001 through 2005 to collect funding to pay for regional and Washington Office activities. 
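The allocation arithmetic above can be sketched briefly; only the 5.2 percent assessment rate comes from the report, and the expenditure figure below is a hypothetical amount used solely for illustration.

```python
# Sketch of the indirect-cost allocation described above. The 5.2 percent
# assessment rate is from the report; the expenditure amount below is a
# hypothetical figure used only for illustration.

ASSESSMENT_RATE = 0.052  # forest's rate for regional/Washington Office costs


def regional_wo_share(salvage_fund_expenditures: float) -> float:
    """Estimate regional and Washington Office costs by applying the
    assessment rate to Salvage Sale Fund expenditures."""
    return salvage_fund_expenditures * ASSESSMENT_RATE


# Hypothetical: $1,000,000 in Salvage Sale Fund expenditures
print(f"${regional_wo_share(1_000_000):,.0f}")  # prints $52,000
```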
Finally, because law enforcement and litigation are activities directly related to salvage sales, we obtained expenditures from the Forest Service’s law enforcement regional office located in Portland, Oregon, and from the Department of Agriculture’s Office of General Counsel and the Department of Justice’s Environment and Natural Resources Division for their work related to litigation and other legal services for the salvage sales. The law enforcement expenditures represent overtime and travel expenditures for officers who worked on the Biscuit Fire salvage sales; the expenditures for the Departments of Agriculture and Justice represent salaries for the attorneys involved in litigation and other legal services. To determine the reliability of the Forest Service data, we interviewed Forest Service financial officials responsible for the Foundation Financial Information System and the auditors responsible for reviewing the Forest Service’s annual financial statements to determine if there were any material weaknesses relevant to the data. We determined that there were none and that the data are reliable for our purpose of reporting Biscuit Fire salvage sale expenditures. We are relying on the reported expenditures of the Departments of Agriculture and Justice. We reviewed the Forest Service’s estimated financial and economic results for the proposed salvage sales in the Project EIS and discussed specific aspects of the estimates with the Forest Service’s Regional Economist, the primary official responsible for these analyses. We attempted to compare the financial results of the actual salvage sales with the Forest Service’s estimated financial results. However, because during the course of our analysis the Forest Service held two more salvage sales in the summer of 2006, the financial results—expenditures and receipts—of the sales available to date were incomplete. We also determined that there are methodological differences in the calculation of expenditures. 
We determined that the Forest Service does not report economic results, and we could not make the comparison of economic results and estimates, although such a comparison could be made if the appropriate analysis were conducted. We attempted to adjust the EIS estimates to make a comparison based only on the sales conducted through 2005 by disaggregating the EIS estimates by sale. The disaggregated results would have enabled us to use only the results of comparable EIS sales as the basis of comparison with the results of sales actually sold through 2005; however, we determined that the EIS estimates, which were based on broad averages across the land types, could not be disaggregated and attributed to individual sales. To determine the status of other recovery project activities, we interviewed forest staff responsible for the activities included in the records of decision and identified the sources of information available to document the status. Different program staff are responsible for conducting the activities in the Project, which include planting, seeding, road maintenance, fuel management zones, research, and monitoring activities. For activities other than research and monitoring, we compiled and summarized the work conducted through December 2005, reviewing contracts for planting work, accomplishment reports for brush disposal work and wildlife rehabilitation activities, and maps for fuel management zones. Where they were available, we reviewed plans for work to be accomplished in the future. We presented this information to the appropriate forest staff and confirmed the data with them. To determine the status of the landscape-scale research study, we interviewed the forest and Pacific Northwest Research Station officials who developed the research proposal in the EIS. The officials provided an update of the status, which we then confirmed with forest officials. 
Finally, we obtained a copy of the most recent monitoring schedule and discussed the monitoring program with the forest’s timber manager. To determine the extent and cause of reported improper logging, we obtained and reviewed Forest Service reports on the three incidents in the Babyfoot Lake Botanical Area, Kalmiopsis Wilderness, and Wafer sale to determine the facts of the incidents. We then reviewed an Office of Inspector General report on the Babyfoot Lake incident and two law enforcement reports on the wilderness and Wafer sale incidents to determine other views of the incidents. We visited the Babyfoot Lake site to view the correct boundary and the improperly harvested area. We interviewed Forest Service officials responsible for the day-to-day oversight and operations of timber sales, representatives of a local environmental group monitoring the salvage sales and responsible for discovering the Babyfoot Lake incident, and law enforcement and Office of Inspector General officials who reviewed the cases to determine the Forest Service’s response to the incidents. To determine the Forest Service’s response to other claims of improper harvest, we reviewed a file of letters and agency responses. We also reviewed reports from a third-party monitor who visited sale sites that had been harvested and viewed the results of operations. We performed our work in accordance with generally accepted government auditing standards from November 2005 through July 2006.

The following are GAO’s comments on the Forest Service’s letter, dated September 7, 2006.

1. We revised the report accordingly. We stated that the EIS is required rather than needed.
2. We revised the report accordingly.
3. We revised the report accordingly.
4. We revised the report accordingly.
5. We revised the report accordingly.
6. We revised the report accordingly.
7. 
The report describes the status of sales through 2006, the emergency situation determination used to expedite the sales, the effects of litigation on the sales, and delays in the inventoried roadless area sales. We believe this discussion is sufficiently descriptive of these events and, therefore, did not make any changes to the report in response to this comment. While the planning process was a factor in the time taken to develop the EIS, we did not evaluate the effects of the process on timber volumes because it was not one of the objectives of this report. Also, the report does not discuss the appeals process because the Forest Service used an emergency situation determination, which eliminated the appeals process for 11 salvage sales.

8. We disagree that the report should be limited to the next 3 to 5 years because some of the activities in the Project are likely to extend beyond that period of time. For this reason, we continue to believe that the reporting period should be based on the Project’s completion. We do believe there is value in providing the agency with some discretion about when to discontinue the reports. Therefore, we revised the recommendation to state that the reports should be provided until the Project’s activities are substantially complete.

In addition to the individual named above, David P. Bixler, Assistant Director; Susan Iott; Rich Johnson; Mehrzad Nadji; and Dawn Shorey made key contributions to this report. Joyce Evans, Lisa Knight, John Mingus, Cynthia Norris, Alison O’Neill, Kim Raheb, Jena Sinkfield, Jay Smale, and Gail Traynham also made important contributions to this report.
In 2002, the Biscuit Fire burned almost 500,000 acres of the Rogue River-Siskiyou National Forest in southwestern Oregon. The Biscuit Fire Recovery Project (Project), undertaken in the fire's wake, is one of the largest, most complex postfire recovery projects undertaken by the Forest Service. Considerable controversy exists over the Project and its salvage sales to harvest dead trees. GAO was asked to determine (1) how the Project compares with the Forest Service's general approach to postfire recovery, (2) the status of the Project's salvage sales and how the reported financial and economic results of the sales compare with initial estimates, (3) the status of other Project activities, and (4) the extent of reported improper logging and the agency's response. To answer these objectives, GAO reviewed Project environmental analysis documents, plans, and activity reports and interviewed agency officials. The Rogue River-Siskiyou National Forest staff followed the Forest Service's general approach to postfire recovery in developing the Biscuit Fire Recovery Project, but several unique circumstances affected the time taken and the alternatives it included. For example, the size of the burned area--and, subsequently, the size of the Project--complicated the environmental analysis and increased the time needed to complete and review it. Also, the regulations and guidance governing timber harvest and road building in the forest's inventoried roadless areas changed several times, in part due to litigation, affecting the amount of timber available for harvest. As of December 2005, the forest staff had nearly completed 12 salvage sales; however, incomplete sales and a lack of comparable economic data, among other things, make comparing the financial and economic results with the agency's initial estimates difficult. For fiscal years 2003 through 2005, the Forest Service and other agencies spent about $5 million on the sales and related activities. 
In the next several years, the Forest Service also plans to spend an additional $5.7 million to remove brush and reforest the sale areas. In return, the agency collected about $8.8 million from the sales. While the agency estimated that the salvage sales would generate about $19.6 million for restoration, 6,900 local jobs, and $240 million in regional economic activity, it is premature to compare these estimates with the results because the sales are not complete. The Forest Service will generate additional expenditures, revenues, and economic activity from two sales sold in the summer of 2006. Even when results for the completed sales are available, however, a comparison will be complicated by a lack of comparable financial and economic data. Through December 2005, the forest staff began work on most of the other activities identified in the Project, but completing them depends on the amount of salvage harvest, funding sources, and work schedules. For example, the amount of brush disposal work--estimated at 18,939 acres--will be reduced because the acres of salvage harvest have been reduced. Other activities, such as establishing fuel management zones to help fight future fires, depend on the Forest Service funding and scheduling the work over many years. In addition, a large-scale study and monitoring activities are still being planned and are not yet funded. Although the forest staff identified the importance of making Project results available to the public, they do not report on Project activities and results separately from those of other programs. During salvage harvest operations in 2004 and 2005, the Forest Service reported three incidents of improper logging and took action to prevent such occurrences in the future. Two of the incidents were caused by Forest Service errors in marking its boundaries. Forest staff have since developed procedures to better mark boundaries of sale areas. 
A third incident was caused by an error on the part of the company that purchased the sale; the company was fined $24,000, and the trees were left on the ground.
The prescreening of airline passengers who may pose a security risk before they board an aircraft is one of many layers of security intended to strengthen commercial aviation. In July 2004, the National Commission on Terrorist Attacks Upon the United States, also known as the 9/11 Commission, reported that the current system of matching passenger information to the No-Fly and Selectee lists needed improvements. The commission recommended, among other things, that watch-list matching be performed by the federal government rather than by air carriers. Consistent with this recommendation and as required by law, TSA has undertaken to develop a program—Secure Flight—to assume from air carriers the function of watch-list matching. Secure Flight is intended to eliminate inconsistencies in current passenger watch-list matching procedures conducted by air carriers and use a larger set of watch-list records when warranted, reduce the number of individuals who are misidentified as being on the No-Fly or Selectee list, reduce the risk of unauthorized disclosure of sensitive watch-list information, and integrate information from DHS’s redress process into watch-list matching so that individuals are less likely to be improperly or unfairly delayed or prohibited from boarding an aircraft. Statutory requirements govern the protection of personal information by federal agencies, including the use of air passengers’ information by Secure Flight. For example, the Privacy Act of 1974 places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The Privacy Act requires agencies to publish a notice—known as a System of Records Notice (SORN)—in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended “routine” use of the data, and procedures that individuals can use to review and correct personal information. 
Also, the E-Government Act of 2002 requires agencies to conduct Privacy Impact Assessments (PIA) that analyze how personal information is collected, stored, shared, and managed in a federal system. Agencies are required to make their PIAs publicly available if practicable. According to TSA, the agency developed and is implementing Secure Flight’s domestic watch-list matching function in 3 releases:

Release 1—Systems development and testing.

Release 2—First stages of parallel operations with airline operators during which both Secure Flight and air carriers perform watch-list matching.

Release 3—Continued parallel operations with airline operators and preparation for airline cutovers, in which Secure Flight will perform passenger watch-list matching for domestic flights.

Under the Secure Flight watch-list matching process (see fig. 1), air carriers submit passenger information, referred to as Secure Flight Passenger Data, electronically through a DHS router or eSecure Flight, a Web-based access system for air carriers that do not use automated reservation systems to send and receive the data. Secure Flight Passenger Data are matched automatically against watch-list records, with results provided to air carriers through a Boarding Pass Printing Result. Passengers are subject to three possible outcomes from the watch-list matching process: cleared to fly, selected for additional screening, or prohibited from flying. Individuals initially selected for additional screening and those prohibited from flying undergo additional review, which results in the final Boarding Pass Printing Result and may lead to law enforcement involvement. TSA is to use discretion to determine what constitutes a possible match between passenger information and a watch-list record, based on matching settings made in the system. 
The matching settings include (1) the relative importance of each piece of passenger information (e.g., name versus date of birth); (2) the numeric threshold over which a passenger will be flagged as a potential match (e.g., a scoring threshold of 95 would result in fewer matches than a scoring threshold of 85); and (3) the criteria used to determine whether an element of passenger information is a potential match to the watch list (e.g., the types of name variations or the date-of-birth range that the system considers a match). The Secure Flight matching system will use this information to assign each passenger record a numeric score that indicates its strength as a potential match to a watch-list record. Raising the scoring threshold would result in more names cleared and fewer names identified as possible matches, which would raise the risk of the subject of a watch-list record being allowed to board an airplane (false-negative matches). Conversely, lowering the scoring threshold would raise the risk of passengers being mistakenly matched to the watch list (false-positive matches). In October 2008, TSA issued the Secure Flight Final Rule, which specifies requirements for air carriers to follow as TSA implements and operates Secure Flight, including the collection of full name and date-of-birth information from airline passengers to facilitate watch-list matching. In late-January 2009, TSA began to assume the watch-list matching function for a limited number of domestic flights for one airline, and has since phased in additional flights and airlines. TSA plans to complete assumption of the watch-list matching function for all domestic flights in March 2010 and to then assume from U.S. Customs and Border Protection this watch-list-matching function for international flights departing to and from the United States. 
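The weighting, threshold, and match-criteria settings described above can be sketched as a simple scorer. The weights, threshold value, date-of-birth window, and token-overlap name similarity below are illustrative assumptions for the sketch, not Secure Flight's actual settings or matching logic.

```python
from datetime import date

# Illustrative weights, threshold, and date-of-birth window -- these are
# assumptions for the sketch, not Secure Flight's actual settings.
WEIGHTS = {"name": 70, "dob": 30}  # relative importance of each element
THRESHOLD = 85                     # scores at or above this are flagged
DOB_WINDOW_DAYS = 366              # how far apart dates of birth may be


def name_score(passenger: str, listed: str) -> float:
    """Crude token-overlap similarity, standing in for the fuzzy
    name-matching (name variations, transliterations) a real system uses."""
    a, b = set(passenger.lower().split()), set(listed.lower().split())
    return len(a & b) / max(len(a | b), 1)


def dob_score(passenger: date, listed: date) -> float:
    """1.0 if the dates fall within the configured window, else 0.0."""
    return 1.0 if abs((passenger - listed).days) <= DOB_WINDOW_DAYS else 0.0


def match_score(p_name: str, p_dob: date, l_name: str, l_dob: date) -> float:
    """Weighted numeric score indicating strength as a potential match."""
    return (WEIGHTS["name"] * name_score(p_name, l_name)
            + WEIGHTS["dob"] * dob_score(p_dob, l_dob))


def is_potential_match(p_name: str, p_dob: date, l_name: str, l_dob: date) -> bool:
    return match_score(p_name, p_dob, l_name, l_dob) >= THRESHOLD
```

In this sketch, an exact name and date-of-birth match scores 100 and is flagged, while a partial name match scores lower and is cleared at the 85 threshold; raising THRESHOLD clears more borderline scores (fewer false positives, more false negatives), mirroring the tradeoff described above.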
According to TSA, since fiscal year 2004, it has received approximately $300 million in appropriated funds for the development and implementation of the Secure Flight program. In addition to matching passenger information against terrorist watch-list records, TSA requires air carriers to prescreen passengers using the Computer-Assisted Passenger Prescreening System (CAPPS). Through CAPPS, air carriers compare data related to a passenger’s reservation and travel itinerary to a set of weighted characteristics and behaviors (CAPPS rules) that TSA has determined correlate closely with the characteristics and behaviors of terrorists. Passengers identified by CAPPS as exhibiting these characteristics—termed selectees—must undergo additional security screening. This system is separate from the Secure Flight watch-list matching process, and thus Secure Flight has no effect on CAPPS selection rates. In a January 2009 briefing to congressional staff, we reported that TSA had not demonstrated Secure Flight’s operational readiness and that the agency had generally not achieved 5 of the 10 statutory conditions (Conditions 3, 5, 6, 8, and 10), although DHS asserted that it had satisfied all 10 conditions. Since then, TSA has made progress in developing the Secure Flight program and meeting the requirements of the 10 conditions, and the activities completed to date and those planned reduce the risks associated with implementing the program. Table 2 shows the status of the 10 conditions as of April 2009. Condition 1 requires that a system of due process exist whereby aviation passengers determined to pose a threat who are either delayed or prohibited from boarding their scheduled flights by TSA may appeal such decisions and correct erroneous information contained in the Secure Flight program. TSA has generally achieved this condition. For the Secure Flight program, TSA plans to use the existing redress process that is managed by the DHS Traveler Redress Inquiry Program (TRIP). 
TRIP, which was established in February 2007, serves as the central processing point within DHS for travel-related redress inquiries. TRIP refers redress inquiries submitted by airline passengers to TSA’s Office of Transportation Security Redress (OTSR) for review. This process provides passengers who believe their travels have been adversely affected by a TSA screening process with an opportunity to be cleared if they are determined to be an incorrect match to watch-list records, or to appeal if they believe that they have been wrongly identified as the subject of a watch-list record. Specifically, air travelers who apply for redress and who TSA determines pose no threat to aviation security are added to a list that should automatically “clear” them and allow them to board an aircraft (the “cleared list”), thereby reducing any inconvenience experienced as a result of the watch-list matching process. After a review of the passenger’s redress application, if OTSR determines that an individual was, in fact, misidentified as being on the No-Fly or Selectee list, it will add the individual to the cleared list. If OTSR determines that an individual is actually on the No-Fly or Selectee list, it will refer the matter to the Terrorist Screening Center, which determines whether the individual is appropriately listed and should remain on the list or is wrongly assigned and should be removed from the list. Although Secure Flight will use the same redress process that is used by the current air carrier-run watch-list matching process, some aspects of the redress process for air travelers are to change as the program is implemented. For example, individuals who apply for redress are issued a redress number by TRIP that they will be able to submit during future domestic air travel reservations that will assist in the preclearing process before they arrive at the airport. 
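The cleared-list precheck described above can be sketched as a simple lookup that runs before watch-list matching. The data structures, function names, and cleared-list contents are illustrative assumptions for the sketch; they do not reflect TSA's actual systems.

```python
from typing import Optional

# Hypothetical cleared-list lookup. Redress numbers, names, and the
# cleared-list contents below are invented for illustration only.
CLEARED_LIST = {
    # redress_number: full name of a traveler DHS determined poses no threat
    "R123456": "JOHN A SMITH",
}


def precheck(redress_number: Optional[str], full_name: str) -> bool:
    """Return True if the passenger is precleared via a redress number
    on the cleared list; otherwise the reservation proceeds to normal
    watch-list matching."""
    if redress_number is None:
        return False
    return CLEARED_LIST.get(redress_number) == full_name.upper()


# A traveler who submits a valid redress number at booking is precleared:
print(precheck("R123456", "John A Smith"))  # prints True
# One without a redress number proceeds to normal watch-list matching:
print(precheck(None, "John A Smith"))       # prints False
```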
TSA expects this will reduce the likelihood of travel delays at check-in for those passengers who have been determined to pose no threat to aviation security. According to TSA officials, individuals who have applied for redress in the past and were placed on the cleared list will need to be informed of their new ability to use their redress number to preclear themselves under Secure Flight. These officials stated that they intend to send mailings to past redress applicants with information on this change. TSA has also coordinated with key stakeholders to identify and document shared redress processes and to clarify roles and responsibilities, consistent with relevant GAO guidance for coordination and documentation of internal controls. In addition, Secure Flight, TSA OTSR, and TSA’s Office of Intelligence (OI) have jointly produced guidance that clarifies how the entities will coordinate their respective roles in the redress process, consistent with GAO best practices on coordinating efforts across government stakeholders (see GAO/GGD/AIMD-99-69). TSA OI is responsible for disseminating the cleared list. The guidance clarifies the roles and responsibilities for each entity with respect to reviewing potential watch-list matches. Furthermore, TSA is developing performance measures to monitor the timeliness and accuracy of Secure Flight redress, as we recommended in February 2008. TRIP and OTSR’s performance goals are to process redress applications as quickly and as accurately as possible. In February 2008, we reported that TRIP and OTSR track only one redress performance measure, related to the timeliness of case completion. We further reported that by not measuring all key defined program objectives, TRIP and OTSR lack the information needed to oversee the performance of the redress program. 
We recommended that DHS and TSA reevaluate the redress performance measures and consider creating and implementing additional measures, consistent with best practices that among other things address all program goals, to include the accuracy of the redress process. In response to GAO’s recommendation, representatives from the TRIP office are participating in a Redress Timeliness Working Group, with other agencies involved in the watch-list redress process, to develop additional timeliness measures. According to DHS officials, the TRIP office has also established a quality assurance review process to improve the accuracy of redress application processing and will collect and report on these data. Secure Flight officials are developing additional performance measures to measure new processes that will be introduced once Secure Flight is operational, such as the efficacy of the system to preclear individuals who submit a redress number. Condition 2 requires that the underlying error rate of the government and private databases that will be used both to establish identity and assign a risk level to a passenger will not produce a large number of false-positives (mistakenly matched) that will result in a significant number of passengers being treated mistakenly or security resources being diverted. TSA has generally achieved this condition by taking a range of actions that should minimize the number of false-positive matches. For example, the Secure Flight Final Rule requires air carriers to (1) collect date-of-birth information from airline passengers and (2) be capable of collecting redress numbers from passengers. Collecting date-of-birth information should improve the system’s ability to correctly match passengers against watch-list records since each record contains a date of birth. TSA conducted a test in 2004 that concluded that the use of date-of-birth information would reduce the number of false-positive matches. 
In addition, airline passengers who have completed the redress process and are determined by DHS to not pose a threat to aviation security can submit their redress number when making a flight reservation. The submission of redress numbers by airline passengers should reduce the likelihood of passengers being mistakenly matched to watch-list records, which in turn should reduce the overall number of false-positive matches. TSA has established a performance measure and target for the system’s false-positive rate, which should allow the agency to track the extent to which it is minimizing false-positive matches and whether the rate at any point in time is consistent with the program’s goals. TSA officials stated that they tested the system’s false-positive performance during Secure Flight’s parallel testing with selected air carriers in January 2009 and found that the false-positive rate was consistent with the established target and program’s goals. Condition 3 requires TSA to demonstrate the efficacy and accuracy of the search tools used as part of Secure Flight and to perform stress testing on the Secure Flight system. We addressed efficacy and accuracy separately from stress testing because they require different activities and utilize different criteria.

Efficacy and Accuracy of the System

TSA has generally achieved the part of Condition 3 that requires TSA to demonstrate the efficacy and accuracy of the search tools used as part of Secure Flight. According to TSA, as a screening system, Secure Flight is designed to identify subjects of watch-list records without generating an unacceptable number of false-positive matches. To accomplish this goal, TSA officials stated that Secure Flight’s matching system and related search parameters were designed to identify potential matches to watch-list records if a passenger’s date of birth is within a defined range of the date of birth on a watch-list record. 
According to TSA officials, the matching system and related search parameters were designed based on TSA OI policy and in consultation with TSA OI, the Federal Bureau of Investigation, and others. TSA conducted a series of tests—using a simulated passenger list and a simulated watch list created by a contractor with expertise in watch-list matching—that jointly assessed the system’s false-negative and false-positive performance. However, in conducting these tests, the contractor used a wider date-of-birth matching range than TSA used in designing the Secure Flight matching system, which the contractor determined was appropriate to test the capabilities of a name-matching system. The tests showed that the Secure Flight system did not identify all of the simulated watch-list records that the contractor identified as matches to the watch list (the false-negative rate). Officials from TSA OI reviewed the test results and determined that the records not matched did not pose an unacceptable risk to aviation security. These officials further stated that increasing the date-of-birth range would unacceptably increase the number of false positives generated by the system. Moving forward, TSA is considering conducting periodic reviews of the Secure Flight system’s matching capabilities and results (i.e., false positives and false negatives) to determine whether the system is performing as intended. However, final decisions regarding whether to conduct such reviews have not been made. Relevant guidance on internal controls identifies the importance of ongoing monitoring of programs, documenting control activities, and establishing performance measures to assess performance over time. 
By periodically monitoring the system’s matching criteria as well as documenting and measuring any results to either (1) confirm that the system is producing effective and accurate matching results or (2) modify the settings as needed, TSA would be able to better assess whether the system is performing as intended. Without such activities in place, TSA will not be able to assess the system’s false-negative rate, which increases the risk of the system experiencing future performance shortfalls. Given the inverse relationship between false positives and false negatives—that is, an increase in one rate may lead to a decrease in the other rate—it is important to assess both rates concurrently to fully test the system’s matching performance. In our January 2009 briefing, we recommended that TSA periodically assess the performance of the Secure Flight system’s matching capabilities to determine whether the system is accurately matching watch-listed individuals while minimizing the number of false positives. TSA agreed with our recommendation. Separate from the efficacy and accuracy of Secure Flight search tools, a security concern exists. Specifically, passengers could attempt to provide fraudulent information when making an airline reservation to avoid detection. TSA officials stated that they are aware of this situation and are taking actions to mitigate it. We did not assess TSA's progress in taking actions to address this issue or the effectiveness of TSA’s efforts as part of this review. The second part of Condition 3 requires TSA to perform stress testing on the Secure Flight system. 
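The inverse relationship between false positives and false negatives noted above can be illustrated with a small threshold sweep. The scored records and threshold values below are synthetic assumptions for the sketch, not Secure Flight data.

```python
# Synthetic, labeled match scores: (score, truly_on_list). Illustrative only.
records = [(98, True), (91, True), (88, False), (86, True),
           (84, False), (82, True), (75, False), (60, False)]


def fp_fn(threshold: int) -> tuple:
    """Count false positives (flagged but not on the list) and false
    negatives (on the list but cleared) at a given scoring threshold."""
    fp = sum(1 for s, on_list in records if s >= threshold and not on_list)
    fn = sum(1 for s, on_list in records if s < threshold and on_list)
    return fp, fn


for t in (80, 85, 90, 95):
    fp, fn = fp_fn(t)
    print(f"threshold {t}: {fp} false positives, {fn} false negatives")
```

Sweeping the threshold upward drives false positives toward zero while false negatives climb, which is why assessing only one rate can mask a shortfall in the other.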
In our January 2009 briefing to the Senate and House Appropriations Committees’ Subcommittees on Homeland Security, we reported that TSA had generally not achieved this part of the condition because despite provisions for stress testing in Secure Flight test plans, such stress testing had not been performed at the time DHS certified that it had met the 10 statutory conditions, or prior to the completion of our audit work on December 8, 2008. However, TSA has since generally achieved this part of the condition. According to the Secure Flight Test and Evaluation Master Plan, the system was to be stress tested in order to assess performance when abnormal or extreme conditions are encountered, such as during periods of diminished resources or an extremely high number of users. Further, the Secure Flight Performance, Stress, and Load Test Plan states that the system’s performance, throughput, and capacity are to be stressed at a range beyond its defined performance parameters in order to find the operational bounds of the system. In lieu of stress testing, program officials stated that Release 2 performance testing included “limit testing” to determine if the system could operate within the limits of expected peak loads (i.e., defined performance requirements). According to the officials, this testing would provide a sufficient basis for predicting which system components would experience degraded performance and potential failure if these peak loads were exceeded. However, in our view, such “limit testing” does not constitute stress testing because it focuses on the system’s ability to meet defined performance requirements only, and does not stress the system beyond the requirements. Moreover, this “limit testing” did not meet the provisions for stress testing in TSA’s own Secure Flight test plans. 
Program officials agreed that the limit testing did not meet the provisions for stress testing in the test plans, and they revised program test plans and procedures for Release 3 to include stress testing. Beyond stress testing, our analysis at the time of our January 2009 briefing showed that TSA had not yet sufficiently conducted performance testing. According to the Secure Flight Test and Evaluation Master Plan, performance and load tests should be conducted to assess performance against varying operational conditions and configurations. Further, the Secure Flight Performance, Stress, and Load Test Plan states that each test should begin within a limited scope and build up to longer runs with a greater scope, periodically recording system performance results. These tests also should be performed using simulated interfaces under real-world conditions and employ several pass/fail conditions, including overall throughput. However, Secure Flight Release 2 performance testing was limited in scope because it did not include 10 of the 14 Secure Flight performance requirements. According to program officials, these 10 requirements were not tested because they were to be tested as part of Release 3 testing that was scheduled for December 2008. Moreover, 2 of the 10 untested performance requirements were directly relevant to stress testing. According to program officials, these 2 requirements were not tested as part of Release 2 because the subsystems supporting them were not ready at that time. Further, the performance testing addressed the 4 tested requirements only as isolated capabilities, and thus did not reflect real-world conditions and demands, such as each requirement’s competing demands for system resources. Program officials agreed and stated that they planned to employ real-world conditions in testing all performance requirements during Release 3 testing. 
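The distinction the report draws between "limit testing" (verifying behavior at the defined peak load) and stress testing (pushing beyond that load to find the system's operational bounds) can be sketched as follows. The system model, capacity, and requirement figures below are hypothetical, not Secure Flight values:

```python
def simulated_system(load: int, capacity: int = 1000) -> dict:
    """Hypothetical stand-in for the system under test.

    Up to its (hidden) capacity all requests succeed; beyond it,
    excess requests fail, a stand-in for real degradation.
    """
    if load <= capacity:
        return {"processed": load, "failed": 0}
    return {"processed": capacity, "failed": load - capacity}

def limit_test(peak_requirement: int) -> bool:
    """Limit testing: verify the system meets its *defined* peak load only."""
    return simulated_system(peak_requirement)["failed"] == 0

def stress_test(peak_requirement: int, step: int = 100, max_mult: int = 5) -> int:
    """Stress testing: ramp load *beyond* the requirement until failures
    appear, returning the last load level handled cleanly (the bound)."""
    load = peak_requirement
    while load <= peak_requirement * max_mult:
        if simulated_system(load)["failed"] > 0:
            return load - step
        load += step
    return load

# Hypothetical requirement of 800 requests/sec; true capacity is 1000.
print(limit_test(800))    # True: the defined peak is met
print(stress_test(800))   # 1000: only stress testing reveals the bound
```

As the sketch shows, a passing limit test says nothing about where degradation begins, which is why GAO concluded that limit testing alone could not satisfy the stress-testing provisions in TSA's own plans.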
In our January 2009 briefing, we recommended that TSA execute performance and stress tests in accordance with recently developed plans and procedures and report any limitations in the scope of the tests performed and shortfalls in meeting requirements to its oversight board, the DHS Investment Review Board. Since then, based on our analysis of updated performance, stress, and load test procedures and results, we found that TSA has now completed performance testing and significantly stress tested the vetting system portion of Secure Flight. For example, the stress testing demonstrated that the vetting system can process more than 10 names in 4 seconds, which is the system’s performance requirement. As a result of the performance and stress testing that TSA has recently conducted, we now consider this condition to be generally achieved and the related recommendation we made at our January 2009 briefing to be met. Condition 4 requires the Secretary of Homeland Security to establish an internal oversight board to monitor the manner in which the Secure Flight program is being developed and prepared. TSA has generally achieved this condition through the presence of five oversight entities that have met at key program intervals to monitor Secure Flight. In accordance with GAO’s Standards for Internal Control in the Federal Government, a system of internal controls should include, among other things, an organizational structure that establishes appropriate lines of authority, a process that tracks agency performance against key objectives, and ongoing monitoring activities to ensure that recommendations made were addressed. Consistent with these practices, the internal oversight entities monitoring the Secure Flight program have defined missions with established lines of authority, have met at key milestones to review program performance, and have made recommendations designed to strengthen Secure Flight’s development. 
Our review of a selection of these recommendations showed that the Secure Flight program had addressed them. The oversight entities for the Secure Flight program are the following: DHS Steering Committee, TSA Executive Oversight Board, DHS Investment Review Board (IRB), TSA IRB, and DHS Enterprise Architecture Board (EAB). The DHS Steering Committee and TSA Executive Oversight Board are informal oversight entities that were established to provide oversight and guidance to the Secure Flight program, including in the areas of funding and coordination with U.S. Customs and Border Protection (CBP) on technical issues. According to TSA officials, the DHS Steering Committee and TSA Executive Oversight Board do not have formalized approval requirements outlined in management directives. The DHS IRB, TSA IRB, and DHS EAB are formal entities that oversee DHS information technology projects and focus on ensuring that investments directly support missions and meet schedule, budget, and operational objectives. (App. III contains additional information on these oversight boards.) GAO has previously reported on oversight deficiencies related to the DHS IRB, such as the board’s failure to conduct required departmental reviews of major DHS investments (including the failure to review and approve a key Secure Flight requirements document). To address these deficiencies, GAO made a number of recommendations to DHS, such as ensuring that investment decisions are transparent and documented as required. DHS generally agreed with these recommendations. Moving forward, it will be critical for these oversight entities to actively monitor Secure Flight as it progresses through future phases of systems development and implementation and ensure that the recommendations we make in this report are addressed. 
Conditions 5 and 6 require TSA to build in sufficient operational safeguards to reduce the opportunities for abuse, and to ensure substantial security measures are in place to protect the Secure Flight system from unauthorized access by hackers and other intruders. TSA has generally achieved the statutory requirements related to systems information security based on, among other things, actions to mitigate high- and moderate-risk vulnerabilities associated with Release 3. As of the completion of our initial audit work on December 8, 2008, which we reported on at our January 2009 briefing, we identified deficiencies in TSA’s information security safeguards that increased the risk that the system would be vulnerable to abuse and unauthorized access from hackers and other intruders. Federal law, standards, and guidance identify the need to address information security throughout the life cycle of information systems. Accordingly, the guidance and standards specify a minimum set of security steps needed to effectively incorporate security into a system during its development. These steps include categorizing system impact, performing a risk assessment, and determining security control requirements for the system; documenting security requirements and controls and ensuring that they are designed, developed, tested, and implemented; performing tests and evaluations to ensure controls are working properly and effectively, and implementing remedial action plans to mitigate identified weaknesses; and certifying and accrediting the information system prior to operation. 
To its credit, TSA had performed several of these key security steps for Release 1, such as categorizing the system as high-impact, performing a risk assessment, and identifying and documenting the associated recommended security control requirements; preparing security documentation such as a system security plan and loading security requirements into the developer’s requirements management tool; testing and evaluating security controls for the Secure Flight system and incorporating identified weaknesses in remedial action plans; and conducting security certification and accreditation activities. However, as of December 8, 2008, TSA had not taken sufficient steps to ensure that operational safeguards and substantial security measures were fully implemented for Release 3 of Secure Flight. This is important because Release 3 is the version that is to be placed into production. Moreover, Release 3 provides for (1) a change in the Secure Flight operating environment from a single operational site with a “hot” backup site to dual processing sites where each site processes passenger data simultaneously, and (2) the eSecure Flight Web portal, which provides an alternative means for air carriers to submit passenger data to Secure Flight. While these changes could expose the Secure Flight program to security risks not previously identified, TSA had not completed key security activities to address these risks. Further, we found that TSA had not completed testing and evaluation of key security controls or performed disaster recovery tests for the Release 3 environment. These tests are important to ensure that the operational safeguards and security measures in the production version of the Secure Flight operating environment are effective, operate as intended, and appropriately mitigate risks. 
In addition, TSA had not updated or completed certain security documents for Release 3, such as its security plan, disaster recovery plan, security assessment report, and risk assessment, nor had it certified and accredited Release 3 of the Secure Flight environment it plans to put into production. Further, TSA had also not demonstrated that CBP had implemented adequate security controls over its hardware and software devices that interface with the Secure Flight system to ensure that Secure Flight data are not vulnerable to abuse and unauthorized access. Finally, TSA had not corrected 6 of 38 high- and moderate-risk vulnerabilities identified in Release 1 of the Secure Flight program. For example, TSA did not apply key security controls to its operating systems for the Secure Flight environment, which could then allow an attacker to view, change, or delete sensitive Secure Flight information. While TSA officials assert that they had mitigated 4 of the 6 uncorrected vulnerabilities, we determined the documentation provided was not sufficient to demonstrate that the vulnerabilities were mitigated. As a result of the security risks that existed as of December 8, 2008, we recommended that TSA take steps to complete its security testing and update key security documentation prior to initial operations. After our January 2009 briefing, TSA provided documentation showing that it had implemented or was in the process of implementing our recommendation. For example, TSA had completed security testing of the most recent release of Secure Flight (Release 3), updated security documents, certified and accredited Release 3, received an updated certification and accreditation decision from CBP for its interface with the Secure Flight program, and mitigated the high- and moderate-risk vulnerabilities related to Release 1. 
In addition, TSA had prepared plans of action and milestones (POA&Ms) for the 28 high-risk and 32 moderate-risk vulnerabilities it identified during security testing of Release 3. The POA&Ms stated that TSA would correct the high-risk vulnerabilities within 60 days and the moderate-risk vulnerabilities within 90 days. Based on these actions, we concluded that TSA had conditionally achieved this condition as of January 29, 2009. Further, after we submitted our draft report to DHS for formal agency comment on March 20, 2009, TSA provided us updated information that demonstrated that it had completed the actions discussed above. Based on our review of documentation provided by TSA on March 31, 2009, we concluded that TSA had mitigated all 60 high- and moderate-risk vulnerabilities associated with Release 3. Therefore, we concluded that TSA had generally achieved the statutory requirements related to systems information security and we consider the related recommendation to be met. Condition 7 requires TSA to adopt policies establishing effective oversight of the use and operation of the Secure Flight system. As of the completion of our initial audit work on December 8, 2008, TSA had generally achieved this condition, but we nevertheless identified opportunities for strengthening oversight and thus made a recommendation aimed at doing so. According to GAO’s best practices for internal control, effective oversight includes (1) the plans and procedures used to meet mission goals and objectives, and (2) activities that ensure the effectiveness and efficiency of operations, safeguard assets, prevent and detect errors and fraud, and provide reasonable assurance that a program is meeting its intended objectives. To its credit, TSA had finalized the vast majority of key documents related to the effective oversight of the use and operation of the system as of the completion of our initial audit work on December 8, 2008. 
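The 60- and 90-day remediation windows in the POA&Ms imply straightforward due-date arithmetic for tracking whether each vulnerability is corrected on time. A minimal illustrative tracker might look like the following; the field names and example findings are hypothetical, not drawn from TSA's actual POA&Ms:

```python
from datetime import date, timedelta

# Remediation windows stated in the POA&Ms: 60 days for high-risk and
# 90 days for moderate-risk vulnerabilities.
REMEDIATION_DAYS = {"high": 60, "moderate": 90}

def due_date(found: date, risk: str) -> date:
    """Deadline for correcting a finding, based on its risk level."""
    return found + timedelta(days=REMEDIATION_DAYS[risk])

def overdue(findings, as_of: date):
    """Return still-open findings past their remediation window."""
    return [f for f in findings
            if f["status"] == "open" and due_date(f["found"], f["risk"]) < as_of]

# Hypothetical findings for illustration only.
findings = [
    {"id": "V-01", "risk": "high",     "found": date(2009, 1, 15), "status": "open"},
    {"id": "V-02", "risk": "moderate", "found": date(2009, 1, 15), "status": "closed"},
]
print(overdue(findings, date(2009, 4, 1)))
```

In this sketch, a high-risk finding opened on January 15, 2009, falls due on March 16, 2009, so by April 1 any such finding still open would surface as overdue.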
For example, TSA had established performance measures to monitor and assess the effectiveness of the Secure Flight program; provided training to air carriers on transitioning their watch-list matching functions to TSA; developed a plan to oversee air carriers’ compliance with Secure Flight program requirements; and finalized key standard operating procedures. However, TSA had not yet finalized or updated all key program documents or completed necessary training, which was needed prior to the program beginning operations. Accordingly, we recommended that TSA finalize or update all key Secure Flight program documents—including the agreement with the Terrorist Screening Center for exchanging watch-list and passenger data and standard operating procedures—and complete training before the program begins operations. In response, TSA finalized its memorandum of understanding with the Terrorist Screening Center on December 30, 2008, and completed program training in January 2009. Based on these actions, we consider this recommendation to be met. Appendix IV contains additional information on Condition 7. Condition 8 requires TSA to take action to ensure that no specific privacy concerns remain with the technological architecture of the Secure Flight system. TSA has generally achieved the statutory requirement related to privacy based on progress the agency has made in establishing a privacy program as well as recent actions taken to address security vulnerabilities related to conditions 5 and 6. In our January 2009 briefing, we identified deficiencies in TSA’s information security safeguards that posed a risk to the confidentiality of the personally identifiable information maintained by the Secure Flight system. The Fair Information Practices, a set of principles first proposed in 1973 by a U.S. 
government advisory committee, are used with some variation by organizations to address privacy considerations in their business practices and are also the basis of privacy laws and related policies in many countries, including the United States, Australia, and New Zealand, as well as the European Union. The widely adopted version developed by the Organisation for Economic Co-operation and Development in 1980 is shown in table 3. At the time of our January 2009 briefing, TSA had established a variety of programmatic and technical controls for Secure Flight, including involving privacy experts in major aspects of Secure Flight development; developing privacy training for all Secure Flight staff and incident response procedures to address and contain privacy incidents; tracking privacy issues and performing analysis when significant privacy issues are identified; instituting access controls to ensure that data are not accidentally or maliciously altered or destroyed; filtering unauthorized data from incoming data to ensure collection is limited to predefined types of information; establishing standard formats for the transmission of personally identifiable information (PII) in order to reduce variance in data and improve data quality; and maintaining audit logs to track access to PII and document privacy incidents. In addition, TSA had issued required privacy notices—including a Privacy Impact Assessment and System of Records Notice—that meet legal requirements and address key privacy principles. These notices describe, among other things, the information that will be collected from passengers and airlines, the purpose of collection, and planned uses of the data. Through its privacy program, TSA had taken actions to implement most Fair Information Practice Principles. For information on the actions TSA has taken to generally address Fair Information Practices, see appendix V. 
However, at our January 2009 briefing, we also concluded that the weaknesses in Secure Flight’s security posture—as described in our earlier discussion of information security—created an increased risk that the confidentiality of the personally identifiable information maintained by the Secure Flight system could be compromised. As a result, we recommended that TSA take steps to complete its security testing and update key security documentation prior to initial operations. After our January 2009 briefing, TSA provided documentation that it had implemented or was in the process of implementing our recommendation related to information security and we concluded that this condition had been conditionally achieved as of January 29, 2009. Further, after we submitted our draft report to DHS for formal agency comment on March 20, 2009, TSA provided us updated information that demonstrated that it had completed the actions to implement our recommendation. Based on our review of documentation provided by TSA on March 31, 2009, we believe TSA has generally achieved the condition related to privacy. Condition 9 requires that TSA—pursuant to the requirements of section 44903(i)(2)(A) of title 49, United States Code—modify Secure Flight with respect to intrastate transportation to accommodate states with unique air transportation needs and passengers who might otherwise regularly trigger primary selectee status. TSA has generally achieved this condition. TSA is developing the Secure Flight program without incorporating the CAPPS rules and, therefore, Secure Flight will have no effect on CAPPS selection rates. According to TSA, the agency has modified the CAPPS rules to address air carriers operating in states with unique transportation needs and passengers who might otherwise regularly trigger primary selectee status. However, our review found that TSA lacked data on the effect of its modifications on air carrier selectee rates. 
We interviewed four air carriers to determine (1) the extent to which the CAPPS modifications and a related security amendment affected these carriers’ selectee rates and (2) whether TSA had conducted outreach to these carriers to assess the effect of the modifications and amendment on their selectee rates. The carriers provided mixed responses regarding whether the modifications and amendment affected their selectee rates. Further, three of the four air carriers stated that TSA had not contacted them to determine the effect of these initiatives. According to GAO best practices for internal control, agencies should ensure adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant effect on achieving goals. Without communications with air carriers, and given the agency’s lack of data on carrier selectee rates, TSA cannot ensure that the CAPPS modifications and related security amendment have their intended effect. In our January 2009 briefing, we recommended that TSA conduct outreach to air carriers—particularly carriers in states with unique transportation needs—to determine whether modifications to the CAPPS rules and security amendment have achieved their intended effect. TSA agreed with our recommendation. Condition 10 requires the existence of appropriate life-cycle cost estimates and expenditure and program plans. TSA has conditionally achieved this statutory requirement based on our review of its plan of action for developing appropriate cost and schedule estimates and other associated documents submitted after we provided a copy of our draft report to DHS for formal comment on March 20, 2009. The plan includes proposed activities and time frames for addressing weaknesses that we identified in the Secure Flight program’s cost estimate and schedule and was the basis for our reassessment of this condition. At the time of our January 2009 briefing, we reported that this condition had generally not been achieved. 
Specifically, while TSA had made improvements to its life-cycle cost estimate and schedule, neither was developed in accordance with key best practices outlined in our Cost Assessment Guide. Our research has identified several practices that are the basis for effective program cost estimating. We have issued guidance that associates these practices with four characteristics of a reliable cost estimate: comprehensive, well documented, accurate, and credible. The Office of Management and Budget (OMB) endorsed our guidance as being sufficient for meeting most cost and schedule estimating requirements. In addition, the best practices outlined in our guide closely match DHS’s own guidance for developing life-cycle cost estimates. Reliable cost and schedule estimates are critical to the success of a program, as they provide the basis for informed investment decision making, realistic budget formulation, program resourcing, meaningful progress measurement, proactive course correction, and accountability for results. As we reported at our January 2009 briefing, Secure Flight’s $1.36 billion Life Cycle Cost Estimate (LCCE) is well documented in that it clearly states the purpose, source, assumptions, and calculations. However, it is not comprehensive, fully accurate, or credible. As a result, the life-cycle cost estimate does not provide a meaningful baseline from which to track progress, hold TSA accountable, and provide a basis for sound investment decision making. In our January 2009 briefing, we recommended that DHS take actions to address these weaknesses. TSA agreed with our recommendation. The success of any program depends in part on having a reliable schedule specifying when the program’s set of work activities will occur, how long they will take, and how they relate to one another. 
As such, the schedule not only provides a road map for the systematic execution of a program, but it also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. As we reported in January 2009, the November 15, 2008, TSA’s Integrated Master Schedule (IMS) for Secure Flight—which provided supporting activities leading up to the program’s initial operations in January 2009—was a significant improvement over its February 2008 version. For example, after meeting with GAO and its schedule analysis consultant, TSA took actions to improve the Secure Flight schedule, including adding initial efforts for domestic and international cutover activities, removing constraints that kept its schedule rigid, and providing significant status updates. Our research has identified nine practices associated with effective schedule estimating, which we used to assess Secure Flight. These practices are: capturing key activities, sequencing key activities, establishing duration of key activities, assigning resources to key activities, integrating key activities horizontally and vertically, establishing critical path, identifying float time, performing a schedule risk analysis, and distributing reserves to high risk activities. In assessing the November 15, 2008, schedule against our best practices, we found that TSA had met one of the nine best practices, but five were only partially met and three were not met. Despite the improvements TSA made to its schedule for activities supporting initial operational capability, the remaining part of the schedule associated with implementing Secure Flight for domestic and international flights was represented as milestones rather than the detailed work required to meet milestones and events. As such, the schedule was more characteristic of a target deliverable plan than the work involved with TSA assuming the watch-list matching function. 
Moreover, likely program completion dates were not being driven by the schedule logic, but instead were being imposed by the program office in the form of target dates. This practice made it difficult for TSA to use the schedule to reflect the program’s status. Without fully employing all key scheduling practices, TSA cannot assure a sufficiently reliable basis for estimating costs, measuring progress, and forecasting slippages. In our January 2009 briefing, we recommended that DHS take actions to address these weaknesses. TSA agreed with our recommendation. In January 2009, TSA provided us with a new schedule, dated December 15, 2008. Our analysis showed that this new schedule still did not follow best practices, did not correct the deficiencies we previously identified, and therefore could not be used as a reliable management tool. For example, a majority of the scheduled activities did not have baseline dates that allow the schedule to be tracked against a plan moving forward. In addition, best practices require that a schedule identify the longest duration path through the sequenced list of key activities—known as the schedule’s critical path—where if any activity slips along this path, the entire program will be delayed. TSA’s updated schedule did not include a critical path, which prevents the program from understanding the effect of any delays. Further, updating the Secure Flight program’s schedule is important because of the significant cost and time that remain to be incurred to cut over all domestic flights to operations as planned by March 2010 and to develop, test, and deploy the functionality to assume watch-list matching for international flights. 
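The critical-path concept invoked here (the longest duration path through the activity network, whose activities have zero float) can be illustrated with a minimal forward- and backward-pass computation. The activity names and durations below are hypothetical, not taken from TSA's Integrated Master Schedule:

```python
# Hypothetical activity network: durations (in days) and predecessors,
# listed in topological order.
durations = {"design": 10, "build": 20, "test": 8, "train": 5, "deploy": 3}
preds = {"design": [], "build": ["design"], "test": ["build"],
         "train": ["design"], "deploy": ["test", "train"]}

def schedule(durations, preds):
    """Compute project finish, per-activity float, and the critical path."""
    order = list(durations)  # assumed already topologically sorted
    es, ef = {}, {}          # earliest start / finish (forward pass)
    for t in order:
        es[t] = max((ef[p] for p in preds[t]), default=0)
        ef[t] = es[t] + durations[t]
    finish = max(ef.values())
    ls, lf = {}, {}          # latest start / finish (backward pass)
    for t in reversed(order):
        succs = [s for s in order if t in preds[s]]
        lf[t] = min((ls[s] for s in succs), default=finish)
        ls[t] = lf[t] - durations[t]
    flt = {t: ls[t] - es[t] for t in order}           # slack per activity
    critical = [t for t in order if flt[t] == 0]      # zero-float chain
    return finish, flt, critical

finish, flt, critical = schedule(durations, preds)
print(finish, critical)   # project duration and the critical path
```

In this toy network, "train" carries 23 days of float while the design-build-test-deploy chain has none; a one-day slip anywhere on that chain delays the 41-day project by a day, which is precisely the insight a schedule without a computed critical path cannot provide.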
After we submitted a copy of our draft report to DHS for formal agency comment on March 20, 2009, TSA provided us its plan of action, dated April 2009, that details the steps the Secure Flight program management office intends to carry out to address weaknesses that we identified in the program’s cost and schedule estimates. With regard to the program’s cost estimate, TSA’s plan has established a timeline of activities that, if effectively implemented, should result in (1) a more detailed work breakdown structure that would define the work necessary to accomplish the program’s objectives; (2) the cost estimate and schedule work breakdown structures being aligned properly; (3) an independent cost estimate performed by a contractor; (4) an assessment of the life-cycle cost estimate by the DHS Cost Analysis Division; and (5) cost uncertainty and sensitivity analyses. In addition, TSA’s plan has estimated government costs that were originally missing from its cost estimate. According to TSA, these costs will be addressed in its life-cycle cost estimate documentation. With regard to the Secure Flight program’s schedule, TSA’s plan of action has established a timeline of activities that, if effectively implemented, should result in, most notably: (1) a sequenced and logical schedule that will accurately calculate float time and a critical path; (2) a fully resource-loaded schedule based on subject-matter-expert opinion that does not overburden resources; (3) a schedule that includes realistic activity duration estimates; and (4) a schedule risk analysis that will be used by TSA leadership to distribute reserves to high-risk activities. According to TSA, this revised schedule will forecast the completion date for the project based on logic, duration, and resource estimates rather than artificial date constraints. 
The plan of action provides the Secure Flight program management office with a clearer understanding of the steps that need to be taken to address our concerns regarding the Secure Flight life-cycle cost estimate and schedule. Based on our review of the plan and the associated documentation provided, we therefore now consider this legislative requirement to be conditionally achieved and the related recommendations that we made at our January 2009 briefing to be met. It should be noted that a significant level of effort is involved in completing these activities, yet the actions—with the exception of the independent cost estimate—are planned to be completed by June 5, 2009. According to TSA, the independent cost estimate is to be completed by October 2009. While TSA’s ability to fully meet the requirements of Condition 10 does not affect the Secure Flight system’s operational readiness, having reliable cost and schedule estimates allows for better insight into the management of program resources and time frames as the program is deployed. We will continue to assess TSA’s progress in carrying out the plan of action to address the weaknesses that we identified in the program’s cost estimate and schedule and fully satisfying this condition. Appendix VI contains additional information on our analysis of TSA’s efforts relative to GAO’s best practices. TSA has made significant progress in developing the Secure Flight program, and the activities completed to date, as well as planned, reduce the risks associated with implementing the program. However, TSA is still in the process of taking steps to address key activities related to testing the system’s watch-list matching capability and cost and schedule estimates, which should be completed to mitigate risks and to strengthen the management of the program. Until these activities are completed, TSA lacks adequate assurance that Secure Flight will fully achieve its desired purpose and operate as intended. 
Moreover, if these activities are not completed expeditiously, the program will be at an increased risk of cost, schedule, or performance shortfalls. Specifically, the system might not perform as intended in the future if its matching capabilities and results (that is, false positives and false negatives) are not periodically assessed. In addition, cost overruns and missed deadlines will likely occur if reliable benchmarks are not established for managing costs and the remaining schedule. In addition to the issues and risks we identified related to the Secure Flight program, our work revealed one other TSA prescreening-related issue that should be addressed to mitigate risks and ensure that passenger prescreening is working as intended. Specifically, the effect that modifications to the CAPPS rules and a related security amendment have had on air carriers—particularly carriers in states with unique transportation needs—will remain largely unknown unless TSA conducts outreach to these air carriers to determine the effect of these changes. We are recommending that the Secretary of Homeland Security take the following two actions: To mitigate future risks of performance shortfalls and strengthen management of the Secure Flight program moving forward, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration to periodically assess the performance of the Secure Flight system’s matching capabilities and results to determine whether the system is accurately matching watch-listed individuals while minimizing the number of false positives—consistent with the goals of the program; document how this assessment will be conducted and how its results will be measured; and use these results to determine whether the system settings should be modified. 
To ensure that passenger prescreening is working as intended, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration to conduct outreach to air carriers—particularly carriers in states with unique transportation needs—to determine whether modifications to the CAPPS rules and related security amendment have achieved their intended effect. We provided a draft of this report to DHS for review and comment on March 20, 2009. Subsequently, TSA provided us additional information related to several of the conditions, which resulted in a reassessment of the status of these conditions. Specifically, in the draft report that we provided for agency comment, we had concluded that Conditions 5 and 6 (information security) and Condition 8 (privacy) were conditionally achieved and Condition 10 (cost and schedule) was generally not achieved. Based on our review of the additional documentation provided by TSA, we are now concluding that Conditions 5, 6, and 8 are generally achieved and Condition 10 is conditionally achieved. In addition, in the draft report we provided to DHS for agency comment, we made five recommendations, four of which were related to the Secure Flight program. The fifth recommendation was related to Condition 9 (CAPPS rules), which is not related to the Secure Flight program. Based on the additional information that TSA provided during the agency comment period, we now consider three of these recommendations to be met (those related to information security, the cost estimate, and the program schedule). The other two recommendations have not been met and, therefore, are still included in this report (those related to monitoring the performance of the system’s matching capability and assessing the effect of modifications to the CAPPS rules). We provided our updated assessment to DHS and, on April 23, 2009, DHS provided us written comments, which are presented in appendix VII. 
In its comments, DHS stated that TSA concurred with our updated assessment. We are sending copies of this report to the appropriate congressional committees and other interested parties. We are also sending a copy to the Secretary of Homeland Security. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions about this report, please contact Cathleen A. Berrick at (202) 512-3404 or berrickc@gao.gov; Randolph C. Hite at (202) 512-3439 or hiter@gao.gov; or Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix VIII. In accordance with section 513 of the Department of Homeland Security Appropriations Act, 2008, our objective was to assess the extent to which the Transportation Security Administration (TSA) met the requirements of 10 statutory conditions related to the development and implementation of the Secure Flight program and the associated risks of any shortfalls in meeting the requirements. Specifically, the act requires the Secretary of Homeland Security to certify, and GAO to report, that the 10 statutory conditions have been successfully met before TSA implements or deploys the program on other than a test basis. Pursuant to the act, after the Department of Homeland Security (DHS) certified that it had satisfied all 10 conditions—which it did on September 24, 2008—we were required to report within 90 days on whether the 10 conditions had been successfully met. It further requires GAO to report periodically thereafter until it determines that all 10 conditions have been successfully met. 
Our overall methodology included (1) identifying key activities related to each condition; (2) identifying federal guidance and related best practices, if applicable, that are relevant to successfully meeting each condition (e.g., GAO’s Standards for Internal Control in the Federal Government); (3) analyzing whether TSA has demonstrated through verifiable analysis and documentation, as well as oral explanation, that the guidance has been followed and best practices have been met; and (4) assessing the risks associated with not fully following applicable guidance and meeting best practices. Based on our assessment, we categorized each condition as generally achieved, conditionally achieved, or generally not achieved. Generally achieved—TSA has demonstrated that it completed all key activities related to the condition in accordance with applicable federal guidelines and related best practices, which should reduce the risk of the program experiencing cost, schedule, or performance shortfalls. Conditionally achieved—TSA has demonstrated that it completed some key activities related to the condition in accordance with applicable federal guidelines and related best practices and has defined plans for completing remaining key activities that, if effectively implemented as planned, should result in reduced risk that the program will experience cost, schedule, or performance shortfalls. Generally not achieved—TSA has not demonstrated that it completed all key activities related to the condition in accordance with applicable federal guidelines and related best practices and does not have defined plans for completing the remaining activities, and the uncompleted activities result in an increased risk of the program experiencing cost, schedule, or performance shortfalls. In conducting this review, we worked constructively with TSA officials. 
We provided TSA with our criteria for assessing each of the 10 conditions and periodically met with TSA officials to discuss TSA’s progress and our observations. To meet our 90-day reporting requirement, we conducted audit work until December 8, 2008, which included assessing activities and documents that TSA completed after DHS certified that it had met the 10 conditions. We reported the initial results of our review to the mandated reporting committees in two restricted briefings, first on December 19, 2008, and then on January 7, 2009. Because we concluded that TSA had not successfully met all 10 conditions, we conducted additional work from January through April 2009, the results of which are also included in this report. Further, after we submitted a copy of our draft report to DHS for formal agency comment on March 20, 2009, TSA provided us additional information related to Conditions 5, 6, 8, and 10, which resulted in our reassessment of the status of these conditions. The report has been updated to include the additional information and reassessments. To assess Condition 1 (redress), we interviewed program officials and reviewed and assessed agency documentation to determine how, once Secure Flight becomes operational, the DHS redress process will be coordinated with the Secure Flight program, based upon GAO best practices for coordination, as well as whether the process was documented, consistent with GAO best practices on documenting internal controls. We also reviewed performance measures for the Secure Flight redress process as well as TSA’s progress in addressing a February 2008 GAO recommendation that DHS consider creating and implementing additional measures for its redress process.
To assess Condition 2 (minimizing false positives), we interviewed program and TSA Office of Intelligence (OI) officials and reviewed and assessed Secure Flight performance objectives, tests, and other relevant documentation to determine the extent to which TSA’s activities demonstrate that the Secure Flight system will minimize its false-positive rate. Additionally, we interviewed program and TSA OI officials and reviewed and assessed Secure Flight documentation to determine how the program established performance goals for its false-positive and false-negative rates. We also interviewed a representative from the contractor that designed a dataset that TSA used to test the efficacy and accuracy of Secure Flight’s matching system to discuss the methodology of that dataset. Our engagement team, which included a social science analyst with extensive research methodology experience and engineers with extensive experience in systems testing, reviewed the test methodologies for the appropriateness and logical structure of their design and implementation, any data limitations, and the validity of the results. Our review focused on steps TSA is taking to reduce false-positive matches produced by Secure Flight’s watch-list matching process, which is consistent with TSA’s interpretation of the requirements of this condition. We did not review the Terrorist Screening Center’s role in ensuring the quality of records in the Terrorist Screening Database (TSDB).
We also interviewed a representative from the contractor that designed a dataset that TSA used to test the efficacy and accuracy of Secure Flight’s matching system to discuss the methodology of that dataset. Our engagement team, which included a social science analyst with extensive research methodology experience and engineers with extensive experience in systems testing, reviewed the test methodologies for the appropriateness and logical structure of their design and implementation and the validity of the results. However, we did not assess the appropriateness of TSA’s definition of what should constitute a match to the watch list. We did not assess the accuracy of the system’s predictive assessment, as this is no longer applicable to the Secure Flight program given the change in its mission scope compared to its predecessor program CAPPS II (i.e., Secure Flight only includes comparing passenger information to watch-list records whereas CAPPS II was to perform different analyses and access additional data, including data from commercial databases, to classify passengers according to their level of risk). To assess the second part of Condition 3, stress testing, we reviewed Secure Flight documentation—including test plans, test procedures, and test results—and interviewed program officials to determine whether TSA has defined and managed system performance and stress requirements in a manner that is consistent with relevant guidance and standards. We also determined whether the testing that was performed included testing the performance of Secure Flight search tools under increasingly heavy workloads, demands, and conditions to identify points of failure. For example, in January 2009, we met with the Secure Flight development team and a program official to observe test results related to the 14 Secure Flight performance and stress requirements. We walked through each of the 14 requirements and observed actual test scenarios and results. 
To assess Condition 4 (internal oversight), we interviewed DHS and TSA program officials and reviewed and analyzed documentation related to various DHS and TSA oversight boards—the DHS and TSA Investment Review Boards, the DHS Enterprise Architecture Board, the TSA Executive Oversight Board, and the DHS Steering Committee—to identify the types of oversight provided to the Secure Flight program. We also reviewed agency documentation to determine whether the oversight entities met as intended and, in accordance with GAO’s Standards for Internal Control in the Federal Government, the extent to which the Secure Flight program has addressed a selection of recommendations and action items made by the oversight bodies. We evaluated oversight activities related to key milestones in the development of the Secure Flight system. In regard to Condition 7 (oversight of the system), for purposes of certification, TSA primarily defined effective oversight of the system in relation to information security. However, we assessed DHS’s oversight activities against a broader set of internal controls for managing the program, as outlined in GAO’s Standards for Internal Control in the Federal Government, to oversee the Secure Flight system during development and implementation. We interviewed Secure Flight program officials and reviewed agency documentation—including policies, standard operating procedures, and performance measures—to determine the extent to which policies and procedures addressed the management, use, and operation of the system. We also interviewed program officials at TSA’s Office of Security Operations to determine how TSA intends to oversee internal and external compliance with system security, privacy requirements, and other functional requirements. We did not assess the quality of documentation provided by TSA. Our methodology for assessing information security is outlined under Conditions 5 and 6. 
To assess Condition 8 (privacy), we analyzed legally required privacy documentation, including system-of-records notices and privacy impact assessments, and interviewed Secure Flight and designated TSA privacy officials to determine the completeness of privacy safeguards. In addition, we assessed available systems development documentation to determine the extent to which privacy protections have been addressed based on the Fair Information Practices. We also assessed whether key documentation had been finalized and key provisions, such as planned privacy protections, had been clearly determined. We reassessed the status of Condition 8 based on our review of documentation provided by TSA on March 31, 2009, showing that it had mitigated all high- and moderate-risk information security vulnerabilities associated with the Secure Flight program’s Release 3. To assess Condition 9 (CAPPS rules), we reviewed TSA documentation to identify modifications to the CAPPS rules and a related security program amendment to address air carriers operating in states with unique transportation needs and passengers who might otherwise regularly trigger primary selectee status. In addition, we interviewed TSA officials to determine the extent to which TSA assessed the effect of these activities on air carriers’ selectee rates—either through conducting tests or by communicating with and obtaining information from air carriers—in accordance with GAO best practices for coordinating with external stakeholders. We also interviewed officials from four air carriers to obtain their views regarding the effect of CAPPS changes on the air carriers’ selectee rates. These carriers were selected because they operate in states with unique transportation needs or have passengers who might otherwise regularly trigger primary selectee status as a result of CAPPS rules.
To assess Condition 10 (cost and schedule estimates), we reviewed the program’s life-cycle cost estimate, integrated master schedule, and other relevant agency documentation against best practices, including GAO’s Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. We also interviewed key program officials overseeing these activities and consulted with a scheduling expert to identify risks to the integrated master schedule. We reassessed the status of Condition 10, based on TSA’s plan of action provided to us on April 3, 2009. The Plan of Action, dated April 2009, details the steps the Secure Flight program management office intends to carry out to address weaknesses that we identified in the program’s cost and schedule estimates. Appendix VI contains additional information on our analysis of TSA’s efforts relative to GAO’s best practices. We conducted this performance audit from May 2008 to May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Details on TSA’s Testing of the Efficacy and Accuracy of Secure Flight’s Matching System (Condition 3) The Transportation Security Administration (TSA) hired a contractor with expertise in matching systems to construct a dataset against which to test the Secure Flight matching system and assess the system’s false-positive and false-negative performance. Given the inverse relationship between false positives and false negatives—that is, a decrease in one may lead to an increase in the other—it is important to assess both rates concurrently to fully test the system’s matching performance. 
The contractor developed the dataset specifically for Secure Flight using name-matching software and expert review by analysts and linguists. The dataset consisted of a passenger list and a watch list using name types that were consistent with those on the actual No-Fly and Selectee lists. Each record included a passenger name and date of birth. The passenger list consisted of about 12,000 records, of which nearly 1,500 were “seeded” records that represented matches to the simulated watch list. According to the contractor, the seeded records were plausible variations of passenger names and dates of birth based on the contractor’s analysis of real watch-list records. The passenger list was run through Secure Flight’s automated matching system to determine its ability to accurately match the passenger records against the simulated watch list. The system used name-matching criteria outlined in the TSA No-Fly List security directive and defined date-of-birth matching criteria that TSA officials stated were consistent with TSA Office of Intelligence policy. According to TSA, Secure Flight officials reviewed the test results to determine whether the system was accurately applying its matching criteria for passenger name and date of birth. TSA officials concluded that all matches and nonmatches made by the system were in accordance with these criteria. The test results for the system’s default matching rules showed that the system produced a number of false-negative matches—that is, of the passenger records deemed by the contractor to be matches to the watch list, Secure Flight did not match a number of those records. TSA officials stated that the false-negative rate in the test was primarily due to the Secure Flight system’s criteria for a date-of-birth match, which differed from the contractor’s criteria. TSA determined a criteria range for a date-of-birth match that was consistent with TSA Office of Intelligence policy.
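The scoring logic behind this kind of seeded-dataset test can be sketched in a few lines. The sketch below is an illustration only, not TSA’s or the contractor’s actual test harness; the function name and the identifier scheme are assumptions. It shows how false-positive and false-negative rates are derived concurrently from a single test run:

```python
def score_matcher(flagged_ids, seeded_ids, total_records):
    """Score a watch-list matcher against a seeded test list.

    flagged_ids:   record IDs the matching system flagged as watch-list hits
    seeded_ids:    record IDs known, by construction of the test data,
                   to be true matches to the simulated watch list
    total_records: size of the full passenger test list
    """
    flagged, seeded = set(flagged_ids), set(seeded_ids)
    false_positives = flagged - seeded   # flagged, but not true matches
    false_negatives = seeded - flagged   # true matches the system missed
    non_match_pool = total_records - len(seeded)  # records that should not flag
    return {
        "fp_rate": len(false_positives) / non_match_pool,
        "fn_rate": len(false_negatives) / len(seeded),
    }

# Illustrative numbers echoing the test described in this appendix:
# roughly 12,000 passenger records, of which about 1,500 were seeded matches.
rates = score_matcher(flagged_ids=range(100, 1550),
                      seeded_ids=range(150, 1650),
                      total_records=12000)
```

Because lowering one rate typically raises the other, deriving both rates from the same run is what makes the tradeoff visible when tuning match criteria such as the date-of-birth range.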
According to TSA officials, these matching criteria are consistent with Secure Flight’s responsibilities as a screening program—that is, the system must process high passenger volumes and quickly provide results to air carriers—and those responsibilities were considered when balancing the risk presented by the system’s false-positive and false-negative rates. The contractor’s date-of-birth criteria range, however, was wider than the range used by TSA, which the contractor stated was established based on expert analysis of an excerpt from the watch list. According to TSA officials, officials from TSA’s Office of Intelligence reviewed the test results and determined that the records identified as false negatives by the contractor—that is, the records that were matched by the contractor but not by the Secure Flight system—did not pose an unacceptable risk and should not have been flagged, and that these nonmatches were designated as such in accordance with Office of Intelligence policies and TSA’s No-Fly List security directive. These officials further stated that increasing the date-of-birth range would unacceptably increase the number of false positives generated by the system. TSA officials stated that the Secure Flight system’s matching settings could be reconfigured in the future to adjust the system’s false-positive and false-negative matching results should the need arise—for example, due to relevant intelligence information or improvements in the system’s matching software. Appendix III: Secure Flight’s Oversight Entities (Condition 4) Table 4 shows the entities responsible for overseeing the development of the Secure Flight program and a sample of activities that had been completed.
Appendix IV: TSA’s Activities Related to the Effective Oversight of System Use and Operation (Condition 7) The Transportation Security Administration (TSA) completed several internal control activities related to the management, use, and operation of the Secure Flight system. For example: TSA developed 21 standard operating procedures related to Secure Flight’s business processes. In addition, TSA incorporated additional programmatic procedures into various plans and manuals that will provide support for the program once it becomes operational. According to a Secure Flight official, all 21 standard operating procedures were finalized as of December 12, 2008. TSA released its Airline Operator Implementation Plan, which is a written procedure describing how and when an aircraft operator transmits passenger and nontraveler information to TSA. The plan amends an aircraft operator’s Aircraft Operator Standard Security Program to incorporate the requirements of the Secure Flight program. TSA finalized its plan to oversee air carrier compliance with Secure Flight’s policies and procedures. All domestic air carriers and foreign carriers covered under the Secure Flight rule will be required to comply with and implement requirements set forth in the final rule. The Airline Operator Implementation Plan and the Consolidated User Guide will provide air carriers with the requirements for compliance monitoring during the initial cutover phases. The Airline Implementation Team, which assists air carriers’ transition to Secure Flight, will ensure that air carriers are in compliance with program requirements prior to cutover. TSA developed performance measures to monitor and assess the effectiveness of the Secure Flight program, such as measures to address privacy regulations, training requirements, data quality and submission requirements, and the functioning of the Secure Flight matching engine. 
TSA will also use performance measures to ensure that air carriers are complying with Secure Flight data requirements. TSA developed written guidance for managing Secure Flight’s workforce, including a Comprehensive Training Plan that outlines training requirements for users and operators of the system and service centers. According to TSA officials, TSA completed programmatic training, which includes privacy and program-related training, for the entire Secure Flight workforce. TSA provided stakeholder training for covered U.S. air carriers and foreign air carriers on the Secure Flight program. This training, while not required of stakeholders, provided air carriers with information on changes to the Secure Flight program after the Final Rule was released, as well as technical and operational guidance as outlined in the Consolidated User Guide. The Airline Implementation, Communications, and Training Teams will support requests from air carriers for additional training throughout deployment. According to TSA, the agency planned to pilot its operational training, which is necessary for employees and contractors to effectively undertake their assigned responsibilities, during the week of December 8, 2008. TSA officials stated that piloting this training would allow them to make any needed updates to Secure Flight’s standard operating procedures. However, TSA officials said that updates to the standard operating procedures as a result of training were expected to be minimal and, in their view, would not affect initial cutover. Appendix V: TSA’s Actions to Address Fair Information Practices (Condition 8) The Transportation Security Administration (TSA) has taken actions that generally address the following Fair Information Practices. The Purpose Specification principle states that the purposes for a collection of personal information should be disclosed before collection and upon any change to that purpose.
TSA addressed this principle by issuing privacy notices that define a specific purpose for the collection of passenger information. According to TSA privacy notices, the purpose of the Secure Flight Program is to identify and prevent known or suspected terrorists from boarding aircraft or accessing sterile areas of airports, to better focus passenger and baggage screening efforts on persons likely to pose a threat to civil aviation, and to facilitate the secure and efficient travel of the public while protecting individuals’ privacy. The Data Quality principle states that personal information should be relevant to the purpose for which it is collected, and should be accurate, complete, and current as needed for that purpose. TSA addressed this principle through its planned use of the Department of Homeland Security’s (DHS) Traveler Redress Inquiry Program (TRIP), collecting information directly from passengers, and setting standard data formats. More specifically, TSA is planning to use DHS TRIP as a mechanism to correct erroneous data. TSA also believes that relying on passengers to provide their own name, date of birth, and gender will further help ensure the quality of the data collected. Moreover, TSA has developed a Consolidated User Guide that provides standard formats for air carriers to use when submitting passenger information to reduce variance and improve data quality. We reported previously that the consolidated terrorist watch list, elements of which are matched with passenger data to make Secure Flight screening decisions, has had data-quality issues. However, this database is administered by the Terrorist Screening Center and is not overseen by TSA. The Openness principle states that the public should be informed about privacy policies and practices, and that individuals should have a ready means of learning about the use of personal information. TSA addressed this principle by publishing and receiving comments on required privacy notices.
TSA has issued a Final Rule, Privacy Impact Assessment, and System of Records Notice that discuss the purposes, uses, and protections for passenger data, and outline which data elements are to be collected and from whom. TSA obtained and responded to public comments on its planned measures for protecting the data a passenger is required to provide. The Individual Participation principle states that individuals should have the following rights: to know about the collection of personal information, to access that information, to request correction, and to challenge the denial of those rights. TSA addressed this principle through its planned use of DHS TRIP and its Privacy Act access and correction process. As previously mentioned, TSA plans to use DHS TRIP in order to allow passengers to request correction of erroneous data. Passengers can also request access to the information that is maintained by Secure Flight through DHS’s Privacy Act request process. As permitted by the Privacy Act, TSA has claimed exemptions from the Privacy Act that limit what information individuals can access about themselves. For example, individuals will not be permitted to view information concerning whether they are in the Terrorist Screening Database (TSDB). However, TSA has stated that it may waive certain exemptions when disclosure would not adversely affect law enforcement or national security. The Use Limitation principle states that personal information should not be used for other than a specified purpose without consent of the individual or legal authority. TSA addressed this principle by identifying permitted disclosures of data and establishing mechanisms to ensure that disclosures are limited to those authorized. The Secure Flight system design requires that data owners initiate transfers of information, a provision that helps to assure that data is being used only for specified purposes. 
According to TSA privacy notices, the Secure Flight Records system is intended to be used to identify and protect against potential and actual threats to transportation security through watch-list matching against the No-Fly and Selectee components of the consolidated and integrated terrorist watch list known as the Terrorist Screening Database. TSA plans to allow other types of disclosures, as permitted by the Privacy Act. For example, TSA is permitted to share Secure Flight data with federal, state, local, tribal, territorial, foreign, or international agencies responsible for investigating, prosecuting, enforcing, or implementing a statute, rule, regulation, or order regarding a violation or potential violation of civil or criminal law or regulation; and international and foreign governmental authorities in accordance with law and formal or informal international agreements. The Collection Limitation principle states that the collection of personal information should be limited, should be obtained by lawful and fair means, and, where appropriate, with the knowledge or consent of the individual. TSA addressed this principle by conducting a data-element analysis, developing a data-retention schedule, and establishing technical controls to filter unauthorized data and purge data. TSA has performed a data-element analysis to determine the least amount of personal information needed to perform effective automated matching of passengers with individuals on the watch list. As a result, TSA has limited collection by only requiring that passengers provide their full name, gender, and date of birth. In addition, TSA requires air carriers to request other specific information, such as a passenger’s redress number, and to provide TSA with other specific information in the airline’s possession, such as the passenger’s passport information. TSA established a data-purging control to rid the system of data according to its data-retention schedule.
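A minimal sketch of such collection-limitation controls, filtering a submission down to an authorized field set and purging records that have aged past a retention cutoff, might look like the following. The field names and the retention period are illustrative assumptions, not Secure Flight’s actual data elements or its NARA-approved schedule:

```python
from datetime import datetime, timedelta

# Hypothetical authorized fields; the real data elements are specified
# in TSA's Consolidated User Guide.
AUTHORIZED_FIELDS = {"full_name", "date_of_birth", "gender",
                     "redress_number", "passport_number"}

def filter_unauthorized(record):
    """Keep only authorized data fields from an incoming submission."""
    return {k: v for k, v in record.items() if k in AUTHORIZED_FIELDS}

def purge_expired(records, retention_days, now=None):
    """Drop records whose receipt timestamp falls outside the retention window.

    Each record carries a '_received_at' timestamp set by the system on
    ingest (system metadata, not a collected passenger field).
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r["_received_at"] >= cutoff]
```

Filtering on ingest and purging on a fixed schedule are complementary: the first limits what enters the system, the second limits how long it stays.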
Further, TSA established technical controls to filter unauthorized data to ensure that collection is limited to authorized data fields. TSA is also developing a data-retention schedule, which was issued for public comment and is in accordance with the Terrorist Screening Center’s National Archives and Records Administration (NARA)-approved record-retention schedule for TSDB records. The Accountability principle states that individuals controlling the collection or use of personal information should be accountable for taking steps to ensure the implementation of these principles. TSA addressed the Accountability principle by designating a program privacy officer and a team of privacy experts working on various aspects of the Secure Flight program, and by planning to establish several oversight mechanisms: TSA implemented a system for tracking privacy issues that arise throughout the development and use of Secure Flight, and TSA is conducting follow-up analysis of significant privacy issues and providing resolution strategies for management consideration. TSA developed privacy rules of behavior, which require that individuals handling personally identifiable information (PII) only use it for a stated purpose. TSA is planning to maintain audit logs of system and user events to provide oversight of system activities, such as access to PII and transfer of PII in or out of the system. TSA is planning to issue periodic privacy compliance reports, intended to track and aggregate privacy concerns or incidents, but it has not finalized the reporting process. TSA developed general privacy training for all Secure Flight staff and is developing role-based privacy training for employees handling PII. While TSA has also taken steps related to the Security Safeguards principle, this principle had not been fully addressed at the time of our January 2009 briefing.
The Security Safeguards principle states that personal information should be protected with reasonable security safeguards against risks such as loss or unauthorized access, destruction, use, modification, or disclosure. TSA actions to address the Security Safeguards principle include planning to prevent unauthorized access to data stored in its system through technical controls including firewalls, intrusion detection, encryption, and other security methods. Although TSA had laid out a plan to protect the confidentiality of sensitive information through various security safeguards, our security review—discussed in more detail under conditions 5 and 6 on information security—identified weaknesses in Secure Flight’s security posture that create an increased risk that the confidentiality of the personally identifiable information maintained by the Secure Flight system could be compromised. As a result of the security risks we identified and reported on at our January 2009 briefing, and their corresponding effect on privacy, we recommended that TSA take steps to complete its security testing and update key security documentation prior to initial operations. TSA agreed with our recommendation. Since our January 2009 briefing, TSA provided documentation that it has implemented our recommendation related to information security. In light of these actions, we believe TSA has now generally achieved the condition related to privacy and we consider the related recommendation we made at the briefing to be met. 
Appendix VI: GAO Analyses of Secure Flight’s Life-Cycle Cost Estimate and Schedule against Best Practices (Condition 10) After submitting a copy of our draft report to the Department of Homeland Security (DHS) for formal agency comment on March 20, 2009, the Transportation Security Administration (TSA) provided us its plan of action, dated April 2009, that details the steps the Secure Flight program management office intends to carry out to address weaknesses that we identified in the program’s cost and schedule estimates. We reviewed TSA’s plan and associated documentation and reassessed the program against our Cost and Schedule Best Practices. The following tables show our original assessment and reassessment of TSA’s cost and schedule against our best practices. Table 5 summarizes the results of our analysis relative to the four characteristics of a reliable cost estimate based on information provided by TSA as of March 20, 2009. Table 6 summarizes the results of our reassessment of the Secure Flight program’s cost estimate relative to the four characteristics of a reliable cost estimate based on information provided by TSA as of April 3, 2009. Table 7 summarizes the results of our analysis relative to the nine schedule-estimating best practices based on information provided by TSA as of March 20, 2009. Table 8 summarizes the results of our reassessment of the Secure Flight program’s schedule relative to the nine schedule-estimating best practices based on information provided by TSA as of April 3, 2009. In addition to the contacts listed above, Idris Adjerid, David Alexander, Mathew Bader, Timothy Boatwright, John de Ferrari, Katherine Davis, Eric Erdman, Anthony Fernandez, Ed Glagola, Richard Hung, Jeff Jensen, Neela Lakhmani, Jason Lee, Thomas Lombardi, Sara Margraf, Vernetta Marquis, Victoria Miller, Daniel Patterson, David Plocher, Karen Richey, Karl Seifert, Maria Stattel, Margaret Vo, and Charles Vrabel made key contributions to this report.
To enhance aviation security, the Department of Homeland Security's (DHS) Transportation Security Administration (TSA) developed a program--known as Secure Flight--to assume from air carriers the function of matching passenger information against terrorist watch-list records. In accordance with a mandate in the Department of Homeland Security Appropriations Act, 2008, GAO's objective was to assess the extent to which TSA met the requirements of 10 statutory conditions related to the development of the Secure Flight program. GAO is required to review the program until all 10 conditions are met. In September 2008, DHS certified that it had satisfied all 10 conditions. To address this objective, GAO (1) identified key activities related to each of the 10 conditions; (2) identified federal guidance and best practices that are relevant to successfully meeting each condition; (3) analyzed whether TSA had demonstrated, through program documentation and oral explanation, that the guidance was followed and best practices were met; and (4) assessed the risks associated with not fully following applicable guidance and meeting best practices. As of April 2009, TSA had generally achieved 9 of the 10 statutory conditions related to the development of the Secure Flight program and had conditionally achieved 1 condition (TSA had defined plans, but had not completed all activities for this condition). Also, TSA's actions completed and those planned have reduced the risks associated with implementing the program. Although DHS asserted that TSA had satisfied all 10 conditions in September 2008, GAO completed its initial assessment in January 2009 and found that TSA had not demonstrated Secure Flight's operational readiness and that the agency had generally not achieved 5 of the 10 statutory conditions. Consistent with the statutory mandate, GAO continued to review the program and, in March 2009, provided a draft of this report to DHS for comment. 
In the draft report, GAO noted that TSA had made significant progress and had generally achieved 6 statutory conditions, conditionally achieved 3 conditions, and had generally not achieved 1 condition. After receiving the draft report, TSA took additional actions and provided GAO with documentation to demonstrate progress related to 4 conditions. Thus, GAO revised its assessment in this report. Related to the condition that addresses the efficacy and accuracy of search tools, TSA had not yet developed plans to periodically assess the performance of the Secure Flight system's name-matching capabilities, which would help ensure that the system is working as intended. GAO will continue to review the Secure Flight program until all 10 conditions are generally achieved.
Because of the importance of the national security space launch enterprise, we have been asked to look at many aspects of the EELV program over the last 10 years. Our work has examined management and oversight for EELV, as well as the “block buy” acquisition approach. The block buy approach, finalized in December 2013, commits the department to an acquisition that spans 5 years, in contrast with the prior practice of acquiring launch vehicles one or two at a time, with the aim of stabilizing the launch industrial base and enabling the government to achieve savings. Additionally, we have assessed the status of the launch vehicle certification process for new entrants. DOD and Congress have taken numerous actions to address our prior recommendations which have resulted in financial and oversight benefits. Highlights of our work over the years follow. We reported that when DOD moved the EELV program from the research and development phase to the sustainment phase in the previous year, DOD eliminated various reporting requirements that would have provided useful oversight to program officials and Congress. For example, the EELV program was no longer required to produce data that could have shed light on the effects the joint venture between Lockheed Martin and Boeing companies (later known as ULA) was having on the program, programmatic cost increases and causes, and other technical vulnerabilities that existed within the program. Furthermore, because the program was now in the sustainment phase, a new independent life-cycle cost estimate was not required for the program; as a result, DOD would not be able to rely on its estimate for making long-term investment planning decisions. According to DOD officials, the life-cycle cost estimate for the program at the time was not realistic. 
Our recommendations to strengthen oversight reporting gained attention in 2011 following concerns about rising program cost estimates, and at that time Congress required the Secretary of Defense to redesignate the EELV program as a major defense acquisition program, thereby removing it from the sustainment phase and reinstating previous reporting requirements. DOD also developed a new program cost estimate, which allows for greater oversight of the program for both Congress and DOD. We reported that the block buy acquisition approach may be based on incomplete information and although DOD was still gathering data as it finalized the new acquisition strategy, some critical knowledge gaps remained. Specifically, DOD analysis on the health of the U.S. launch industrial base was minimal, and officials continued to rely on contractor data and analyses in lieu of conducting independent analyses. Additionally, some subcontractor data needed to negotiate fair and reasonable prices were lacking, according to Defense Contract Audit Agency reports, and some data requirements were waived in 2007 in exchange for lower prices. DOD also had little insight into the sufficiency or excess of mission assurance activities, which comprise the many steps taken by the government and contractors to ensure launch success. Though the level and cost of mission and quality assurance employed today is sometimes criticized as excessive, it has also resulted in more than 80 consecutive successful launches. We also reported that the expected block buy may commit the government to buy more booster cores than it needs, and could result in a surplus of hardware requiring storage and potentially rework if stored for extended periods. Further, while DOD was gaining insight into the rise in some engine prices, expected at that time to increase dramatically, it was unclear how the knowledge DOD was gaining would inform the expected acquisition approach or subsequent negotiations.
We reported that broader issues existed as well, regarding the U.S. Government’s acquisition of, and future planning for, launch services— issues which we recommended be addressed, given that they could reduce launch costs and assure future launch requirements are met. For example, we recommended that federal agencies—like the Air Force, NRO, and NASA—more closely coordinate their acquisitions of launch services. Planning was also needed for technology development focused on the next generation of launch technologies, particularly with respect to engines, for which the United States remains partially reliant on foreign suppliers. Congress responded to our work by legislating that DOD explain how it would address the deficiencies we found. We reported that DOD had numerous efforts underway to address the knowledge gaps and data deficiencies identified in our 2011 report. Of the seven recommendations we made to the Secretary of Defense, two had been completely addressed, four were partially addressed and one had no action taken. That recommendation was aimed at bolstering planning for the next generation of launch technologies. Since GAO’s 2011 report, DOD had completed or obtained independent cost estimates for two EELV engines and completed a study of the liquid rocket engine industrial base. Officials from DOD, NASA, and the NRO initiated several assessments to obtain needed information, and worked closely to finalize new launch provider certification criteria for national security space launches. Conversely, we reported that more action was needed to ensure that launch mission assurance activities were not excessive, to identify opportunities to leverage the government’s buying power through increased efficiencies in launch acquisitions, and to strategically address longer-term technology investments. 
We reported on the status of DOD’s efforts to certify new entrants for EELV acquisitions. While potential new entrants stated that they were generally satisfied with the Air Force’s efforts to implement the process, they identified several challenges to certification, as well as perceived advantages afforded to the incumbent launch provider, ULA. For example, new entrants stated that they faced difficulty in securing enough launch opportunities to become certified. During our review, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed the Air Force to make available up to 14 launches for competition to new entrants, provided they demonstrate the required number of successful launches and provide the associated data in time to compete. Additionally, new entrants considered some Air Force requirements to be overly restrictive; for example, new entrants must be able to launch a minimum of 20,000 pounds to low earth orbit from specific Air Force launch sites (versus facilities the new entrants currently use). The Air Force stated that 20,000 pounds represents the low end of current EELV lift requirements, and that alternate launch sites are not equipped to support DOD’s national security space launches. Further, new entrants noted that the incumbent provider receives ongoing infrastructure and development funding from the government, an advantage not afforded to the new entrants, and that historical criteria for competition in the EELV program were more lenient. The Air Force acknowledged that criteria for competition are different, and reflective of the differences in the current acquisition environment.
We reported and testified that DOD’s new contract with ULA (sometimes referred to as the “block buy”) represented a significant effort on the part of DOD to negotiate better launch prices through its improved knowledge of contractor costs, and that DOD officials expected the new contract to realize significant savings, primarily through stable unit pricing for all launch vehicles. At the time of our review, DOD was leading the broader competition for up to 14 launches, expected to begin in fiscal year 2015. In advance of the upcoming competition, DOD was considering several approaches to how it would require competitive proposals to be structured. Our report did not recommend an approach. However, we identified the pros and cons of two different ends of the spectrum of choices, one being a commercial-like approach and the other being similar to the current approach (a combination of cost-plus and fixed price contracts). If DOD required offers be structured similar to the way DOD currently contracts with ULA, there could be benefits to DOD and ULA as both are familiar with this approach, but potential burdens to new entrants, which would have to change current business practices. Alternatively, if DOD implemented a commercial approach to the proposals, new entrants would potentially benefit from being able to maintain their current efficient business practices, but DOD could lose insight into contractor cost or pricing, as this type of data is not typically required by the Federal Acquisition Regulation under a commercial item acquisition. DOD could also require a combination of elements from each of these approaches, or develop new contract requirements for this competition. ULA’s Atlas 5 launch vehicle uses the RD-180 engine produced by the Russian company NPO Energomash. DOD and Congress are currently weighing the need to reduce U.S. reliance on rocket engines produced in Russia and the costs and benefits to produce a similar engine domestically. 
The RD-180 engine has performed extremely well for some of the nation’s most sensitive national security satellites, such as those used for missile warning and protected communications. Moreover, the RD-180’s manufacturing process is one that cannot be easily replicated. In addition, the most effective way to design a launch capability is to design all components in coordination to optimize capabilities needed to meet mission requirements. In other words, replacing the RD-180 could require the development of a new launch vehicle and potentially new launch infrastructure. Space launch vehicle development efforts are high risk from technical, programmatic, and oversight perspectives. The technical risk is inherent. For a variety of reasons, including the environment in which they must operate, a vehicle’s technologies and design are complex and there is little to no room for error in the fabrication and integration process. Managing the development process is complex for reasons that go well beyond technology and design. For instance, at the strategic level, because launch vehicle programs can span many years and be very costly, programs often face difficulties securing and sustaining funding commitments and support. At the program level, if the lines of communication between engineers, managers, and senior leaders are not clear, risks that pose significant threats could go unrecognized and unmitigated. If there are pressures to deliver a capability within a short period of time, programs may be incentivized to overlap development and production activities or delete tests, which could result in late discovery of significant technical problems that require more money and ultimately much more time to address. For these reasons, it is imperative that any future development effort adopt disciplined practices and lessons learned from past programs. I would like to highlight a few practices that would especially benefit a launch vehicle development effort.
First, decisions on what type of new program to pursue should be made with a government-wide and long-term perspective. Our prior work has shown that defense and civilian government agencies together expect to require significant funding, nearly $44 billion in then-year dollars (that factor in anticipated future inflation), for launch-related activities from fiscal years 2014 through 2018. At the same time, our past work has found that launch acquisitions and activities have not been well coordinated, though DOD and NASA have since made improvements. Concerns have also been raised in various studies about the lack of strategic planning and investment for future launch technologies. Further, the industry is at a crossroads. For example, the government has a decreased requirement for solid rocket motors, yet for strategic reasons some amount of capability needs to be sustained and exercised. The emergence of Space Exploration Technologies Corp. (SpaceX) and other vendors that can potentially compete for launch acquisitions is another trend that benefits from coordination and planning that takes a government-wide perspective. The bottom line is that any new launch vehicle effort is likely to have effects that reach beyond DOD and the EELV program and should be carefully considered in a long-term, government-wide context. Second, requirements and resources (for example, time, money, and people) need to be matched at program start. This is the first of three key knowledge points we have identified as best practices. In the past, we have found that recent launch programs, such as NASA’s Constellation program and Commercial Crew Program, have not had sufficient funding to match demanding requirements. Funding gaps can cause programs to delay or delete important activities and thereby increase risks and can limit the extent to which competition can be sustained. Realistic cost estimates and assessments of technical risk are particularly important at program start.
Space programs have historically been optimistic in estimating costs (although recently DOD and NASA have been making strides to produce more realistic estimates). The commitment to more realistic, higher confidence cost estimates would be a great benefit to any new launch vehicle development program and enable Congress to ensure its commitment is based on sound knowledge. We have also found that imposing overly ambitious deadlines can cause an array of problems. For instance, they may force programs to overlap design activities with testing and production. The many setbacks experienced by the Missile Defense Agency’s ground-based midcourse defense system, for example, are rooted in schedule pressures that drove concurrent development. Even if the need for a new engine is determined to be compelling, the government is better off allowing adequate time for disciplined engineering processes to be followed. Third, the program itself should adopt knowledge-based practices during execution. The program should also use quantifiable data and demonstrable knowledge to make go/no-go decisions, covering critical facets of the program such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers. Our work on the second and third knowledge points during execution (design stability and production process maturity) has tied the use of such metrics to improved outcomes. In addition, the program should place a high priority on quality, for example, holding suppliers accountable to deliver high-quality parts for their products through such activities as regular supplier audits and performance evaluations of quality and delivery, among other things. Prior to EELV, DOD experienced a string of launch failures in the 1990s due in large part to quality problems. This concludes my statement. I am happy to answer questions related to our work on EELV and acquisition best practices. 
For questions about this statement, please contact Cristina Chaplain at (202) 512-4841, or at chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Art Gallegos, Assistant Director; Pete Anderson, Claire Buck, Erin Cohen, Laura Hook, and John Krump. NASA: Actions Needed to Improve Transparency and Assess Long-Term Affordability of Human Exploration Programs. GAO-14-385. Washington, D.C.: May 8, 2014. Missile Defense: Mixed Progress in Achieving Acquisition Goals and Improving Accountability. GAO-14-351. Washington, D.C.: April 1, 2014. Evolved Expendable Launch Vehicle: Introducing Competition into National Security Space Launch Acquisitions. GAO-14-259T. Washington, D.C.: March 5, 2014. The Air Force’s Evolved Expendable Launch Vehicle Competitive Procurement. GAO-14-377R. Washington, D.C.: March 4, 2014. Defense and Civilian Agencies Request Significant Funding for Launch- Related Activities. GAO-13-802R. Washington, D.C.: September 9, 2013. Space: Launch Services New Entrant Certification Guide. GAO-13-317R. Washington, D.C.: February 7, 2013. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. National Aeronautics and Space Administration: Acquisition Approach for Commercial Crew Transportation Includes Good Practices, but Faces Significant Challenges. GAO-12-282. Washington, D.C.: December 15, 2011. Evolved Expendable Launch Vehicle: DOD Needs to Ensure New Acquisition Strategy Is Based on Sufficient Information. GAO-11-641. Washington, D.C.: September 15, 2011. NASA: Constellation Program Cost and Schedule Will Remain Uncertain Until a Sound Business Case Is Established. GAO-09-844. Washington, D.C.: August 26, 2009. 
Space Acquisitions: Uncertainties in the Evolved Expendable Launch Vehicle Program Pose Management and Oversight Challenges. GAO-08-1039. Washington, D.C.: September 26, 2008. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The EELV program is the primary provider of launch vehicles for U.S. military and intelligence satellites. DOD expects to spend about $9.5 billion over the next 5 years acquiring launch hardware and services through the program, during which time it will also be working to certify new launch providers. This investment represents a significant amount of what the entire U.S. government expects to spend on launch activities—including new development, acquisition of launch hardware and services, and operations and maintenance of launch ranges—for the same period. The United Launch Alliance (ULA) is currently the sole provider of launch services through the EELV program. However, DOD, the National Aeronautics and Space Administration (NASA), and the National Reconnaissance Office (NRO) are working to certify new launch providers who can compete with ULA for launch contracts. GAO was asked to discuss past work related to the EELV program, as well as best practices for acquiring new launch capabilities, as the Congress is currently weighing the need to reduce our reliance on rocket engines produced in Russia. GAO has reported extensively on the Evolved Expendable Launch Vehicle (EELV) program in the past. In 2008, GAO reported that when the Department of Defense (DOD) moved the EELV program from the research and development phase to the sustainment phase in the previous year, DOD eliminated various reporting requirements that would have provided useful oversight to program officials and the Congress. In 2011, GAO reported that the block buy acquisition approach may be based on incomplete information and although DOD was still gathering data as it finalized the new acquisition strategy, some critical knowledge gaps remained.
In 2012, GAO reported that DOD had numerous efforts under way to address the knowledge gaps and data deficiencies identified in the 2011 GAO report, and found that two of GAO’s seven recommendations had been completely addressed, four partially addressed, and one had no action taken. In 2013, GAO reported on the status of DOD's efforts to certify new entrants for EELV acquisitions. While potential new entrants stated that they were generally satisfied with the Air Force’s efforts to implement the process, they identified several challenges to certification, as well as perceived advantages afforded to the incumbent launch provider. In 2014, GAO reported and testified that DOD's new contract with ULA (sometimes referred to as the “block buy”) represented a significant effort on the part of DOD to negotiate better launch prices through improved knowledge of contractor costs. DOD officials expect the new contract to realize significant savings, primarily through stable unit pricing for all launch vehicles. Space launch vehicle development efforts are high risk from technical, programmatic, and oversight perspectives. It is imperative that any future development effort adopt disciplined practices and lessons learned from past programs. Practices that would especially benefit a launch vehicle development effort include the following:

Decisions on what type of new program to pursue should be made with a government-wide and long-term perspective.

Requirements and resources (for example, time, money, and people) need to be matched.

The EELV program itself should adopt knowledge-based practices.
The four military services stockpile in their retail and wholesale inventories conventional ammunition, explosives, and missiles (hereafter referred to as ammunition) valued at about $80 billion as of September 30, 1994. About $58 billion of this ammunition is classified as usable or serviceable. Serviceable ammunition valued at about $34 billion is owned, stored, and managed by the services (retail stocks). The remaining serviceable ammunition, valued at $24 billion, is owned by the services but stored under Army management to ensure that a sufficient supply is available to meet needs for peacetime training and for war (wholesale stocks). Including the retail stocks, the amount of ammunition stored is over 5 million tons, which if loaded into railway cars would stretch over 800 miles, about the distance from Washington, D.C., to Orlando, Florida. Under current guidance, the services must maintain enough ammunition to support forces fighting in two nearly simultaneous major regional conflicts. This requirement represents a change in national strategy dictated by international developments and a major reduction in U.S. forces. A 1993 study directed by the Joint Ordnance Commanders Group found that the changes had seriously affected stockpile operations and readiness. Each service determines the types and quantities of ammunition it needs to meet requirements for war reserves and training. The requirements are based on the national military strategy, which requires the services to be capable of fighting two major regional conflicts. The Defense Planning Guidance gives general direction to the services and planning factors for the conduct of military operations under the strategy. Each service is to use the Department of Defense’s (DOD) capabilities-based munitions requirements process to establish its munitions requirements.
Under this intricate process, the services determine their requirements based on the operational objectives of the combatant commanders in chief against potential threats. The requirements determination process also considers the services’ logistics capabilities and the need for sufficient ammunition to remain after an operation or conflict for future contingencies. Each service must maintain enough ammunition to meet all those requirements. The services assess the combination of inventories at both wholesale and retail levels and in the procurement pipeline to determine whether they have sufficient ammunition to meet requirements for combat, strategic readiness, residual readiness, training, and testing. In 1977, the Army became the single manager for conventional ammunition, assuming responsibility for the storage, management, and disposal of wholesale inventories of ammunition and explosives for all the services. As of September 30, 1995, this stockpile consisted of 3 million tons of ammunition stored at nine depots, two plants, and one arsenal (see fig. 1.1), comprising in all 37.8 million square feet of storage space. The services own 80 percent of the total tonnage of ammunition stored by the single manager. The Army owns the largest amount, 43 percent, followed by the Air Force with 17 percent, the Navy with 13 percent, and the Marine Corps with 7 percent. As the manager of the wholesale ammunition stockpile, the Army undertakes all the management functions—distribution, storage, inventorying, surveillance, maintenance, and disposal (see table 1.1). The Army’s effectiveness in performing these functions determines the stockpile’s readiness. During the 1980s, ammunition storage was generally stable. In 1985, with 55 to 60 percent of the storage space occupied, the stockpile held about 2 million tons of ammunition. Most of the stockpile consisted of large lots, which optimized space and facilitated economical surveillance and inventories.
However, in 1990 and 1991, world politics changed significantly as the Soviet Union collapsed. As a result of this event and other worldwide changes, the United States shifted from preparing for a global war to preparing for regional conflicts and crises, and a general reshaping of military resources and budgets began. First, four major Army storage installations were closed or realigned, which reduced the ammunition stockpile’s storage capacity from 36 million to 30 million square feet. Second, because of overall reductions in the budget, the single manager decided to significantly decrease its inventorying of the wholesale stockpile. Third, massive amounts of ammunition were returned from overseas: (1) prepositioned ammunition from Europe, as U.S. forces stationed there were withdrawn, and (2) stock from Operation Desert Storm, of which only 10 percent was used during the war. The continental U.S. stockpile installations received twice as much stock—1 million tons—as they had shipped out. This ammunition arrived in small, broken-up lots, which required more storage space and inventory work. The stockpile has also been affected by (1) increases in retail stock stored within its facilities, which increased the cost of storage installation operations and reduced storage space, and (2) lower usage rates, as customer demand declined. In 1993, the Joint Ordnance Commanders Group, concerned that the wholesale conventional ammunition stockpile’s readiness and quality had been degraded, initiated a comprehensive study to assess the wholesale ammunition stockpile. The resulting report, issued in October 1993, identified several conditions adversely affecting the readiness and reliability of the ammunition stored in the stockpile. The report identified problems in all the major functions that related to stockpile operations and management.
Some degraded functional areas, such as inventory and surveillance, directly affect the readiness and reliability of the stockpile; others, such as receipts, issues, and storage of ammunition, affect the efficiency and effectiveness of operations. The report predicted that conditions would worsen over the next 4 years because of continued funding problems and identified several initiatives to effect improvements to the readiness and operations of the stockpile. The report’s findings led to a charter for an ammunition functional area analysis and the development of the Integrated Ammunition Stockpile Management Plan to address funding and storage management concerns. Concerned about the condition and readiness of the wholesale ammunition stockpile, given changes in world and stockpile conditions, the Chairmen, Subcommittee on Military Readiness and Subcommittee on Military Procurement, House Committee on National Security, asked us to determine (1) the availability of ammunition to meet wartime and peacetime requirements and (2) what problems the Army single manager has in managing the military services’ wholesale ammunition stockpile. To determine whether DOD has sufficient ammunition to meet demands for training and war reserves, we compared serviceable ammunition, from both wholesale and retail inventories, on hand for each service as of September 30, 1994, with the amount needed to meet requirements for wartime and peacetime operations. In making this determination, we used the automated data systems that each service maintains for its ammunition items. Specifically, the requirements were obtained from the Army Worldwide Ammunition Reporting System (WARS), Navy Non-Nuclear Ordnance Requirements System, Air Force Theater Allocation Buy/Budget System, and the Marine Corps Ammunition Requirements Management System. We did not independently verify the military’s method of determining ammunition requirements. 
To determine whether the services have excess amounts of ammunition, we analyzed computerized files of the services’ inventories as of September 30, 1994 (the end of the fiscal year). First, we compared the total on-hand serviceable inventory, item by item, to that needed to satisfy wartime requirements, testing and training requirements for 7 years (6 years of testing for the Army), and other requirements. We used testing and training requirements for 7 years (1) to be conservative in calculating on-hand quantities exceeding requirements, (2) because DOD’s retention policy authorizes this level of supply to meet Defense Planning Guidance, and (3) because 7 years coincides with the future years’ planning of the services. As requested by the Army, we used operational project, wholesale, and basic load requirements in addition to 6 years of testing requirements and 7 years of training. Second, we determined the amount of unserviceable ammunition by type of ammunition for which there was excess serviceable inventory. Third, we compared the single manager’s inventory database showing ammunition stored for the services with the services’ databases that we had used in our comparison. We then determined the amount of additional ammunition excess to requirements that was not on the services’ records. Finally, we identified the amount of ammunition DOD has designated for disposal. To determine the services’ rationale for excesses, we selected and discussed with item managers 145 types of ammunition (126 randomly selected and 19 judgmentally selected because they had large quantities of excess items) for which on-hand quantities exceeded service-determined requirements. To determine whether the services have shortages of ammunition, we compared the same universe to the amount needed to meet wartime requirements plus that needed for 1 year of training and testing. We used only 1 year of training and testing requirements to be conservative in calculating ammunition shortages. 
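The item-by-item comparison described above can be illustrated with a short sketch. The item names, quantities, and function below are hypothetical, chosen only to show the asymmetric thresholds: excess is measured against wartime requirements plus 7 years of training and testing, while shortages are measured against wartime requirements plus only 1 year, the conservative choice in each direction.

```python
# Illustrative sketch (not GAO's actual methodology code) of the
# item-by-item excess/shortage comparison described above.
# Item names and quantities are hypothetical.

def assess_item(on_hand, wartime, annual_training_testing):
    """Return (excess, shortage) quantities for one ammunition type.

    Excess threshold: wartime requirement + 7 years of training/testing.
    Shortage threshold: wartime requirement + 1 year of training/testing.
    """
    excess_threshold = wartime + 7 * annual_training_testing
    shortage_threshold = wartime + 1 * annual_training_testing
    excess = max(0, on_hand - excess_threshold)
    shortage = max(0, shortage_threshold - on_hand)
    return excess, shortage

inventory = [
    # (item, on-hand serviceable, wartime requirement, annual training/testing)
    ("155mm HE projectile", 1_200_000, 800_000, 40_000),
    ("5.56mm ball cartridge", 900_000, 950_000, 10_000),
]

for item, on_hand, wartime, training in inventory:
    excess, shortage = assess_item(on_hand, wartime, training)
    print(f"{item}: excess={excess}, shortage={shortage}")

# Expected output:
# 155mm HE projectile: excess=120000, shortage=0
# 5.56mm ball cartridge: excess=0, shortage=60000
```

Note that an item can register neither excess nor shortage when its on-hand quantity falls between the two thresholds; the report's use of 7 years for the excess test follows DOD's retention policy and the services' future-years planning horizon.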
To determine the services’ rationale for types of ammunition with shortages, we selected and discussed with item managers 154 types of ammunition (152 randomly selected and 2 judgmentally selected because they represented large dollar values) for which on-hand quantities were less than service-determined requirements. Additionally, we selected and discussed with service officials the 42 highest unit cost items (representing $32 billion of the $60 billion shortage) to determine the rationale for shortages. We used the Standard Depot System database for our analyses of the wholesale stockpile. This database includes information from 11 of the 12 storage installations (Pine Bluff Arsenal is not included in the system). We used data as of March 1995 for old ammunition in the wholesale stockpile, serviceability of ammunition in the stockpile as classified by condition codes, and backlogs of periodic inspections and data as of September 1995 on the net storage space of installations. We also used data from an Army disposal study dated September 1995 on items designated for disposal and estimates of disposals anticipated in the future. In relation to the management of the stockpile, we interviewed ammunition management officials and reviewed policies, procedures, and documents related to the management of conventional ammunition at the following sites:

Departments of the Army, the Navy, and the Air Force, Washington, D.C.
U.S. Army Materiel Command, Alexandria, Virginia
U.S. Industrial Operations Command, Rock Island, Illinois
U.S. Army Defense Ammunition Center and School, Savanna, Illinois

Inventory commands:
Air Force Air Logistics Center, Ogden, Utah
Naval Ordnance Center, Indian Head, Maryland
Marine Corps Systems Command, Clarendon, Virginia

Hawthorne Army Depot, Hawthorne, Nevada
Letterkenny Army Depot, Chambersburg, Pennsylvania
Red River Army Depot, Texarkana, Texas
Sierra Army Depot, Herlong, California
McAlester Army Ammunition Plant, McAlester, Oklahoma
Crane Army Ammunition Activity, Crane, Indiana

We did this review from April 1994 to April 1996 in accordance with generally accepted government auditing standards. DOD expressed concern about the requirements database we used, particularly for the Army. We used the WARS database, which was the most complete automated database we found for the Army. At our exit conference, Army officials suggested that we use the Army’s RDAISA database for greater accuracy. However, we determined that this database does not contain requirements for all Army ammunition items; it only contains requirements for ammunition items for which procurement actions are in process or planned. We remain unconvinced that the Army has a more complete automated database that we could have used. Also, DOD notes in its comments on this report that it started using a capabilities-based munitions requirements process beginning with the fiscal year 1996 budget. Our requirements data were the latest available as of September 1994, which was after the beginning of the development of the fiscal year 1996 budget and included capabilities-based principles. The services have to do a better job of managing their ammunition needs. As of September 30, 1994, the total stockpile of usable and unusable ammunition was worth about $80 billion. We estimate that about $31 billion of this total ammunition stockpile was excess. This excess amount includes about $22 billion worth of ammunition that was still usable.
This situation has occurred primarily as a result of the collapse of the Soviet Union in the early 1990s and the change in the primary threat to the United States. As a consequence, the services’ ammunition requirements were drastically reduced, and more of the ammunition stockpile became excess. The Army’s war reserve requirements, for example, were reduced by 74 percent. Of the various types of ammunition in the stockpile, we found that almost half have amounts that exceed the services’ needs in varying quantities. For some types of ammunition, the services have over 50 times their stated needs. While there are shortages of some specific ammunition types, overall, the services generally have enough ammunition to meet their wartime and peacetime requirements. DOD management practices perpetuate the buildup of excess and aging ammunition, even though the ammunition stockpile is supposed to comprise only ammunition and explosives essential for peacetime and wartime needs. In many instances, the services keep it available just in case they or other organizations, such as state agencies or foreign allies, have a need for it. However, DOD often does not determine what would be a reasonable amount to keep to meet these needs. For all these reasons, storage facilities are reaching capacity, and the excess ammunition is stressing the ability of installation personnel to manage required ammunition. All ammunition not identified for disposal, including the $31 billion excess mentioned above and the $2.9 billion in excess that appears on the single manager’s inventory records but not on the services’ inventory records, receives the same amount of single manager attention (see ch. 3 for a discussion of stockpile management). Moreover, in fiscal years 1993 and 1994, the services spent about $125 million for ammunition that exceeded fiscal year 1995 stated requirements.
No service purchased ammunition items in fiscal year 1995 for which it had quantities on hand in excess of stated requirements at the end of fiscal year 1994. In addition to its ammunition in excess of stated requirements, DOD has shortages of some types of ammunition. However, the services generally believe that these shortages are manageable because they have substitute items and planned procurements to make up for shortages. We believe that the shortages of some items could be satisfied by better sharing of amounts in excess of stated requirements among the services. While the Army has shared some excess ammunition among the other services, the single manager is unaware of all ammunition in excess of stated requirements because the services have not identified which of their ammunition is required and which is not required. Without this information, the single manager cannot adequately identify and coordinate redistribution of excess ammunition. During our review, we identified $1.2 billion of items in excess of stated requirements that could be shared to meet service shortages of required ammunition, reduce potential future procurements, and avoid maintenance. Because the threat the United States faces has changed from a global war to a much smaller one involving two major regional conflicts, all the services’ war requirements have been reduced. Army war reserve requirements in total tonnage declined 74 percent—from 2.5 million tons in fiscal year 1992 to 650,000 tons in fiscal year 1994 (see fig. 2.1). For example, the requirement for multiple launch rocket system pods decreased by 82 percent. Likewise, the requirement for the 155-millimeter dual purpose improved conventional munitions decreased by 61 percent. The reduced threat has led to reduced requirements, and reduced requirements have contributed significantly to large quantities of various ammunition types becoming excess to the services’ stated needs. 
All the services have serviceable ammunition in the stockpile that exceeds their needs as defined in the Defense Planning Guidance; that is, to support U.S. forces during two nearly simultaneous major regional conflicts, for training and testing during peacetime, and for other needs. In total, about 50 percent of the ammunition types in the services’ inventories include quantities exceeding requirements. The 50 percent includes ammunition types in their inventories for which the services have no stated requirements. Although ammunition managers agreed that some items were excess, they believed that ammunition should be kept for other uses, such as training and foreign military sales. However, they have set no limits on how much should be kept for other purposes. The retention of excess ammunition adds unnecessarily to workload and costs and requires the use of increasingly valuable storage space. The services own and store in the wholesale and retail stockpiles excess ammunition valued at about $22 billion, or 40 percent of the value of the total serviceable stockpile (see table 2.1). To determine the adequacy of the stockpile, we compared the amount of serviceable ammunition on hand in both wholesale and retail level storage facilities as of September 30, 1994, to the services’ stated requirements. At that time, the services owned and stored 2,781 different types of serviceable conventional ammunition worth $58 billion. Before considering stocks excess, we accounted for the quantity of ammunition needed for two major regional conflicts and for 7 years of training and testing (6 years of testing for the Army). For all services, we allowed 1-1/2 times the stated requirements before determining excess quantities. Of the excess ammunition owned by the services, 30 percent exceeded requirements by 1-1/2 to more than 30 times. For another 18 percent, the services did not identify a requirement. The total value of these items is $21.6 billion. (See table 2.2.) 
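The allowance applied above works out to a simple threshold test: an item's stocks count as excess only beyond 1-1/2 times its stated requirement. A minimal sketch, with hypothetical quantities:

```python
# Hypothetical sketch of the excess screen: only quantities above
# 1-1/2 times the stated requirement were counted as excess.
# Quantities below are illustrative, not actual service data.
def excess_above_allowance(on_hand, stated_requirement, allowance=1.5):
    threshold = allowance * stated_requirement
    return max(0, on_hand - threshold)

# 400 rounds on hand against a stated requirement of 100: the first 150
# (1-1/2 times the requirement) are retained; the remaining 250 are excess.
print(excess_above_allowance(on_hand=400, stated_requirement=100))  # 250.0
# Stocks within the allowance are not counted as excess.
print(excess_above_allowance(on_hand=120, stated_requirement=100))  # 0
```

Items with no stated requirement at all, such as those in the 18 percent category above, would have a threshold of zero, so their entire on-hand quantity counts as excess.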
One example of excess ammunition types is the .30-caliber carbine ball cartridge. The Air Force has enough of this type of ammunition to meet its stated requirement 58 times, and the Army has 517 times the amount needed. Similarly, the Navy has 276 times the amount of the .50-caliber ball cartridges needed, and the Marine Corps has 92 times the number of offensive hand grenades needed to meet its requirements. Also, as table 2.2 shows, 500 types of ammunition worth $3 billion have no stated requirements. For example, the Air Force has no requirement in its database for its 4.8 million 20-millimeter cartridges worth over $21 million. According to Air Force officials, this ammunition is needed for the M39 gun and the F-5 aircraft and can be used in the M61 gun, when separated. In addition, the Marine Corps does not show a requirement in its database for its 4,307 105-millimeter cartridges valued at over $2.5 million and 2.9 million .50-caliber cartridges valued at about $2.7 million. Marine Corps officials stated that they do not need these types of ammunition. The other services similarly have ammunition on hand for which there is no stated requirement. Although Air Force officials said that they have specific uses for the ammunition, they nevertheless do not show that need by including the ammunition in their requirements database. We calculated the total amount of excess ammunition—serviceable and unserviceable—at about $31 billion. In addition to the $22 billion of serviceable ammunition in excess of stated needs, we calculated that as of September 30, 1994, DOD had about $9.4 billion in unserviceable assets that exceeded stated needs (see table 2.3), for a total excess of $31 billion, or about 39 percent of the $80 billion ammunition stockpile.
In addition, there was over $2.9 billion of excess assets on the single manager’s inventory records that did not appear on the services’ inventory records, and over $2 billion of ammunition that was identified for disposal. Without some identification of ammunition not needed to meet wartime and peacetime requirements or some other prioritization, all ammunition other than that identified for disposal receives the same level of attention by the single manager. As discussed in chapter 3, the large amount of ammunition being stored by the single manager is stressing the ability of installation personnel to manage required ammunition. We queried ammunition item managers about the reasons that DOD had excess ammunition for 145 selected (126 randomly and 19 judgmentally) types of ammunition. These managers agreed that they had excess items for 59 (41 percent) of the 145 types we selected. They disagreed that the rest were excess for varying reasons. All cited training as a reason for keeping excess ammunition. However, we had already computed training and testing needs in our analysis, and the ammunition they cited as needed for training was excess to stated requirements. Other reasons cited for keeping the ammunition were for foreign military sales, research and development, trade purposes, military competitions, and ceremonies, such as military funerals. However, the services had not determined what would be a reasonable amount to meet these needs; rather, they seemed to keep all of any item they thought might be needed. Historically, the age of ammunition in the stockpile has been a concern and the object of study since before fiscal year 1979. In fiscal year 1979, the single manager initiated a purification program to eliminate old, obsolete, or otherwise unneeded ammunition items. This particular effort built on the results of past studies. 
In September 1985, the single manager issued an ammunition stockpile rotation study that assessed the effectiveness of stockpile rotation policies and regulations. This study analyzed ammunition stocks in the United States and Europe and found that 30 percent of the Army’s stocks in the United States and 26 percent of the overseas stocks were 20 years old or older. Little change, if any, has occurred since 1985. Despite an awareness of age and the need to rotate ammunition stocks, we found that as of March 1995, a considerable portion of the wholesale ammunition stockpile was over 25 years old. The age of over 56 percent of the lots in the wholesale ammunition stockpile is unknown because the date of manufacture is either not recorded in the database or recorded incorrectly. Of the remaining 44 percent of the lots, whose dates of manufacture are known, 14 percent was over 30 years old, 34 percent was over 20 years old, and more than 55 percent was over 10 years old. Table 2.4 shows the ages of the ammunition lots in the wholesale stockpile. We observed ammunition dating to the 1940s (see fig. 2.2). Service officials generally said that unless ammunition has a shelf life, its age does not alter its serviceability. They noted that if ammunition is stored properly, it is as good as the day it was manufactured. While old ammunition may still be serviceable, it is less likely to be used if a new item is available. The 1985 rotation study noted that soldiers in the field demanded the newest and best lots of ammunition available; thus, older lots remained in storage. More recently, during Operation Desert Storm, battlefield commanders opted to use newer, more modern items. Ammunition that was shipped to Southwest Asia for Operation Desert Storm (partly from Europe) but was not used there now occupies over 2 million square feet of space in the U.S. depot system, awaiting potential use and continuing to age. Also, according to single manager officials, commanders insist on training the way they are expected to fight a war.
Consequently, they also do not want to train with the “old stuff.” Rather, they want to use the more modern and the most current ammunition, if available. The Joint Ordnance Commanders Group’s 1993 study and resulting report on the wholesale stockpile found that the excess ammunition in the stockpile contributes to the stockpile’s annual operational costs. The report suggested that the services reduce the amount of excess ammunition stored. The report also suggested that training, foreign military sales, grant aid programs, and destruction are among the ways of eliminating excess. However, the services have made little progress in eliminating excess and aging ammunition because they are reluctant to classify ammunition as excess; have no incentive to declare ammunition excess, since the Army pays for its storage; are storing ammunition for weapon systems no longer in their inventories; and have purchased ammunition that, according to their records, was not needed to meet required levels. In addition, the services keep ammunition over and above requirements, or in “long supply,” to meet various retention needs. Moreover, single manager personnel do not always issue the older stock, leaving it to continue to age. According to the 1993 report on the wholesale stockpile, the services have known for some time that they have excess quantities of ammunition items. We were told that the services do not like to declare ammunition excess because they then lose ownership of stocks. Also, if items in long supply are transferred to another service, the transferring service is reimbursed for the items. However, if an item is identified as excess and then given to another service, the issuing service is not paid for the item. Also, theater commanders may exercise their judgment to retain ammunition items even if requirements no longer exist. 
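The age profile reported earlier (over 56 percent of lots with unknown dates; of the rest, cumulative buckets at 10, 20, and 30 years) implies a tabulation along these lines. The lot dates below are hypothetical, and the bucketing is a sketch of the approach rather than the single manager's actual database query:

```python
# Hypothetical sketch of the age tabulation: lots with missing manufacture
# dates are reported separately as "unknown"; known ages fall into
# cumulative buckets. Dates and counts are illustrative only.
from datetime import date

def age_profile(lots, as_of=date(1995, 3, 1)):
    """lots: manufacture dates (datetime.date) or None when unrecorded.
    Returns counts of unknown-age lots and of lots over 10, 20, and 30
    years old as of the given date."""
    known = [d for d in lots if d is not None]

    def over(years):
        return sum(1 for d in known if (as_of - d).days > years * 365.25)

    return {
        "unknown": len(lots) - len(known),
        "over_10": over(10),
        "over_20": over(20),
        "over_30": over(30),
    }

sample = [date(1944, 6, 1), date(1968, 1, 15), date(1988, 7, 4), None]
print(age_profile(sample))
```

Because the buckets are cumulative, a 1944 lot counts in all three categories, mirroring the way the percentages above nest.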
Air Force inventory control point officials agreed in October 1994 that they could no longer provide effective and efficient management of vast quantities of older, obsolete weapon systems. They listed 138 potential items for disposal because they had no operational requirement, were no longer reliable, were environmentally unacceptable, or their shelf life had expired. Although headquarters officials approved some of these items for disposal, they directed that others be retained until suitable substitutes became available or more data were provided about the items. Currently, the services have no incentive to reduce excess ammunition in the wholesale stockpile because the single manager is responsible for its care; that is, storage, inventories, surveillance, and disposal of the ammunition. The 1993 report on the wholesale stockpile notes that an incentive for inducing the services to reduce excess ammunition would be to charge a storage fee or charge each service for the cost to maintain its stock in the wholesale system. However, single manager officials we talked to did not support charging the services a storage fee. In their opinion, the real issue is the need for the services to identify nonrequired items and turn them over to the single manager for disposal or identify them for possible redistribution where they exceed stated requirements. However, the services have only partially provided this information. Ammunition is being stored and managed for weapon systems that either have been purged or are no longer in the active inventory. Although we did not determine the total amount of ammunition stored for weapon systems no longer in the inventory, we found specific examples of such ammunition. The M60A2 tank and the M42 self-propelled gun are obsolete weapon systems to the Army. 
However, the Army continues to store 147,300 152-millimeter cartridges valued at $43.6 million for the M60A2 tank and 269,000 40-millimeter cartridges valued at $2.5 million for the M42 self-propelled gun. Although Army officials acknowledged that the 152-millimeter cartridges were at one time used for the M60A2 tank, in commenting on this report, DOD said the Army is maintaining these 152-millimeter cartridges for the M551 Sheridan tank. However, DOD noted that there will be a reevaluation of the need to retain these cartridges. Also, the Army is storing 97 million rounds of various small arms ammunition valued at $146 million for weapons no longer in the Army’s inventory. According to Army officials, this ammunition cannot be used for other weapons currently in the inventory. The Air Force continues to store motors for the Nike Hercules rocket. According to the Air Force’s database, there is no requirement for these rocket motors, and the Air Force owns only 39 of them. However, the Standard Depot System database, which accounts for wholesale ammunition assets, shows that the Air Force owns 469 of the Nike rocket motors—430 more than the Air Force’s system shows. The Navy continues to store in the wholesale inventory about 4,000 16-inch projectiles for its battleships, which are no longer in the active fleet. These projectiles are in the single manager’s wholesale inventory database as belonging to the Navy. However, they are not in the inventory database used by the Navy. Also, the Navy stores 3-inch, .50-caliber ammunition and MK25 mines in the wholesale system. At one depot we visited, we were told it had little or no issues of the 3-inch, .50-caliber ammunition in 15 years, and according to an official at another installation, there had been no activity at all for the MK25 mines in over 10 years. 
Like the 16-inch projectiles, over 5,000 MK25 mines in the single manager’s wholesale inventory listed as belonging to the Navy are not in the Navy’s inventory database. The Marine Corps continues to store about 3 million .50-caliber cartridges for the M85 machine gun, even though the Marine Corps has removed the M85 gun from its inventory and no other weapon system uses this type of .50-caliber ammunition. Likewise, the Marine Corps continues to store over 4,000 105-millimeter projectiles that were used for the M60A1 tank. The M60A1 tank, however, is also no longer in the Marine Corps’ inventory. In commenting on this report, DOD noted that the phasing out of the M60A1 tanks from the Marine Corps’ inventory began in 1991 and was completed in 1994. DOD stated that the purging of ammunition for the M85 and M68 weapons began in October 1991 and is scheduled for completion in fiscal year 1997. We compared the services’ ammunition purchases during fiscal years 1993 through 1995 to ammunition items in excess quantities as of September 30, 1994. For fiscal years 1993 and 1994, we found that the Army and the Navy bought 17 types of ammunition at a cost of about $124.4 million and $0.3 million, respectively, that according to their records they did not need to meet stated requirements. We did not find that similar purchases were made for fiscal year 1995. As can be seen in table 2.5, in fiscal year 1993, the Army purchased six types of ammunition at a cost of over $114 million. According to Army records, all of these items were excess to their fiscal year 1995 stated requirements, and after deducting the quantities purchased in fiscal years 1993 and 1994, inventory quantities remaining still exceeded service-defined requirements. For example, the Army bought 118,893 155-mm projectiles (D864) at a cost of $78.9 million. After deducting this quantity from the excess quantity as of September 30, 1994, 86,307 of these projectiles remained in excess.
An Army official told us that these purchases may have been made because (1) the Congress directed the purchase, (2) it was more economical to purchase a large quantity rather than a small quantity to meet the requirement, or (3) the requirements decreased after the item was placed in the budget request cycle. Another Army official commented that the purchases could have been made before the requirements changed. Smaller but similar purchases were made by the Navy (see table 2.6). In fiscal years 1993 and 1994, the Navy bought six types of ammunition at a cost of $320,000. According to Navy records, all of these items were excess to their fiscal year 1995 stated requirements, and after deducting the quantities purchased in fiscal years 1993 and 1994, inventory quantities remaining still exceeded service-defined requirements. Assuming ammunition requirements are accurate and in accordance with Defense Planning Guidance, we believe the readiness posture of the Army and the Navy could have been enhanced if fiscal year 1993 and 1994 procurements had been focused on items with shortages rather than on items that met or exceeded requirements. It is the single manager’s policy for installations to first issue ammunition from small lots and use older stocks for training. However, this policy is not always followed. All the installations we visited noted that, as a practical matter, this policy is often too difficult to follow. Not all items in a storage facility are easily accessible, and if the facility is at or near capacity, single manager personnel have little choice but to issue the more accessible stock to maximize efficiency and to ensure that the customer’s required delivery date is met. We agree that additional work would be required to consistently issue first-in stock and that this could increase labor costs and delay deliveries.
We recognize, however, that the longer first-in stock remains in storage facilities, the older it becomes and the more likely it is to become obsolete and destined for destruction. As we noted previously, over 55 percent of ammunition in the wholesale system for which the age of the ammunition is recorded is over 10 years old. As of September 30, 1994, the services had shortages of items in 752 ammunition types valued at about $60 billion. According to the Deputy Chief of Staff for Ammunition, U.S. Army Materiel Command, however, “sufficient munitions are currently in the stockpile to support any projected military operation.” Inventory control point officials from all the services agree that they have no major problems with shortages because they consider inventory quantities sufficient, they have substitutable items, and/or they have plans to purchase the items. During our review, Marine Corps officials stated that the Marine Corps did not have enough ammunition to support requirements. However, in commenting on this report, DOD said a Marine Corps ammunition study conducted after our review was completed validated a lower level of war reserve requirements than was previously identified. Therefore, DOD commented that all the services have sufficient ammunition to support their requirements, although the mix of ammunition is not optimum. Thirty percent of the items with shortages were on hand in quantities ranging from over 50 percent of the requirement to almost the entire requirement; 41 percent were on hand in quantities ranging from 1 percent to 50 percent of the requirement; and 29 percent had none on hand to meet the requirement. Some of the items are expensive, which accounts for the large amount of money ($60 billion) needed to eliminate these shortages. Also, we used service-defined requirements in our analysis, and these requirements did not always take into account the availability of substitute items and the planned phaseout of ammunition. 
In six classified DOD/Inspector General (IG) reports issued from June 1994 through June 1995 on quantitative requirements for antiarmor munitions, DOD/IG concluded that the services had overstated requirements by $15.5 billion. Forty-two of the items identified as in a shortage condition in our analysis accounted for over 50 percent ($32 billion) of the total dollar value of the shortages. Fifteen items have a unit cost that exceeds $1 million, which accounts for over $18 billion in shortages. Stated requirements for many of these items may not reflect the true need for the item. For example, according to the Navy’s database, the Navy has a shortage of 1,587 AIM-54C Phoenix missiles, but the Navy does not consider the missile to be in a shortage status. In fact, after considering several other substitute items, the Navy’s inventory has about 191 percent of the requirement for the Phoenix. The replacement cost of each missile would be over $2 million; the shortage amount accounts for over $3.2 billion of the total shortage. Similarly, the Air Force is short about 18,000 AGM-88B High-Speed Anti-Radiation Missiles (HARM), which account for over $6 billion of the shortage amount. However, according to Air Force officials, HARMs are no longer being procured, and the Air Force’s database shows a smaller shortage amount. Likewise, the Army is short 616 Army Tactical Missile System (ATACMS) missiles, which account for over $390 million, but according to Army officials, the ATACMS is not recognized as being in a shortage position. Various versions of the Patriot missile are also shown in the database as being in short supply. The value of these missiles is about $760 million. According to an Army official, however, no Patriot procurements had been requested or made since about 1993 or 1994. A more sophisticated version of the Patriot missile will be the next missile purchased for the inventory.
The official commented that the requirement in the database may be the number that was needed at an earlier date. Service officials generally disagreed with the service-defined requirements, which, when compared to ammunition on hand, indicated that 42 high-dollar-value items were actually in a shortage position. To the contrary, we were told that inventories are generally sufficient to meet requirements, particularly when quantities of substitute items are considered. With budget constraints, the services do not have the money to purchase some items in a shortage position. And with the exception of the Marine Corps, service officials generally believed that they had sufficient quantities of substitute ammunition and that future procurements would be adequate to meet wartime and peacetime requirements under the Defense Planning Guidance. Army officials noted, however, that in the future they anticipate problems in filling training requirements. We randomly selected 152 ammunition items showing shortages. Managers said that 67 of the items had shortages, and they planned future purchases for some of these items. However, despite the records, which showed that these items lacked sufficient quantities to meet established requirements, the item managers contended that most of the items (85) did not have shortages because of available substitutes and planned buys. Our sample showed a serious shortage of top-priority items for the Marine Corps but no major problem for the other services. The Marine Corps asserted that it had an insufficient amount of some ammunition to support two nearly simultaneous major regional conflicts. According to the Marine Corps’ program manager for ammunition, the Marine Corps “is prepared and capable of executing one MRC [major regional conflict] and doing significantly more than that . . .
does not have the ammunition to support .” The program manager noted that the Marine Corps is short of ammunition valued at about $1.5 billion, including $500 million in ammunition for current training needs. We were told that shortages are mainly long-range artillery and war reserve items such as .50-caliber SLAP 4 and 1-linked cartridges, 9-millimeter ball cartridges, and 7.62-millimeter ball linked cartridges. DOD’s comments on this report noted that a Marine Corps ammunition study conducted after this review was completed has validated a lower level of war reserve requirements than was previously identified. Therefore, DOD said all services, including the Marine Corps, have sufficient ammunition to support their requirements. Although the Army has shared some excess ammunition across the services, we found that (1) purchases of about $185 million in fiscal years 1993 through 1995 could have been avoided if ammunition in excess of stated requirements had been shared among the services, (2) $1.2 billion in ammunition in excess of stated requirements could be shared to alleviate shortages, and (3) $19 million in costs could be avoided by providing ammunition in excess of stated requirements in good condition to services that planned maintenance for the same ammunition. The Senate Committee on Appropriations has also recognized the need for the services to be more aggressive in sharing excess ammunition. For fiscal year 1995, on the basis of our identification of potential ammunition budget reductions, it directed the Army to transfer at least 17,000 excess M203A1 155-millimeter red bag charges, at no cost, to the Marine Corps and denied the Marine Corps $12 million for new charges. Ammunition officials stated that one reason that more ammunition in excess of stated requirements has not been shared is that the single manager does not know the other services’ requirements or the total holdings of ammunition.
Even if the single manager did have this knowledge, it is not authorized to redistribute ammunition. It therefore cannot initiate the distribution of ammunition in excess of stated requirements or purge the wholesale system of unnecessary items that there is no reason to retain. Cross-sharing of existing ammunition that exceeds one or more services’ stated requirements can preclude unnecessary purchases and redirect resources to fill or partially fill shortages. During fiscal years 1993 through 1995, the military services purchased $184.5 million of ammunition items that were not needed to meet stated requirements (see table 2.7). The ammunition purchased, according to service-defined requirements and inventory records, was already available or partially available in DOD inventories in quantities that exceeded fiscal year 1995 service requirements. For example, in fiscal year 1995, the most current year after the September 30, 1994, excess analysis, the military services bought 18 types of ammunition at a total cost of $102.2 million. However, enough of the same types of ammunition was already in the inventory system to completely or partially satisfy 58 percent, or $59.4 million, of the total fiscal year 1995 purchase quantity. Similar conditions existed in fiscal years 1993 and 1994. Examples of excess ammunition that could have filled services’ shortages include the Marine Corps’ 22 million 5.56-millimeter tracer rounds. As of September 30, 1994, the Marine Corps had a quantity of this ammunition sufficient to cover the quantities bought by the Air Force, the Army, and the Navy and still had about 12 million rounds more than needed. Redistribution of the Marine Corps’ assets in these instances could have avoided over $5 million spent by the other services for the same ammunition. In another example, the Army had over 1.9 million 25-millimeter APDS-T cartridges, which exceeded its stated requirements. 
The Navy bought this same item in fiscal years 1993 and 1995 at a cost of over $5 million, and the Marine Corps bought the item in fiscal years 1994 and 1995 at a cost of over $6 million. Redistribution of these assets could have saved over $11 million or redirected it to ammunition with shortages or to other purposes, and the Army would still have had 1.4 million rounds more than its stated requirement. We believe that centralized oversight and management of DOD ammunition requirements and assets would enable better use of ammunition through redistribution and free up funds to purchase items determined to have shortages. We identified $1.2 billion of ammunition in excess of stated requirements that could be shared among the services to meet service shortages. Some cross-sharing of ammunition has been done. For example, in fiscal year 1993, the Army transferred over 1.8 million excess .50-caliber blank linked cartridges and 61,500 60-millimeter cartridges to the Navy and the Marine Corps, respectively. In fiscal year 1994, the Army transferred additional excess ammunition: about 3,800 .45-caliber blank cartridges and about 68,000 .50-caliber blank cartridges to the Navy, about 484,000 5.56-millimeter dummy cartridges and about 118,000 7.62-millimeter dummy cartridges to the Marine Corps, and 347,000 5.56-millimeter dummy cartridges and 16.5 million 5.56-millimeter cartridges to the Air Force. While this is a step in the right direction, the services must make a concerted effort to identify ammunition in excess of requirements that can be shared to reduce shortages. DOD directives currently require each service to report to the single manager its total assets against requirements to help identify excesses and corresponding needs among the services. However, the single manager has not regularly received these data from all the services. 
Despite the Army’s transfers of excess ammunition, our analysis of ammunition requirements and assets showed 139 instances in which excess on-hand quantities of $1.2 billion could be shared among the services to meet shortages. For example, 30 ammunition items with shortages in the Navy could be partially or totally filled by excess quantities in the Army, the Air Force, and the Marine Corps; shortfalls of 8 items in the Army could be relieved by excess items from the Marine Corps; and 15 Air Force items with shortages could be partially or wholly filled by excess items from the Army. As shown in table 2.8, for some ammunition types, two of the four services have excess quantities that could be shared to fill a deficit in another service, and even when shortages are relieved by excess ammunition, excess quantities still remain. In addition to filling some of the services’ shortages, the cross-sharing of excess ammunition during fiscal years 1996 through 2000 could result in the avoidance of more than $19 million in planned maintenance costs (see table 2.9). For example, about $11.5 million in planned maintenance could be avoided by sharing a portion of the 839,694 excess 155-millimeter projectiles with services that plan maintenance on 370,000 projectiles. In addition, the $3.4 million cost to repair 40-millimeter cartridges could be avoided because, in this case, the Air Force has more than 1 million excess cartridges that could partially fill the Army’s requirement to repair 1.7 million rounds of this item. In 1979, we recommended that the Secretary of Defense (1) assign responsibility to the single manager for operating a single national inventory control point to provide DOD-wide integrated inventory management, (2) designate the single manager as owner of the ammunition in the wholesale inventory, and (3) require the single manager to apply the principles of vertical stock management for inventory. 
DOD disagreed with these recommendations, stating that the single manager organization’s objective would be to permit the cross-sharing of stocks between services and to avoid procurements by one service for needs that could be satisfied with another service’s excess ammunition. DOD stated that the single manager would be provided information on the location and condition of retail stocks and on service stratification of stocks. This information would allow the single manager to perform, with service approval, cross-sharing to gain efficiencies in procurement, inventory, and transportation management. However, we found that the single manager does not have information on the location and condition of retail stocks or on service stratification of stocks. DOD also disagreed with our 1979 recommendation that the single manager be the owner of the ammunition in the wholesale inventory. DOD said that the services have an obligation to control the assets they acquire through congressional appropriations and that the custodial responsibility of the single manager does not conflict with cross-sharing economies of common items or inhibit effective depot-level management. In our 1979 report, we noted that several problems with the existing organization of the single manager precluded achieving further centralized ammunition management. The single manager organization lacks visibility over the services’ retail stocks, has limited communication channels, and must compete for resources with other Army programs. It is principally staffed by Army personnel and is viewed by the other services as parochial. In addition, the single manager is unable to fully implement the concept within the single manager’s own service—the Army. As we noted in our 1979 report, the services are reluctant to give the single manager the degree of control the manager needs to provide efficient and economic inventory management in peacetime and the intensive inventory management needed during war. 
Ammunition at U.S. storage and production facilities is designated wholesale and the remainder retail. The services retain total responsibility for the retail inventory. In our 1979 report, we noted that single manager officials claimed that, if they had retail asset visibility for all services, they could achieve more savings through reduced transportation costs and by matching long supply and excess ammunition items against projected procurements. The wholesale and retail designations, coupled with the services’ responsibilities, preclude the single manager from managing a substantial segment of the inventory. DOD partially concurred with our findings. DOD agreed that there were excesses but took exception to the criteria that we used in determining excess inventory. It said we implied that stocks above established requirements were excess and should therefore be disposed of. Our report states that DOD has about $22 billion of serviceable ammunition that exceeds established needs and about $31 billion in excess serviceable and unserviceable ammunition. We agree that not all the ammunition in excess of stated requirements should be disposed of and do not state that it should be. However, we believe that the assets in excess of stated requirements should be made available for cross-sharing to avoid one service purchasing assets that another service has in excess of its wartime and peacetime requirements. In addition, we believe there are many items being stored that will never be used and should be identified for disposal. Furthermore, items in excess of stated needs that are worth keeping should be identified as not required but retained for potential future use. This identification could greatly help the single manager better apply limited resources to storing and maintaining ammunition. DOD agreed that cross-sharing of ammunition at the wholesale level would allow for better use of ammunition through redistribution. 
DOD stated that the planned Joint Defense Total Asset Visibility Program would provide all the services the capability to review all assets and would further expand cross-sharing of assets at the wholesale level. DOD did not agree with our analysis of ammunition requirements and assets that showed excess on-hand quantities of $1.2 billion that could be shared among the services to meet shortages. DOD provided information for the Army that showed stockage retention levels rather than excesses for most of these items. DOD makes available for cross-sharing ammunition it considers excess; however, it does not consider stocks in its retention categories as available for cross-sharing. We believe all assets in excess of requirements, including retention stocks (such as economic retention levels), should be considered for cross-sharing, which may avoid a future procurement. Army data from its September 30, 1994, asset stratification of conventional ammunition, which excludes missiles, show total assets of $18.7 billion and an authorized acquisition objective of $13.3 billion. The data show various retention levels totaling $4.4 billion, or 23.7 percent, and a potential excess of about $1 billion, or 5 percent. Using the stratification data for cross-sharing would make only the $1 billion of potential excess available, while the $4.4 billion in various retention levels would not be identified for cross-sharing. We believe the economic retention amounts of over $1 billion should be made available for cross-sharing to avoid purchases by another service, and other retention stocks should be considered for cross-sharing as well. Increases in the wholesale ammunition stockpile due to returns of massive amounts of munitions from Europe and Operation Desert Storm, combined with a decrease in the wholesale stockpile’s workforce, have created a situation that could, if allowed to continue, degrade the forces’ readiness to meet wartime and peacetime needs. 
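As a quick consistency check, the Army stratification figures cited above fit together arithmetically: the acquisition objective, the retention levels, and the potential excess sum to the total assets. The sketch below (dollar figures taken from the stratification data; the calculation itself is only illustrative) makes the relationship explicit.

```python
# Illustrative cross-check of the Army's September 30, 1994, asset
# stratification of conventional ammunition (dollars in billions).
total_assets = 18.7           # total conventional ammunition assets
acquisition_objective = 13.3  # authorized acquisition objective
retention_levels = 4.4        # various retention levels

# Whatever is above the acquisition objective and not held in a
# retention category is the potential excess.
potential_excess = total_assets - acquisition_objective - retention_levels

retention_share = retention_levels / total_assets  # roughly 23.5 percent
excess_share = potential_excess / total_assets     # roughly 5 percent

print(f"potential excess: about ${potential_excess:.1f} billion")
print(f"retention share:  {retention_share:.1%}")
print(f"excess share:     {excess_share:.1%}")
```

The computed retention share (about 23.5 percent) is close to the 23.7 percent reported, and the residual matches the roughly $1 billion of potential excess, or about 5 percent of total assets.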
Because the Army has placed a lower priority on funding ammunition functions, management of the stockpile has become a difficult task, and managers have had to concentrate on the receipt and delivery of ammunition to the detriment of inspections, tests, maintenance, storage, and disposal. During the summer of 1993, the Joint Ordnance Commanders Group’s study team assessed the management of the stockpile and found major deficiencies. The team predicted that unless something was done about the deficiencies, conditions would worsen. Our review confirmed that the stockpile’s condition and readiness have indeed been degraded. We found that ammunition was reported as serviceable when it might not have been because the single manager’s method of recording the condition of stock was misleading; that the condition of ammunition was often unknown because required inspections and testing had not been done; that top-priority ammunition was not serviceable because repairs had not been done; that ammunition was inefficiently stored, taxing facilities where space is at a premium; and that ammunition designated for disposal was accumulating faster than it could be eliminated. In 1994, the single manager developed the Integrated Ammunition Stockpile Management Plan to improve the poor condition of the wholesale ammunition stockpile. However, the single manager has made little progress toward improving the stockpile’s operations and readiness. Two factors beyond the single manager’s control hinder the success of implementing the plan: (1) the services’ lack of incentives to identify required and nonrequired items in the stockpile and (2) the uncertainty of sustained funding for the care, maintenance, and disposal of ammunition. None of the services, including the Army, has provided a list of required and nonrequired ammunition, and although funding increased in fiscal years 1995 and 1996, sustained funding to carry the plan to completion is not ensured. 
Because of the vast influx of ammunition from overseas in recent years and decreases in storage space, funding, and staff, the ability of the single manager to manage the stockpile has been taxed. As discussed in chapter 2, much of this ammunition is excess, old, and deteriorating but has not been removed from the inventory and is taking up valuable space. The single manager has concentrated on receiving and issuing ammunition and because of resource constraints has neglected the surveillance, maintenance, and disposal of ammunition. As a result, the condition of the stockpile is unknown. This situation degrades the overall readiness of the ammunition stockpile and could, if allowed to continue, degrade the forces’ readiness. As of March 1995, 59 percent of the ammunition tonnage and 223,293 of the services’ ammunition lots were classified as serviceable; the remaining 41 percent of the tonnage was unavailable for issue because it was unserviceable, suspended, or designated for disposal. Because of the lack of identification of required and nonrequired items, we could not determine serviceability statistics for required stocks. Of the services’ top-priority items (which make up 25 percent of the stockpile’s tonnage), about 71 percent were classified as serviceable, but 29 percent were termed unusable because they needed repair, could not be fixed, needed inspection, or were suspended from issue (see fig. 3.1). For example, motors for the MK66 2.75-inch rocket could not be issued as of March 1995 because 100 percent of them needed inspection. The condition of ammunition lots is identified by codes signifying that the ammunition is serviceable, unserviceable, or suspended. Lots in all conditions may also have defect codes indicating, for example, rust, paint needed, replacement of unserviceable components required, or nonhazardous/unserviceable/nonreparable. 
Of the lots classified as serviceable, 24 percent had at least one defect, and 1,752 lots (about 1 percent) were identified as nonhazardous/unserviceable/nonreparable. Of the services’ top-priority serviceable items, 19 percent had at least one defect. When the lots with defect codes are deducted from the serviceable tonnage, the portion of the stockpile classified as serviceable without defect is about 46 percent, and the portion of top-priority items classified as serviceable without defect is about 58 percent. One defect code indicates that an ammunition lot is overdue for periodic inspection by at least 6 months. Before 1990, overdue inspections were clearly indicated by changing the lot’s condition code, but the other services objected to this procedure, and the Army dropped it. Now, the condition code remains unchanged, and the defect code is added. According to one official, under this system, the lot’s condition does not look as bad as it really is, since the condition code is not changed. Even though defect codes are assigned to ammunition lots, the inventory records that item managers routinely use do not include them. Item managers must look up the lot number in an ammunition lot report to determine whether it has a defect. Because of personnel shortages, only a small percentage of overdue inspection codes is entered into the inventory database. Although stockpile officials’ statistics show that about 68,000 lots were past due for periodic inspections as of June 30, 1995, our analysis of stockpile data shows that only 6,609 lots had been coded as past due. Therefore, lots that appear to item managers as available for issue may, in fact, not be available. This situation creates a false impression of readiness, and issuance of ammunition could be delayed as a result. 
To ensure that requisitions can be speedily filled with usable ammunition, especially in wartime, the single manager must continually check the condition of ammunition items to ensure that they are ready for use and safely stored. Each stockpile installation is supposed to inspect ammunition periodically to ensure that items are serviceable, properly classified as to condition, and safe. Based on the expected rate of deterioration, ammunition is to be inspected every 2 to 10 years. For example, Army guidelines specify that blasting caps should be inspected every 2 years and small arms ammunition every 5 years. In addition, regular tests of ammunition are to be done, not only to ensure that all items are safe and reliable but also to identify those of marginal reliability or capacity and those due for maintenance or disposal. However, because of personnel and funding cuts, inspections and ammunition tests have fallen so far behind in recent years that the condition of many items, including the services’ top-priority items, is no longer known. As a result, stockpile readiness may be impaired. According to stockpile officials, a backlog of inspections has existed since the 1980s, when the lack of personnel precluded periodic inspections of unserviceable ammunition. However, the backlog has more than doubled since fiscal year 1989 (see fig. 3.2), largely because of the influx of material from Europe and Operation Desert Storm and the loss of inspection personnel. In fiscal year 1994, stockpile managers suspended periodic inspections for all but fast-moving items, and in fiscal year 1995, they concentrated instead on reducing the backlog of lots that were in an unknown condition. By fiscal year 2001, periodic inspections of more than 139,000 lots could be backlogged. Our analysis shows that the services’ priority items had not been treated any differently from lesser-priority items when periodic inspections were done. 
As of March 1995, the periodic inspections of 15 percent (4,444) of the services’ top-priority lots were past due, meaning the serviceability, condition, and safety of these priority items were questionable. This number is likely to be larger because the date for the next inspection for 22 percent (8,396) of these lots was not in the inspection database. Periodic inspections of top-priority items are important because these are the items the services need to be available (without defect) and ready for war. Because inspections cannot detect all deterioration of ammunition, lot samples are regularly taken for test-firing or examination at test facilities or laboratories. This effort includes several testing programs, including programs for small-caliber and large-caliber ammunition. According to stockpile officials, of all the testing programs, only the large-caliber program is backlogged. Stockpile management has concentrated its limited testing funds on such programs as small arms at the expense of the large-caliber program, which is a much more costly effort. The large-caliber program covers 129 items having a 5-year test cycle, 85 of which are war reserve stock; the remaining 44 are classed as substitutes and do not have a war requirement. As of September 1995, testing for 25 percent of the war reserve items and 59 percent of the substitutes was overdue. Officials predicted that, by fiscal year 1998, these backlogs could increase to 55 percent for war reserve items and to 84 percent for the substitutes. (See fig. 3.3.) In the 1993 report on the wholesale stockpile, the single manager stated that 27 percent of the services’ critical items for war, including the M830 120-millimeter cartridge and the M864 155-millimeter projectile, were unserviceable; that is, the items needed maintenance before they could be used, were missing components, or were earmarked for reclamation. 
As of March 1995, 18 percent of the services’ top-priority ammunition for war and training needed repair, and 2 percent was beyond repair. Because of the backlog in inspections and tests of ammunition, however, the full extent of unserviceable items in the stockpile today is uncertain. As long as managers lack accurate information on the condition of stored items, effective planning and performance of maintenance are problematic. More important, the failure to maintain ammunition in good condition could affect the services’ ability to meet wartime requirements. Repairs and maintenance of ammunition in storage are important not only to sustain readiness but also to save funds, since an unserviceable item can be repaired, on average, for 10 to 12 percent of the cost of a new item. The single manager estimates that the average cost to repair a ton of ammunition is $800. Using that estimate, about $99 million would be needed to repair the 18 percent of top-priority ammunition currently known to need repair. The estimated cost to purchase new items could be as much as $826 million. Several factors contribute to the inefficient use of storage space. These factors include the loss of storage space due to downsizing, the addition of ammunition from Europe and Operation Desert Storm, the retention of ammunition that is unusable or awaiting disposal, and the proliferation of fragmented (broken up) lots of ammunition. As a result of these factors, some usable ammunition is stored outside when it should be stored inside. Since 1988, the storage space for ammunition has been drastically reduced. Storage space was reduced by 6 million gross square feet when four installations were closed based on the recommendations of the 1988 Base Realignment and Closure Commission. 
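The repair-versus-replacement figures reported earlier in this chapter can be checked with simple arithmetic. In the sketch below, the $800-per-ton repair estimate and the $99 million and $826 million totals come from the report; the implied tonnage and cost ratio are derived here purely for illustration.

```python
# Illustrative check of the repair-cost estimate for top-priority
# ammunition known to need repair (figures from the report).
repair_cost_per_ton = 800       # single manager's average estimate, dollars
repair_cost_total = 99_000_000  # estimated cost to repair the items
new_cost_total = 826_000_000    # estimated cost to buy the items new

# Tonnage implied by the single manager's per-ton estimate.
implied_tons = repair_cost_total / repair_cost_per_ton

# Repair cost as a share of replacement cost; the report cites a
# typical range of 10 to 12 percent.
repair_share = repair_cost_total / new_cost_total

print(f"implied tonnage needing repair: {implied_tons:,.0f} tons")
print(f"repair cost vs. new purchase:   {repair_share:.0%}")
```

The implied ratio, about 12 percent, is consistent with the 10-to-12-percent rule of thumb cited above.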
As of September 1995, over 80 percent of the stockpile installations’ net storage space of 26.1 million square feet was full, and that space will be reduced by about 16 percent when the Sierra, Seneca, and Savanna storage areas are closed, as recommended by the 1995 Base Realignment and Closure Commission. In addition to dealing with less space, storage facilities had to accommodate a vast amount of ammunition returned from abroad after Operation Desert Storm and from bases closing in Europe. Ammunition storage space will soon become even more cramped as ammunition use declines through force reductions and the stockpile receives another 113,000 tons of ammunition from Europe in fiscal year 1996. Due to the inefficient storage of ammunition, some serviceable items that should be stored inside were stored outside, while material with less demanding storage requirements occupied high-explosive storage areas. For example, serviceable high-explosive items were stored outside, while inert material was stored in about 600,000 square feet of structures designed to house high-explosive and small arms items. Also, serviceable Maverick, Patriot, and Hawk missiles, which should be stored inside, were stored outside at one depot. (Fig. 3.4 shows Maverick missiles stored outside.) Stored alongside the serviceable ammunition at installations were items that were beyond repair or designated for disposal, and these occupied considerable space. As of September 1995, 12 percent, or 3.2 million square feet, of the stockpile’s storage capacity was occupied by stocks designated as beyond repair or for disposal. For example, about 300,000 tons of items designated for disposal were stored inside at an annual cost of about $8 million and occupied nearly 2.8 million square feet. Aggregated, these stocks would fill at least two storage installations that could instead be used to store serviceable stocks. We found the following examples of individual types of ammunition with questionable needs. 
In one case, 251,000 propelling charges (for 155-millimeter guns) that had been condemned but not designated for disposal were taking up 36,031 square feet (see fig. 3.5). In another case, 715 unserviceable Nike Hercules rocket motors with no requirements occupied 31,212 square feet. One depot was storing 458 of these items, some of which were manufactured in 1959. According to an official there, these rocket motors occupied 16 to 20 storage sites at that depot (see fig. 3.6). Two types of 3-inch/50-caliber gun ammunition occupied about 15,000 square feet, even though the Navy no longer has any weapon in its active inventory that uses this ammunition. According to an official at one installation, this ammunition has had few or no issues in 15 years. In yet another case, 5,382 Navy MK25 mines that appeared in the Army’s wholesale inventory database as belonging to the Navy did not appear in the Navy’s inventory database and were occupying 49,552 square feet. About 2,200 (40 percent) of these mines had been suspended because their condition was unknown. We noted that some of these mines at one installation were manufactured in 1954 and that, at another installation, none of these mines had moved in over 10 years (see fig. 3.7). The proliferation of small, fragmented lots of ammunition also impedes the efficient management and use of ammunition storage space. According to the 1993 report on the wholesale stockpile, about 32,000 fragmented lots were stored, largely because of base closures and the return of ammunition from Europe and Operation Desert Storm. Installations were forced to store the returned ammunition without knowing whether additional quantities of the same lots would be received. These lots were often stored in more than one location. To optimize storage space and reduce inventories and surveillance, ammunition from the same lot in the same condition should be located in one storage structure when possible. 
If personnel have to fill requisitions from several locations, response time is delayed and issue costs increase. Our analysis shows that since October 1993, the number of fragmented lots in the stockpile has increased 14 percent. These lots—some of which were stored in more than three structures—occupy 24 percent (5.9 million square feet) of the total storage space (see fig. 3.8). Fragmented lots can be reduced by selecting them first when filling requisitions, through either an automated or a manual lot selection process. As storage space has been significantly reduced and ammunition has been added, the disposal of excess, obsolete, and unusable ammunition has become crucial. (See fig. 3.9 for ammunition disposal operations.) As of September 1995, nearly 375,000 tons of ammunition items designated for disposal remained stored in the stockpile. According to single manager officials, the amount of ammunition designated for disposal has increased and is likely to increase further. Also, in recent years, the amount of ammunition identified for disposal has greatly exceeded the amount disposed of. Ammunition designated for disposal from fiscal years 1986 through 1995 amounted to 681,000 tons, while the amount eliminated was 390,000 tons (see fig. 3.10). Storage installations and contractors execute the ammunition disposal program. Before an item is earmarked for disposal, other options—sales, transfers, and reuse—are explored. According to single manager officials, foreign military sales have not proved a successful means of disposing of excess ammunition because foreign countries buy new, rather than obsolete, items if they have the means to do so. Currently, the primary means of disposing of ammunition is open burning or detonation. Greater emphasis, however, is being placed on the resource recovery and recycling method of disposal, even though this will increase costs. 
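The mismatch between the disposal workload generated and the disposal work completed can be quantified from the tonnage figures above. The sketch below (tonnages from the report; the annual averages are derived here for illustration) shows that the backlog grew by roughly 29,000 tons a year over the decade.

```python
# Illustrative look at disposal throughput, fiscal years 1986-1995
# (tonnage figures from the report).
years = 10                 # FY 1986 through FY 1995
designated_tons = 681_000  # ammunition designated for disposal
eliminated_tons = 390_000  # ammunition actually eliminated

# Net growth of the disposal backlog over the decade and per year.
net_backlog_growth = designated_tons - eliminated_tons
avg_annual_growth = net_backlog_growth / years

print(f"net backlog growth: {net_backlog_growth:,} tons")
print(f"average per year:   {avg_annual_growth:,.0f} tons")
```

At that net rate, the nearly 375,000 tons awaiting disposal as of September 1995 would continue to grow unless disposal capacity increases.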
In 1994, the single manager developed the Integrated Ammunition Stockpile Management Plan to improve the poor conditions found in the wholesale ammunition stockpile. The plan proposes specific actions to achieve, by 2001, a smaller, safer ammunition stockpile by changing operations and optimizing space with fewer installations and staff. However, except in its inventorying of ammunition, the single manager has not substantially improved the operations and readiness of the wholesale ammunition stockpile. The single manager cannot ensure success in implementing the plan and managing the stockpile until the Army and other services identify their ammunition as required and nonrequired, but the services have no incentives to do so. Successful implementation of the plan also depends on sufficient funding being provided for the care, maintenance, and disposal of stockpile items. The Congress established a minimum funding level in fiscal year 1995, and the conferees on the DOD appropriations act established a funding minimum for fiscal year 1996 for the care and maintenance of ammunition. Also, the House Committee on Appropriations, in its report on DOD’s fiscal year 1995 appropriations, said it expects DOD to fund disposal activities at a level that will decrease the disposal backlog to a sustainable level of about 100,000 tons early in the next century. The single manager has greatly improved its inventory records, a critical function previously identified as seriously degraded. In 1995, the single manager inventoried the entire wholesale stockpile at a cost of $14 million. This inventory restored the accuracy of item locations and quantities in the stock records. It also introduced major changes in the inventory process to focus on the accuracy of quantities within storage sites. It did not, however, assess condition. Once a site is physically inventoried, it is sealed and no longer subject to a yearly inventory unless activity affects its stock balance. 
To ensure that stock balances are correct, 10 percent of all sealed locations will be sampled annually. This new process is intended to reduce the inventory workload, freeing staff for other duties. The single manager has also taken steps to improve the stockpile’s operations, as planned. For example, it has consolidated some small, fragmented lots of material and redistributed them within warehouses and has removed some items from inappropriate storage. Storage installations in fiscal years 1994 and 1995 freed about 800,000 square feet of space. In addition, the single manager has adopted a priority system to ensure that required war reserve and training items receive maintenance first. Quarterly reviews will focus on the most urgent maintenance needs. At all six storage installations we visited, officials either were unaware of any progress made or had not detected any change in operations resulting from the single manager’s “tiering” concept, which relies on each service’s categorization of its ammunition as required and nonrequired. The problem is that neither the Army nor the other services have identified stock in those categories. The single manager’s three-tier concept is designed to ensure that the more critical ammunition is stored in depots capable of providing the quickest response to mobilization. Four tier I depots would contain mostly required items needed in the first 30 days of mobilization, items needed for training, and items needed beyond 30 days to augment tier II and III depots’ war reserve stocks. Tier I depots would receive all support necessary for storage, surveillance, inventories, maintenance, and disposal. Tier II depots would normally store war reserves needed more than 30 days after mobilization, production offset items, and some nonrequired stocks awaiting disposal. Tier III depots would be caretakers for items awaiting disposal or relocation. 
The single manager has not aggressively pursued the services’ efforts to identify stock as required and nonrequired, and the single manager does not know the priority the services place on each type of ammunition. As a result, surveillance, maintenance, storage, and inventories may not be focused on priority stock to ensure it is ready for shipment when needed, and scarce resources may be spent on items with low or no priority. During our review, we found that the Army had not fully complied with the single manager’s plan to identify ammunition, and the other services may not fully understand the stockpile’s definition of required and nonrequired ammunition. Some attempts were made to generate the necessary data, but the services did not provide sufficient detail. In 1993, the Air Force classified serviceable high-priority items as tier I, unserviceable items as tier III, and all others as tier II, but it did not know whether the items in tiers I and II were required and the items in tier III were nonrequired. Officials said that the single manager did not ask for the information by required and nonrequired categories. In 1994, the Navy provided tonnage data to the single manager by types of ammunition, which in a general sense categorizes items into tiers. Navy officials could not recall being requested to categorize ammunition as required or nonrequired, and they noted that the wholesale stockpile manages only 13 percent of the Navy’s ammunition inventory. Most of the Navy assets are stored aboard ships and at naval weapon stations, which they consider to be tier I and II locations. Marine Corps officials said they had not been required by the single manager to categorize items as required or nonrequired. During our review, we found that for inspection purposes, the Army had assigned a priority to each type of ammunition that can be used to identify required and nonrequired ammunition. 
The priorities range from ammunition needed for training and war reserve to ammunition for which there is no formal requirement. The single manager requested that the other services concur with these priority definitions. The Marine Corps responded; however, the Navy and the Air Force have not responded to this request, and the single manager cannot require the services to provide this information. The single manager is concerned that it will not consistently have sufficient funds through 2001 to implement its $2.7 billion plan to restore the stockpile to a usable condition and dispose of unneeded ammunition. The single manager uses operation and maintenance (O&M) funds for receipts and issuance, inventories, and surveillance of ammunition and procurement appropriations for disposal of excess, obsolete, and unsafe ammunition. The O&M funding allocated by the Army for inventories, storage, and surveillance has historically been less than needed by the single manager and has not yet been provided to implement the single manager’s plan. Therefore, the single manager has made little progress in correcting stockpile problems. Moreover, the progress made in correcting inventory records in 1995 may be jeopardized because funding allocated by the Army is insufficient to maintain the accuracy of the records. According to the single manager, to successfully carry out its plan and restore stockpile readiness, it must have consistent full funding over several years for stockpile activities. The plan was based on near-term funding levels, beginning in fiscal year 1996, and it projected full implementation by fiscal year 2001. However, actual funding for fiscal years 1996 and 1997 was less than required, which, according to the single manager, postponed implementation of the plan by 2 years—from 2001 to 2003. 
Moreover, because of limited staff at stockpile installations, large funding levels in any given year will not enable the single manager to catch up—a lost year will add an additional year to fully implement the plan. For fiscal year 1995, the Congress statutorily required that a minimum of $388.6 million of the Army’s 1995 O&M account be spent specifically for the safety and security, receipt and issue, efficient storage and inventory, surveillance, and other activities associated with conventional ammunition. For fiscal year 1996, the conferees on the DOD appropriations act directed that a minimum of $300.9 million be spent for the same purpose. According to single manager officials, setting a minimum is a good approach because funding levels are consistent and better planning and management decisions can be made. The House Committee on Appropriations report on the 1995 DOD appropriations stated that it expects the Army to fully fund ammunition activities in future budget submissions. It also commended DOD for increasing its budget for disposal activities to $95 million for fiscal year 1995, and it recommended funding of $110 million and stated the expectation that DOD would continue this level of funding in future budgets. In its 1994 plan to improve stockpile management, the single manager set a goal to reduce the 423,000 tons of ammunition awaiting disposal to 100,000 by fiscal year 2004. The three interrelated factors to accomplish this goal are anticipated disposal quantities between fiscal years 1996 and 2004, the actual disposal funding, and the average cost to destroy a ton of ammunition. In March 1996, the Army estimated that 685,900 tons—more than triple the 1994 single manager’s estimate of 225,000 tons—will be generated between fiscal years 1996 and 2004. 
This estimate does not include 98,834 tons (85,733 tons of industrial stocks and 13,101 tons of tactical missile and large rocket motor assets) that will be generated but have other sources of disposal funding. If the single manager receives $100 million a year through fiscal year 2004 for disposal, and the disposal cost is no more than $909 a ton, the single manager will meet its goal of reducing the backlog to 100,000 tons. The single manager recognizes that it will be difficult to meet this goal because it relies on a significant level of funding and the cost to dispose of ammunition may increase. Therefore, the goal will not be met if the single manager does not receive $100 million a year or if the disposal cost per ton increases. For example, if the average cost per ton is $1,100, the disposal backlog will be over 239,000 tons at the end of fiscal year 2004. Likewise, if the cost is $1,300 a ton, the backlog will be over 365,000 tons. The disposal stockpile most likely will grow even more as ammunition quantities excess to service requirements are identified (see ch. 2). Moreover, the single manager is concerned that the disposal program will suffer from funding cuts, personnel shortages, and low priority. If the past is any indication, the single manager may be correct. During fiscal years 1986-94, funding for disposal totaled $266 million, considerably less than the $695 million the single manager estimated was needed to operate at maximum capacity. The disposal of obsolete and deteriorated ammunition is a time-consuming and expensive process. At the installation with the largest disposal capacity, 1,300 tons of ammunition were destroyed at a cost of about $1 million during the week we visited. Additionally, the lack of Army funding has affected the single manager’s ability to operate disposal facilities at full capacity. 
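The backlog arithmetic above can be sketched in a few lines. The starting backlog of roughly 372,000 tons entering fiscal year 1996 and the nine funding years (fiscal years 1996 through 2004) are our assumptions, chosen only to be consistent with the figures reported here; they are not numbers the single manager published in this form.

```python
START_BACKLOG_TONS = 372_000   # assumed backlog entering FY 1996 (our assumption)
GENERATED_TONS = 685_900       # Army's March 1996 estimate, FY 1996-2004
ANNUAL_FUNDING = 100_000_000   # $100 million per year
YEARS = 9                      # FY 1996 through FY 2004 (our assumption)

def backlog_at_2004(cost_per_ton):
    """Tons still awaiting disposal at the end of FY 2004."""
    tons_disposed = YEARS * ANNUAL_FUNDING / cost_per_ton
    return START_BACKLOG_TONS + GENERATED_TONS - tons_disposed

for cost in (909, 1_100, 1_300):
    print(f"${cost}/ton -> about {backlog_at_2004(cost):,.0f} tons remaining")
```

Under these assumptions the sketch reproduces the report’s pattern: the backlog falls below the 100,000-ton goal at $909 a ton but climbs to roughly 240,000 and 366,000 tons at $1,100 and $1,300 a ton, respectively.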
Although the estimated disposal capacity is over 100,000 tons of ammunition per year, the single manager has not been able to fully fund this function. Prior to 1995, the greatest amount disposed of was 61,500 tons in 1992; only 11,700 tons were disposed of in 1990. For example, one installation that can process 27,800 tons of ammunition annually had been allocated only 19,200 tons for disposal in fiscal year 1995. Another installation with a capacity to dispose of about 35,900 tons had been allocated only about 3,800 tons in fiscal year 1994. The single manager plans to gradually decrease its reliance on open burning/detonation of ammunition because environmental regulations have made these methods difficult and undesirable. Currently, however, open burning/detonation is the only cost-effective method of disposal for some items, such as cluster bombs and large rocket motors. Nonetheless, the single manager plans to increase disposal through resource recovery and recycling methods. These methods are more costly—over $2,000 per ton, more than twice the cost of open burning/detonation. Should the cost per ton to dispose of ammunition approach this higher level, the backlog would increase significantly. DOD concurred that problems with ammunition stockpile management threaten readiness. DOD noted that funding levels in fiscal years 1993 and 1994 were so low as to force concentration on shipments and receipts at depots. DOD said that during this period surveillance, stockpile reliability testing, and priority maintenance projects were severely limited. DOD agreed that defect codes had not been entered for all items with overdue inspections but said inspections are performed prior to issuance of any item. DOD also said that during the first quarter of fiscal year 1996, significant progress was made toward prioritizing ammunition items and identifying those that satisfy power projection and training requirements. 
Based on the new priorities, periodic inspection backlogs were adjusted and reduced from approximately 60,000 lots to approximately 30,000 lots with the identification of the required part of the stockpile. We strongly support identifying what is needed for power projection and training and concentrating limited resources on these ammunition items. We believe that DOD’s observation that periodic inspection needs were reduced from 60,000 to 30,000 lots is indicative of potential reductions that can be made in the care and maintenance functions of the single manager. DOD partially concurred that the single manager’s plan for improvement has been delayed. DOD said that while funding has been problematic, DOD does not believe that the implementation of the improvements in ammunition management will be delayed. DOD said the overall goal of the Integrated Ammunition Stockpile Management Plan is to accomplish (1) depot tiering by 2001 and (2) the other changes in stockpile management as soon as possible. With the closure of three depots, DOD expects to accomplish the tiering goal on schedule. DOD notes that the two major requirements to implement the management plan are adequate funding and segregation of the stockpile. We agree that these are important. We are particularly concerned that the identification of required ammunition, such as for power projection and training, be done as quickly as possible so that the single manager can better use limited resources. We are also particularly concerned that unless funding levels and ammunition disposal are closely monitored, the single manager will not meet its 2004 disposal goal. Unquestionably, the single manager faces difficulties in resolving problems that developed with the wholesale stockpile as the Cold War ended. These difficulties stem from DOD’s downsizing of its force and facilities in response to the much reduced threat. 
Reductions in ammunition storage space and the workforce, coupled with the return of massive amounts of ammunition from closed bases in Europe and from Operation Desert Storm, have degraded the single manager’s ability to manage the stockpile. In addition, this ammunition was returned in small, broken lots that were stored haphazardly as they came from overseas. Partly as a result of this situation, half of the ammunition types in the stockpile contain items in excess of stated requirements, which we estimated to be valued at about $31 billion. This $31 billion of usable and unusable ammunition, as well as $2.9 billion of excess ammunition that was on the single manager’s inventory records but not the services’ inventory records, was being treated by the single manager as necessary to meet requirements. Because the single manager has concentrated on responding to requests for usable ammunition, inspections and tests of ammunition have been delayed. The single manager does not know how much ammunition in excess of stated requirements is in the stockpile and is therefore unaware of what ammunition could be shared among the services to alleviate shortages and what unusable ammunition does not need attention beyond that for safety reasons. In addition, there are tremendous backlogs of ammunition to dispose of. For the foreseeable future, this disposable ammunition will increase and take up limited storage space. These problems are not insurmountable, but they will take time to overcome. The Integrated Ammunition Stockpile Management Plan is a step in the right direction. In addition, the minimum levels set for the care and maintenance of ammunition established by the Congress for fiscal year 1995 and the House Committee on Appropriations for fiscal year 1996 have helped the single manager in meeting its responsibilities. The single manager’s success in implementing the management plan is limited by the services’ lack of incentives to identify excess ammunition. 
The services are not inclined to determine which of their ammunition is required and declare the remainder excess because once ammunition is declared excess, a service is not reimbursed for its cost if another service wants it. Also, the services have no incentive to mark ammunition for disposal because they do not have to pay the single manager to store it. As the Joint Commanders Ordnance Group’s 1993 report points out, the single manager could charge the services a storage fee as an incentive for the services to relinquish ownership of excess, old, and obsolete ammunition. The report also suggested that additional storage space could be made available if excess ammunition was used in training, included in foreign military sales or grant aid programs, or destroyed. In addition, as we recommended in 1979, the single manager could own, manage, and control the entire ammunition stockpile. If this were the case, the manager would have visibility over ammunition in excess of established requirements and could distribute it to other services that need it or, if unneeded, dispose of it when there was no longer a reason to retain it. Another troublesome problem is the disposal of excess ammunition, which is a time-consuming, expensive process. For example, at the installation with the largest disposal capacity, 1,300 tons of ammunition were destroyed at a cost of about $1 million during the week we visited. With over 375,000 tons of ammunition awaiting disposal at the end of fiscal year 1995 and additional ammunition identified for disposal each year, it will take years to dispose of the ammunition. And because of the expense associated with disposing of this much ammunition, finding the funds to facilitate disposal is difficult. One option would be to require the services to include the cost to dispose of ammunition being replaced in budgets for new ammunition. 
While this option would not eliminate the significant quantities of ammunition already awaiting disposal, it would focus earlier attention on the ammunition disposal problem, provide additional funds for disposal, and over time significantly reduce the quantities for disposal. To impress upon the services the need to address the problem of excess ammunition, the Congress may wish to consider requiring the Secretary of Defense to report annually the amount of ammunition on hand and the amount that exceeds established requirements. This report could also cite progress made in addressing specific ammunition stockpile management problems, including identifying ammunition in excess of established requirements, cross-sharing of ammunition in excess of established requirements among services that have shortages, inspecting and testing ammunition, and disposing of excess ammunition when it no longer makes sense to retain it. With this information, the Congress could make more informed annual budget decisions related to the ammunition stockpile. To facilitate implementation of the single manager’s plan for storing, maintaining, and disposing of ammunition, we recommend that the Secretary of Defense develop incentives to encourage the military services to categorize their ammunition as required or as excess to stated requirements, to update this information annually, and to relinquish control of their excess ammunition to the Army single manager for distribution to other services that have shortages of ammunition or for disposal when it no longer makes sense to retain it. 
Possible changes in ammunition management include requiring the services to pay the single manager a fee for storing their ammunition; using excess ammunition in training; authorizing the single manager to own, manage, and control the wholesale stockpile and/or have visibility of the services’ retail stocks and total requirements so the manager can identify ammunition excess to stated requirements and coordinate redistribution of it to services that need the ammunition or dispose of it; and requiring the services to include the cost to dispose of excess ammunition in their budgets for new ammunition. DOD partially concurred with the matter for congressional consideration. DOD said it already provides the Congress with ammunition inventory data in the Supply System Inventory Report and demilitarization information in the procurement budget justifications. We are aware of this report and the information contained in it. However, as currently prepared, the inventory report does not provide any information on the amount of ammunition that exceeds established requirements. Also, information on stockpile management problems and progress in solving these problems is not provided. DOD disagreed with the recommendation and options given for potential changes in ammunition management. DOD stated that it considers the present arrangement for managing much of the services’ stockpile to be satisfactory. DOD stated it believes stockpile stratification and cross-sharing could be enhanced but does not consider incentives to be necessary to encourage compliance by the military services. Problems with cross-sharing among the services noted in our 1979 report continue. In addition, due to large quantities of ammunition in storage and a reduced workforce to manage this ammunition, problems with ammunition management threaten readiness. Therefore, we do not believe that existing DOD practices will solve the serious problems. 
The Integrated Stockpile Management Plan is a step in the right direction, yet all the services still have not identified required and nonrequired ammunition as called for in the 1994 plan. This is a very important part of this plan’s implementation. DOD disagreed with the options to require a storage charge or increase the single manager’s responsibilities. We agree other options are possible; those in our report are some potential options. However, we do not agree the present arrangement for managing the stockpile is working well and believe that existing DOD practices will not solve the problems. We are not advocating erosion of the centralized management of ammunition but are providing options to further strengthen ammunition management and provide incentives to the services to help the single manager operate more effectively. We continue to believe our recommendation is valid.
Pursuant to a congressional request, GAO reviewed the status of the Department of Defense's (DOD) ammunition stockpile, focusing on: (1) the amount of excess ammunition in the stockpile; and (2) problems related to the stockpile's management. GAO found that: (1) of the $80 billion in usable and unusable ammunition as of September 1994, about $31 billion was excess ammunition and about $22 billion was ammunition that was still usable; (2) the excess in usable ammunition is primarily due to the collapse of the Soviet Union and reduced U.S. military requirements; (3) while shortages of some specific ammunition types exist, the services generally have inventories that exceed their wartime and peacetime requirements; (4) in 1993 and 1994, the services spent about $125 million for ammunition that exceeded their fiscal year 1995 requirements; (5) the services have stored and continue to manage significant amounts of ammunition for weapons that are no longer in the active inventory; (6) increases in the ammunition stockpile and decreases in budget, workforce, and storage space could degrade the forces' readiness to meet wartime and peacetime needs; (7) DOD has not been able to conduct adequate ammunition testing and inspections to ensure the stockpile's usability and readiness; (8) DOD does not know the extent of excess ammunition stored at the services facilities; and (9) the ammunition stockpile will continue to grow until the services are given incentives to relinquish ownership of the ammunition and the single manager is provided with the funding and information necessary to expedite ammunition disposal.
Lead is unusual among drinking water contaminants in that it seldom occurs naturally in source water supplies like rivers and lakes. Rather, lead enters drinking water primarily as a result of the corrosion of materials containing lead in the water distribution system and in household plumbing. These materials include lead service pipes that connect a house to the water main, household lead-based solder used to join copper pipe, and brass plumbing fixtures such as faucets. The Safe Drinking Water Act is the key federal law protecting public water supplies from harmful contaminants. The Act established a federal-state arrangement in which states may be delegated primary implementation and enforcement authority (“primacy”) for the drinking water program. Except for Wyoming and the District of Columbia, all states and territories have received primacy. For contaminants that are known or anticipated to occur in public water systems and that the EPA Administrator determines may have an adverse impact on health, the Act requires EPA to set a non- enforceable maximum contaminant level goal (MCLG) at which no known or anticipated adverse health effects occur and that allows an adequate margin of safety. Once the MCLG is established, EPA sets an enforceable standard for water as it leaves the treatment plant, the maximum contaminant level (MCL). The MCL generally must be set as close to the MCLG as is “feasible” using the best technology or other means available, taking costs into consideration. The fact that lead contamination occurs after water leaves the treatment plant has complicated efforts to regulate it in the same way as most contaminants. In 1975, EPA set an interim MCL for lead at 50 parts per billion (ppb), but did not require sampling of tap water to show compliance with the standard. Rather, the standard had to be met at the water system before the water was distributed. 
The 1986 amendments to the Act directed EPA to issue a new lead regulation, and in 1991, EPA adopted the Lead and Copper Rule. Instead of an MCL, the rule established an “action level” of 15 ppb for lead in drinking water, and required that water systems take steps to limit the corrosiveness of their water. Under the rule, the action level is exceeded if lead levels are higher than 15 ppb in over 10 percent of tap water samples taken. Large systems, including WASA, generally must take at least 100 tap water samples in a 6-month monitoring period. Large systems that do not exceed the action level or that maintain optimal corrosion control for two consecutive 6-month periods may reduce the number of sampling sites to 50 sites and reduce collection frequency to once per year. If a water system exceeds the action level, other regulatory requirements are triggered. The water system must intensify tap water sampling, take additional actions to control corrosion, and educate the public about steps they should take to protect themselves from lead exposure. If the problem is not abated, the water system must annually replace 7 percent of the lead service lines under its ownership. The public notification requirements of the Safe Drinking Water Act are intended to protect public health, build trust with consumers through open and honest sharing of information, and establish an ongoing, positive relationship with the community. While public notification provisions were included in the original Act, concerns have been raised for many years about the way public water systems notify the public regarding health threats posed by contaminated drinking water. 
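The trigger described above reduces to a simple check: the action level is exceeded when more than 10 percent of tap water samples are above 15 ppb. A minimal sketch of that check (the function name and the synthetic sample data are ours):

```python
ACTION_LEVEL_PPB = 15.0

def exceeds_action_level(samples_ppb):
    """True when more than 10 percent of tap water samples
    are above the 15 ppb lead action level."""
    over = sum(1 for s in samples_ppb if s > ACTION_LEVEL_PPB)
    return over > 0.10 * len(samples_ppb)

# For 100 samples, 11 results above 15 ppb trips the trigger; 10 does not.
print(exceeds_action_level([20.0] * 11 + [5.0] * 89))  # True
print(exceeds_action_level([20.0] * 10 + [5.0] * 90))  # False
```

This is why a large system taking at least 100 samples can tolerate up to 10 high readings in a monitoring period before the rule’s follow-on requirements (intensified sampling, corrosion control, public education) are triggered.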
In 1992, for example, we reported, among other things, that (1) there were high rates of noncompliance among water systems with the public notification regulations in effect at that time and (2) notices often did not clearly convey the appropriate information to the public concerning the health risks associated with a violation and the preventive action to be taken. The 1996 Amendments to the Safe Drinking Water Act attempted to address many of these concerns by requiring that consumers of public water supplies be given more accurate and timely information about violations and that this information be in a form that is more understandable and useful. Drinking water is provided to District of Columbia residents under a unique organizational structure: The U.S. Army Corps of Engineers’ Washington Aqueduct draws water from the Potomac River and filters and chemically treats it to meet EPA specifications. The Aqueduct produces drinking water for approximately 1 million citizens living, working, or visiting in the District of Columbia, Arlington County, Virginia, and the City of Falls Church, Virginia. Managed by the Corps of Engineers’ Baltimore District, the Aqueduct is a federally owned and operated public water supply agency that produces an average of 180 million gallons of water per day at two treatment plants located in the District. All funding for operations, maintenance, and capital improvements comes from revenue generated by selling drinking water to the District of Columbia, Arlington County, Virginia, and the City of Falls Church, Virginia. The District of Columbia Water and Sewer Authority buys its drinking water from the Aqueduct. WASA distributes drinking water through 1,300 miles of water mains under the streets of the District to individual homes and buildings, as well as to several federal facilities directly across the Potomac River in Virginia. 
From its inception in 1938 until 1996, WASA’s predecessor, the District of Columbia Water and Sewer Utility Administration, was a part of the District’s government. In 1996, WASA was established by District of Columbia law as a semiautonomous regional entity. WASA develops its own budget, which is incorporated into the District’s budget and then forwarded to Congress. All funding for operations, improvements, and debt financing comes from usage fees, EPA grants, and the sale of revenue bonds. EPA’s Philadelphia Regional Office has primary oversight and enforcement responsibility for public water systems in the District. According to EPA, the Regional Office’s oversight and enforcement responsibilities include providing technical assistance to the water suppliers on how to comply with federal regulations; ensuring that the suppliers report the monitoring results to EPA by the required deadlines; taking enforcement actions if violations occur; and using those enforcement actions to return the system to compliance in a timely fashion. The District’s Department of Health, while having no formal role under the Act, is responsible for identifying health risks and educating the public on those risks. Providing safe drinking water requires that water systems, regulators, and public health agencies fulfill individual responsibilities yet work together in a coordinated fashion. It is particularly important that these entities report and communicate information to each other in a timely and accurate manner. In the case of drinking water in the District of Columbia, one of the key relationships is the one between WASA, the deliverer of water to District customers, and EPA’s Philadelphia Office, the regulator charged with overseeing WASA’s compliance with drinking water regulations. 
Of particular note, one of WASA’s key obligations is to monitor the water it supplies to District customers through a tap water sampling program, and to report these results accurately and in a timely manner to EPA’s Philadelphia Office. As EPA itself has noted, one of the Philadelphia Office’s key obligations is to ensure that WASA understands the reporting requirements and reports monitoring results by required deadlines. It is noteworthy that WASA and EPA have taken or agreed to take steps that are clearly intended to improve communication and coordination between the agencies. For example: Under the Consent Order signed by EPA and WASA on June 17, 2004, WASA agreed to improve its format for reporting tap water samples by ensuring that the reports include tap water sample identification numbers, sample date and location, lead and copper concentration, service line materials, and reasons for any deviation from previously sampled locations. The monitoring reports are also to include the laboratory data sheets, which contain the raw test data recorded directly by the laboratory. Under the Order, WASA also agreed to submit to EPA for comment a plan and schedule for enhanced information, database management, and reporting. The plan is to describe how monitoring reports will be generated, maintained, and submitted to EPA in a timely fashion. EPA’s Philadelphia Office has altered the way in which it will handle compliance data from WASA and the Washington Aqueduct. According to the office, compliance data from both water systems will now be sent to those in the Office responsible for enforcing the Safe Drinking Water Act, so as to separate the enforcement/compliance assurance function from the municipal assistance function. 
Aside from the tap water monitoring issue, EPA’s Philadelphia Office acknowledges that its oversight of WASA public notification and education efforts could have been better, noting that “In hindsight, EPA should have asked more questions about the extent, coverage and impact of DC WASA’s public education program, and reacted to fill the public education gaps where they were evident.” To address the problem, the Philadelphia Office reported on its website that it will have to make some improvements in the way it exercises its own oversight responsibilities. Suggested improvements include obtaining written agreement from WASA to receive drafts of education materials and a timeline for their submission, reviewing drafts of public education materials for compliance with requirements, as well as effectiveness of materials and delivery, and acquiring outside expertise to assist in evaluating outreach efforts. As our work continues, we will seek to examine (to the extent it does not conflict with active litigation) other ways in which improved coordination between WASA and EPA could help both agencies better fulfill their responsibilities. We will also examine interrelationships that include other key agencies, such as the Aqueduct and the D.C. Department of Health. We will also examine how other water systems in similar situations interacted with federal, state, and local agencies. These experiences may offer suggestions on how coordination can be improved among the agencies responsible for protecting drinking water in the District of Columbia. WASA is not the first system to exceed the action level for lead. According to EPA, when the first round of monitoring results was completed for large water systems in 1991 pursuant to the Lead and Copper Rule, 130 of the 660 systems serving populations over 50,000 exceeded the action level for lead. EPA data show that since the monitoring period ending in 2000, 27 such systems have exceeded the action level. 
As part of our work, we will be examining the innovative approaches some of these systems have used to notify and educate their customers. I would like to touch on the activities of two such systems, the Massachusetts Water Resources Authority and the Portland, Oregon, Water Bureau. Each of these systems has employed effective notification practices in recent years that may provide insights into how WASA, and other water systems, could improve their own practices. The Massachusetts Water Resources Authority (MWRA) is the wholesale water provider for approximately 2.3 million customers, mostly in the metropolitan Boston area. Under an agreement with the Massachusetts Department of Environmental Protection, monitoring for lead under the Lead and Copper Rule occurs in each of the communities that MWRA serves and the results are submitted together. Initial system-wide tap water monitoring results in 1992 showed a 90th percentile lead concentration of 71 ppb (meaning that 10 percent of its samples were at or above this level). According to MWRA, adjustments in corrosion control have led to a reduction in lead levels, but the 90th percentile lead concentration in MWRA’s service area has still been above the action level in four of the seven sampling events since early 2000. According to an MWRA official, the public education program for lead in drinking water is designed to ensure that all potentially affected parties within MWRA’s service area receive information about lead in drinking water. He noted, for example, that while the Lead and Copper Rule requires that information be sent to consumers in their water bills, the renters who make up a large portion of MWRA’s service area often do not receive water bills. Therefore, MWRA included information about lead in its consumer confidence report, which is sent to all mailing addresses within the service area. 
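The 90th percentile test mentioned above can be sketched in code. This is a simplified illustration, not the exact regulatory procedure: the actual Lead and Copper Rule calculation (40 CFR 141.80) has special cases (for example, for systems collecting fewer than five samples) that this sketch ignores, and the ceiling-rank rule and sample values below are assumptions for illustration.

```python
import math

ACTION_LEVEL_PPB = 15  # the Lead and Copper Rule action level for lead, in parts per billion

def ninetieth_percentile(samples_ppb):
    """Simplified 90th percentile: sort samples ascending and take the
    value at rank ceil(0.9 * n). The regulatory rule has additional
    special cases this sketch does not implement."""
    ordered = sorted(samples_ppb)
    rank = math.ceil(0.9 * len(ordered))  # 1-indexed rank
    return ordered[rank - 1]

def exceeds_action_level(samples_ppb):
    return ninetieth_percentile(samples_ppb) > ACTION_LEVEL_PPB

# Hypothetical monitoring round of 10 tap samples (ppb):
samples = [2, 3, 4, 5, 6, 7, 8, 9, 20, 25]
print(ninetieth_percentile(samples))   # 20
print(exceeds_action_level(samples))   # True -> public education obligations triggered
```

Under this sketch, one high sample out of ten is enough to push the 90th percentile over the action level, which is why a system can exceed the action level even when most sampled taps test low.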
Additionally, MWRA uses public service announcements, interviews on radio and television talk shows, appearances at city councils and other local government agency meetings, and articles in local newspapers to convey information. MWRA also conducted focus groups to judge the effectiveness of the public education program and continually makes changes to refine the information about lead in drinking water. An MWRA official also noted that MWRA focuses portions of its lead public education program on the populations most vulnerable to the health effects of lead exposure. For example, MWRA worked with officials from the Massachusetts Women, Infants and Children Supplemental Nutrition Program (WIC) to design a brochure to help parents understand how to protect their children from lead in drinking water. Among other things, the brochure includes the pertinent information in several foreign languages, including Spanish, Portuguese, and Vietnamese. The WIC program also includes information on how to avoid lead hazards when preparing formula. The Portland Water Bureau provides drinking water to approximately 787,000 people in the Portland metropolitan area, nearly one-fourth of the population of Oregon. Since 1997, the city has exceeded the lead action level 6 times in 14 rounds of monitoring. According to Bureau officials, the problem stems mainly from lead solder used to join copper plumbing and from lead in home faucets. Portland’s system has never had lead service lines, and the Water Bureau finished removing all lead fittings within the water system’s control in 1998. The Portland Water Bureau sought flexibility in complying with the Lead and Copper Rule. The state of Oregon allowed the Water Bureau to implement a lead hazard reduction program as a substitute for the optimal corrosion control treatment requirement of the Lead and Copper Rule. 
Portland’s lead hazard reduction program is a partnership between the Portland Water Bureau, the Multnomah County and Oregon State health departments, and community groups. According to Portland Water Bureau officials, the program consists of four components: (1) water treatment for corrosion control; (2) free water testing to identify customers who may be at significant risk from elevated lead levels in drinking water; (3) a home lead hazard reduction program to prevent children from being exposed to lead from lead-based paint, dust, and other sources; and (4) education on how to prevent lead exposure targeted to those at greatest risk from exposure. As the components suggest, the program is focused on reducing exposure to lead through all exposure pathways, not just through drinking water. For example, the Water Bureau provides funding to the Multnomah County Health Department’s LeadLine—a phone hotline that residents can call to get information about all types of lead hazards. Callers can get information about how to flush their plumbing to reduce their lead exposure and can request a lead sampling kit to determine the lead concentration in the drinking water in their home. The Water Bureau also provides funding for lead education materials provided to new parents in hospitals, for billboards and movie advertisements targeted to neighborhoods with older housing stock, and to the Community Alliance of Tenants to educate renters on potential lead hazards. Each of these materials directs people to call the LeadLine if they need additional information about any lead hazard. The Water Bureau evaluates the results of the program by tracking the number of calls to the LeadLine, and by surveying program participants to determine their satisfaction with the program and the extent to which the program changed their behavior. In January 2004, the Portland Water Bureau sent a targeted mailing to those residents most likely to be affected by lead in drinking water. 
The mailing targeted homes that were built during the years when lead-leaching solder was most likely to have been used and in which a child 6 years old or younger lived. Approximately 2,600 postcards were sent that encouraged residents to get their water tested for lead, learn about childhood blood lead screening, and reduce lead hazards in their homes. Water Bureau officials said that they obtained the information needed to target the mailing from a commercial marketing company, and that the commercial information was inexpensive and easy to obtain. In an ideal world, a water utility such as WASA would have several different types of information that would allow it to monitor the health of individuals most susceptible to the health effects of lead in drinking water. The utility would know the location of all lead service lines and homes with leaded plumbing (pipes, solder, and/or fixtures) within its service area. The utility would also know the demographics of the residents of each of these homes. With this information, the utility could identify each pregnant woman or child six years old or younger who would be most likely to be exposed to lead through drinking water. These individuals could then be educated about how to avoid lead exposure, and lead exposure for each of these individuals could then be monitored through water testing and blood lead testing. Unfortunately, WASA and other drinking water utilities do not operate in an ideal world. WASA does have some information on the location of lead service lines within its distribution area. Its predecessor developed an inventory of lead service lines in its distribution system in 1990 as part of an effort to identify sampling locations to comply with the Lead and Copper Rule. According to WASA officials, identifying the locations of lead service lines was difficult because many of the records were nearly 100 years old and some of the information was incomplete. According to this 1990 inventory, there were approximately 22,000 lead service lines. 
WASA updated the inventory in September 2003, and estimated that it had 23,071 “known or suspected” lead service lines. WASA subsequently identified an additional 27,495 service lines in the distribution system made of “unknown” materials. Consequently, there is some uncertainty over the actual number and location of the lead service lines in WASA’s distribution system. The administrative order that EPA issued in June 2004 requires WASA to further update its inventory of lead service lines. Regardless of the information WASA has about the location of lead service lines, according to WASA officials, WASA has little information about the location of customers who are particularly vulnerable to the effects of lead. The District’s Department of Health is responsible for monitoring blood lead levels for children in the District. Officials from the Department of Health told us that they maintain a database of the results of all childhood blood lead testing in the District, and have studied the distribution of blood lead levels in children on a neighborhood basis. However, according to a joint study by the D.C. Department of Health and the Centers for Disease Control and Prevention (CDC) published in March 2004, it is difficult to discern any effect of lead in drinking water on children’s blood lead levels because the older homes most likely to have lead service lines are also those most likely to have other lead hazards, such as lead in paint and dust. This joint study also described efforts by the Department of Health and the United States Public Health Service to conduct blood lead monitoring for residents of homes whose drinking water test indicated a lead concentration greater than 300 ppb. None of the 201 residents tested were found to have blood lead levels exceeding the levels of concern for adults or children, as appropriate. 
A good deal of research has been conducted on the health effects of lead, in particular on the effects associated with certain pathways of contamination, such as ingestion of leaded paint and inhalation of leaded dust. In contrast, the most relevant studies on the isolated health effects of lead in drinking water date back nearly 20 years—including the Glasgow Duplicate Diet Study on lead levels in children upon which the Lead and Copper Rule is partially based. According to recent medical literature and the public health experts we contacted, the key uncertainties requiring clarification include the incremental effects of lead-contaminated drinking water on people whose blood lead levels are already elevated from other sources of lead contamination and the potential health effects of exposure to low levels of lead. As we continue our work, we will examine the plans of EPA and other organizations to fill these and other key information gaps. Lead is a naturally occurring element that, according to numerous studies, can be harmful to humans when ingested or inhaled, particularly to pregnant and nursing women and children aged six or younger. In children, for example, lead poisoning has been documented as causing brain damage, mental retardation, behavioral problems, anemia, liver and kidney damage, hearing loss, hyperactivity, and other physical and mental problems. Exposure to lead may also be associated with diminished school performance, reduced scores on standardized IQ tests, schizophrenia, and delayed puberty. Long-term exposure may also have serious effects on adults. Ingested lead accumulates in bone, where it may remain for decades. However, stored lead can be mobilized during pregnancy and passed to the fetus. Other health effects in adults that may be associated with lead exposure include irritability, poor muscle coordination and nerve damage, increased blood pressure, impaired hearing and vision, and reproductive problems. 
There are many sources of lead exposure besides drinking water, including the ingestion of soil, paint chips and dust; inhalation of lead particles in soil or dust in air; and ingestion of foods that contain lead from soil or water. Extensive literature is available on the health impacts of lead exposure, particularly from contaminated air and dust. CDC identified in a December 2002 Morbidity and Mortality Weekly Report the sources of lead exposure for adults and their potential health effects. In a September 2003 Morbidity and Mortality Weekly Report, CDC identified the most prevalent sources of lead in the environment for children, and correlated high blood lead levels in children with race, sex, and income bracket. The surveys suggest that Hispanic and African-American children, as well as Medicaid recipients, are at highest risk for lead poisoning. Dust and soil contaminated by leaded paint were documented as the major sources of lead exposure. Children and adults living in housing built before 1950 are more likely to be exposed to lead paint and dust and may therefore have higher blood lead levels. Articles in numerous journals have reported on the physical and neurological health effects on children of lead in paint, soil, and dust. The New England Journal of Medicine published an article in April 2003 that associated environmental lead exposure with decreased growth and delayed puberty in girls. In 2000, an article in the Journal of Public Health Medicine examined the implications of lead-contaminated soil, its effect on produce, and its potential health effects on consumers. Lead can also enter children’s homes if other residents are employed in lead-contaminated workplaces. In 2000, an article in Occupational Medicine found that children of individuals exposed to lead in the workplace were at higher risk for elevated blood lead levels. 
EPA has aided in some similar research through the use of its Integrated Exposure Uptake Biokinetic Model for Lead in Children (IEUBK). This model predicts blood lead concentrations for children exposed to different types of lead sources. According to a number of public health experts, drinking water contributes a relatively minor amount to overall lead exposure in comparison to other sources. However, while lead in drinking water is rarely thought to be the sole cause of lead poisoning, it can significantly increase a person’s total lead exposure—particularly for infants who drink baby formulas or concentrated juices that are mixed with water from homes with lead service lines or plumbing systems. For children with high levels of lead exposure from paint, soil, and dust, drinking water is thought to contribute a much lower proportion of total exposure. For residents of dwellings with lead solder or lead service lines, however, drinking water could be the primary source of exposure. As exposure declines from sources of lead other than drinking water, such as gasoline and soldered food cans, drinking water will account for a larger proportion of total intake. Thus, according to EPA, the total drinking water contribution to overall lead levels may range from as little as 5 percent to more than 50 percent of a child’s total lead exposure. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of this Subcommittee may have at this time. For further information, please contact John B. Stephenson at (202) 512-3841. Individuals making key contributions to this testimony included Steve Elstein, Samantha Gross, Karen Keegan, Jessica Marfurt, and Tim Minelli. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Concerns have been raised about lead in District of Columbia drinking water and how those charged with ensuring the safety of this water have carried out their responsibilities. The 1991 Lead and Copper Rule (LCR) requires water systems to protect drinking water from lead by, among other things, chemically treating it to reduce its corrosiveness and by monitoring tap water samples for evidence of lead corrosion. If enough samples show corrosion, water system officials are required to notify and educate the public on lead health risks and undertake additional efforts. The Washington Aqueduct, owned and operated by the U.S. Army Corps of Engineers, treats and sells water to the District of Columbia Water and Sewer Authority (WASA), which delivers water to D.C. residents. EPA's Philadelphia Office is charged with overseeing these agencies. GAO is examining (1) the current structure and level of coordination among key government entities that implement the Safe Drinking Water Act's regulations for lead in the District of Columbia, (2) how other drinking water systems conducted public notification and outreach, (3) the availability of data necessary to determine which adult and child populations are at greatest risk of exposure to elevated lead levels, and what information WASA is gathering to help track their health, and (4) the state of research on the health effects of lead exposure. This testimony discusses GAO's preliminary observations and highlights areas for further examination; GAO will report in full at a later date. One of the key relationships in the effort to ensure the safety of the District's drinking water is the one between WASA, the deliverer of water, and EPA's Philadelphia Office, which oversees WASA's compliance with drinking water regulations. 
Recent public statements and corrective actions by these parties clearly indicate that coordination and communication between them could have been better in the years preceding the current lead controversy. GAO's future work will examine (to the extent appropriate) the interrelationships among other key agencies (such as the Aqueduct and the D.C. Department of Health); how other water systems in similar situations interacted with federal, state, and local agencies; and what the experiences of these other jurisdictions may suggest concerning how improved coordination can better protect drinking water in the District of Columbia. Other water systems facing elevated lead levels used public notification and education practices that may offer lessons for conducting outreach to water customers. For example, some of the practices of the two water systems we have begun to examine--the Massachusetts Water Resources Authority and the Portland (Oregon) Water Bureau--include tailoring their communications to varied audiences in their service areas, testing the effectiveness of their communication materials, and linking demographic and infrastructure data to identify populations at greatest risk from lead in drinking water. WASA faces challenges in collecting the information needed to identify District citizens at greatest risk from lead in drinking water. Specifically, WASA has partial information on which of its customers have lead service lines, and is in the process of obtaining more complete information. GAO's future work will examine the efforts of other water systems to go one step further by linking data on at-risk populations (such as pregnant mothers, infants, and small children) with data on homes suspected of being served by lead service pipes and other plumbing fixtures that may leach lead into drinking water. 
Nationally, much is known about the hazards of lead once in the body and how lead from paint, soil, and dust enters the body, but little research has been done to determine actual lead exposure from drinking water, and the information that does exist is dated. In our future work, we will examine the plans of EPA and other organizations to fill this key information gap.
Emissions of heat-trapping greenhouse gases are believed to contribute to global warming. Carbon dioxide, generated both naturally and by the burning of fossil fuels, accounts for the majority of emissions. According to administration representatives, the potential environmental, health, and economic consequences of increasing accumulations of greenhouse gas emissions are serious. For example, according to an Assistant Administrator of the Environmental Protection Agency (EPA), without significantly decreased emissions, over the long term, 15 percent or more of the nation’s coastal wetlands could be submerged, the quality of drinking water in certain states could be severely degraded, malaria and other infectious diseases could increase, and severe droughts and floods could increase personal and property damage. In October 1997, the President proposed a three-stage response to climate change, covering a period of 14 years. Stage 1 (1999-2003) is intended to put the nation “on a smooth path” to reducing greenhouse gases through research and development, tax credits for energy-efficient products, and eight other voluntary actions (listed in app. I). During stage 2 (2004-07), the results of stage 1 would be studied, and a system would be designed, and perhaps tested, for awarding and trading permits to emit greenhouse gases. In stage 3 (2008-12), mandatory limits on emissions would be put in place through a market-based domestic and international emissions trading system. Under the Kyoto Protocol, the United States agreed to limit its emissions during the 5-year period 2008 through 2012 to 7 percent below the 1990 emissions level. To achieve this new level, emissions would have to be cut by 31 percent by 2010 (the midpoint of the 5-year period), or the equivalent of about 552 million metric tons of carbon. 
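The arithmetic behind these figures can be checked with a quick back-solve. The projected 2010 baseline and implied target computed below are derived from the testimony's own numbers (a 552 million metric ton cut equaling a 31 percent reduction); they are illustrative values, not figures stated in the testimony:

```python
# Back-of-the-envelope check of the emissions arithmetic above.
# Stated in the testimony: cutting emissions 31 percent by 2010 is
# equivalent to about 552 million metric tons (MMT) of carbon.
required_cut_mmt = 552    # stated reduction, MMT of carbon
required_cut_pct = 0.31   # stated percentage reduction by 2010

# If 552 MMT is 31 percent of projected 2010 emissions, the projected
# business-as-usual level and the implied emissions target follow:
projected_2010_mmt = required_cut_mmt / required_cut_pct
implied_target_mmt = projected_2010_mmt - required_cut_mmt

print(round(projected_2010_mmt))  # ~1781 MMT projected for 2010
print(round(implied_target_mmt))  # ~1229 MMT implied target
```

The implied target is consistent with the protocol's requirement of 7 percent below the 1990 emissions level, given the growth in emissions projected for the intervening two decades.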
In February 1998, the administration submitted its budget for fiscal year 1999, including a request to add $6.3 billion over the 5 years of stage 1 to existing funding levels for climate change activities. The majority of this sum ($3.6 billion) was for tax incentives administered by the Department of the Treasury. The balance was designated for the Department of Energy (DOE) ($1.9 billion), EPA ($677 million), the U.S. Department of Agriculture ($86 million), the Department of Commerce ($38 million), and the Department of Housing and Urban Development ($10 million). According to an Office of Management and Budget (OMB) official, that office and seven other government entities will also be involved—the departments of Defense and State, the General Services Administration, the National Science Foundation, the Office of Science and Technology Policy, a White House task force, and the Council of Economic Advisers. In recent years, the Congress has emphasized the need for good planning practices to ensure that federal funds are spent effectively and has directed federal agencies to focus their planning efforts on the results to be achieved. The Government Performance and Results Act of 1993 requires, among other things, that federal agencies set program goals and measure their performance in achieving those goals. In doing this, agencies are to set annual performance goals that have objective, quantifiable, and measurable target levels and that focus on results to the extent possible. In addition, the act implies that federal programs attempting to achieve the same or similar results should be closely coordinated to ensure that goals are consistent and, as appropriate, program efforts are mutually reinforcing. 
To answer the three questions you asked us, we interviewed officials at DOE, EPA, and Treasury because of their responsibilities for stage 1 actions; we also reviewed budget documents, agencies’ strategic and performance plans, and other documents relating to their programs. In addition, we discussed the governmentwide scope of stage 1 efforts with OMB officials. Of the 10 proposed stage 1 actions, we selected 3 for detailed review because of their significant budgeted costs and our past work: (1) tax credits, (2) research and development, and (3) the increased use of energy-efficient products. These three actions account for nearly all of the requested $6.3 billion in additional funding. We did not attempt to determine the reasonableness of the administration’s cost estimates. We performed our review from January through June 1998 in accordance with generally accepted government auditing standards. The administration has several broad goals for what it wants to accomplish in stage 1 and a broad plan for accomplishing those goals. However, the administration has not established a quantitative goal for reducing greenhouse gas emissions by the end of stage 1—a primary focus of its initiative. Furthermore, while OMB officials acknowledge that the plan is broad, they have no specific time frame for preparing a more specific plan that would include overall performance goals and measures to meet the spirit of the Government Performance and Results Act. The administration’s goals and plan for accomplishing its goals are contained in the President’s October 1997 speech, according to OMB’s Office of Natural Resources, Energy and Science. 
There are at least three major goals, according to this office: (1) to spur energy efficiency and encourage the development and deployment of energy sources that produce lower levels of carbon, (2) to provide an immediate incentive for near-term action to reduce greenhouse gas emissions, and (3) to seek win-win solutions to reduce carbon emissions that can improve energy efficiency and save consumers money. However, the administration has not established a quantitative goal for reducing greenhouse gas emissions by the end of stage 1. According to OMB’s Associate Director for Natural Resources, Energy and Science, the administration expects to establish emissions reduction goals for stage 1 but has not yet done so because the effort is so new. He also pointed out that DOE and EPA have performance measures related to their respective activities. He said that OMB expects to continue coordinating and monitoring the efforts of individual agencies. While OMB officials acknowledge that the existing stage 1 plan is broad, they have no specific time frame for preparing a more detailed plan that would include overall performance goals and measures to meet the spirit of the Government Performance and Results Act. We believe a quantitative overall stage 1 goal, and a plan to implement that goal, are desirable primarily because the proposed federal response is extensive—involving 14 federal entities and budgeted to cost $6.3 billion in additional funding. Coordinated program efforts could help ensure that federal funds are used efficiently and could contribute to the overall effectiveness of the federal effort. The extent to which the $6.3 billion stage 1 proposal will help the United States meet the protocol’s target for reduced emissions is unclear. The largest investment under the proposal, tax credits, with an estimated cost of about $3.6 billion, has no estimate of the expected benefits and thus is not tied to the protocol’s emissions reduction target. 
The administration has set performance goals for most of the $2.7 billion proposed for research and development and the increased use of energy-efficient products and has estimated potential emissions reductions. However, DOE only recently provided its estimates, while commenting on a draft of this testimony, and we have not analyzed the method or assumptions used to support them. Such an assessment would require a detailed examination of DOE’s impact analysis for the technology sectors involved. In addition, EPA’s estimates may be overstated. Therefore, it is uncertain how much these activities will help the United States meet the target specified by the protocol. The administration has proposed a package of nine tax credits designed to accelerate the adoption of more energy-efficient technologies. Treasury will be responsible for administering the tax credits, which are estimated to cost $421 million in fiscal year 1999 and a total of $3.6 billion during stage 1. The credits are primarily intended to encourage more energy-efficient buildings, transportation, industrial processes, and electricity generation. However, the administration has not estimated the benefits that would result from the credits. According to the Deputy Assistant Secretary for Tax Analysis, official estimates of the benefits are being prepared but are not yet available. DOE is responsible for implementing most of the research and development activities under the administration’s climate change proposal. It plans to increase its spending to $1.06 billion for climate change research and development in fiscal year 1999, a $331 million increase in funding from the 1998 level. The $331 million increase, as well as the remaining $729 million, will continue to support and expand existing research and development programs in energy efficiency and renewable energy, as well as other programs related to climate change. 
Over the 5-year period, DOE estimates that it will increase spending for climate change research and development by about $1.9 billion. While DOE plans to spend over $1 billion for research and development in fiscal year 1999, the results of that spending are uncertain. Because the research and development efforts address multiple objectives, a senior DOE official told us that the agency’s performance goals do not specifically quantify the extent to which these activities could decrease greenhouse gas emissions. These multiple objectives include decreasing U.S. dependence on foreign oil, improving air quality, decreasing energy costs for consumers and businesses, increasing economic competitiveness, and decreasing greenhouse gas emissions, according to departmental officials. However, DOE recently provided us with estimates while commenting on a draft of this testimony. The Department’s estimates assume a continuation of its proposed fiscal year 1999 funding of approximately $1.06 billion per year during the 5-year period. DOE estimates reductions in carbon ranging from 31 million to 48 million metric tons by 2005; 87 million to 140 million metric tons by 2010; and 189 million to 338 million metric tons by 2020. Because we received the estimates so recently, we have not analyzed the method or assumptions used to support them. Such an assessment would require a detailed examination of DOE’s impact analysis for the technology sectors involved—renewable energy, transportation, industry, buildings, and federal energy use. Nonetheless, we are concerned that these estimates have not been expressed as performance goals and measures in DOE’s annual performance plan, where they would be useful in helping DOE benchmark its progress in this area. 
Furthermore, in our April 1998 report, we pointed out five common questions the Congress may want to consider before funding DOE’s proposed increase for research and development or any research and development: (1) Would the private sector do the research without federal funding? (2) Will consumers buy the product? (3) Do the benefits exceed the costs? (4) Have efforts been coordinated? (5) Have implementation concerns been addressed? In discussing these themes, we cited previous GAO reports—concerning DOE and other agencies—to illustrate these areas. The primary focus of EPA’s responsibilities under the climate change initiative is to increase the use of energy-efficient products. As with DOE’s research and development activities, EPA’s efforts will largely continue and expand ongoing activities. For fiscal year 1999, the agency is proposing to spend about $142 million in that effort; this is an increase of about $77 million over the previous year’s $65 million. EPA has specified performance goals for this action. The goals include reducing U.S. energy consumption by over 45 billion kilowatt-hours and reducing emissions by 40 million metric tons of carbon equivalent per year. However, the goals may overstate the potential results of EPA’s programs. In a 1997 report on selected voluntary climate change programs, which are now included in EPA’s portion of the Climate Change Technology Initiative, we found that, in some cases, EPA did not adjust reported reductions to take account of nonprogram factors that may have contributed to the reported results. For example, for the Green Lights Program (which is intended to encourage businesses and others to install energy-efficient lighting), we found that EPA did not take into account the fact that utility companies’ financial incentives and other factors may have induced participants to undertake some energy-saving activities. In commenting on our 1997 report, EPA said it would further study the programs’ impact. 
In commenting on a draft of this statement, an EPA official stated that the results of the further study support EPA’s position that it has adequately accounted for nonprogram factors in reporting results. We have not had an opportunity to review the basis for this statement.

Because stage 1 lacks a quantitative goal for reducing greenhouse gas emissions, does not have a specific performance plan, and contains incomplete information on expected outcomes and links to the protocol’s target, it may not provide a firm foundation for stages 2 and 3. The success of voluntary efforts in stage 1 would make it easier for the United States to adjust to the mandatory measures envisioned in stage 3 and to achieve the substantial reductions in emissions specified in the Kyoto Protocol. These mandatory measures would be implemented in the third stage (2008-12), when the protocol’s target must be reached. There may be penalties for noncompliance if the United States ratifies the protocol but does not reach the target, although the specific penalties have not been agreed upon. The various stage 1 actions are designed to stimulate the development and use of energy-efficient products and technologies, according to administration officials. In so doing, they are meant to improve the nation’s energy efficiency, thus reducing greenhouse gas emissions, and to smooth the transition to the mandatory measures that are to be implemented in stage 3. However, because there is no emissions reduction goal and only a broad plan for stage 1, it is not clear how the transition is to be accomplished. A number of factors, including the short time period for achieving the emissions reduction target, make an effectively planned and implemented stage 1 important. First, the United States would be required by the protocol to meet the emissions target during the 5-year period, 2008 through 2012. This time period coincides with stage 3 of the President’s proposal. Second, the projected growth in U.S. 
carbon emissions will make the protocol’s target challenging to meet, according to an April 1998 estimate by the Energy Information Administration. Taking into account both the growth expected from 1990 through 2010 and the protocol’s target of reducing emissions to 7 percent below the 1990 level, the United States will need to reduce its emissions by 31 percent in 2010. Finally, according to the Department of State, the protocol’s targets are binding on nations that enter into the accord, and noncompliance could eventually carry penalties. The parties are to begin discussing procedures for establishing penalties for noncompliance in Buenos Aires in November 1998. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions you may have.

The administration has outlined 10 actions in stage 1, listed below:
1. Tax cuts to spur energy efficiency and the development of lower-carbon energy sources.
2. Research and development to accomplish the same goals.
3. Use of energy-efficient products, through a broad-based effort to expand the use of existing energy-efficient technologies.
4. Credit for early action, to provide an immediate incentive for companies to take near-term actions to cut emissions.
5. Industry-by-industry consultations, for key industry sectors to prepare plans for reducing emissions.
6. Focus on federal procurement and energy use as a means to reduce greenhouse gas emissions from federal sources.
7. Electricity restructuring, to change the rules that can impede the introduction of cleaner technologies.
8. The setting of a concentration goal for greenhouse gases in the atmosphere.
9. Bilateral dialogues with key developing countries to promote clean energy.
10. Economics and science reviews.
Pursuant to a congressional request, GAO discussed: (1) the potential impact of efforts to comply with the Kyoto Protocol; (2) whether the administration has an overall goal for stage 1 and a plan for accomplishing that goal; (3) if funded, to what extent the $6.3-billion stage 1 climate change proposal will help the United States meet the protocol's emissions target; and (4) the implications for the United States if the Senate ratifies the protocol, given the current status of the administration's efforts to implement the climate change proposal. GAO noted that: (1) the administration has several broad goals for what it wants to accomplish in stage 1 and a broad plan for accomplishing them; (2) both the broad goals and plan are contained in the President's October 1997 speech; (3) the administration has not established a quantitative goal for reducing greenhouse gas emissions by the end of stage 1--a primary focus of its initiative; (4) while Office of Management and Budget officials acknowledge that the plan is broad, they have no specific timeframe for preparing a more detailed plan that would include overall performance goals and measures to meet the spirit of the Government Performance and Results Act; (5) the extent to which the $6.3-billion stage 1 proposal will help the United States meet the protocol's target for emission reductions is unclear; (6) the largest investment under the proposal, tax credits, with an estimated cost of about $3.6 billion, has no estimate of the expected benefits and thus is not explicitly tied to the protocol's target for emission reductions; (7) the administration has set performance goals for most of the $2.7 billion proposed for research and development and the increased use of energy-efficient products and has estimated potential emissions reductions; (8) the Department of Energy only recently provided its estimates, while commenting on a draft of this testimony, and GAO has not analyzed them; (9) in 
addition, the Environmental Protection Agency's estimates may be overstated; (10) therefore, it is uncertain how much these activities will help the United States meet the target specified by the protocol; (11) without an overall goal and plan for stage 1 and complete information on expected outcomes and links to the protocol's emission reduction target, it is uncertain whether stage 1 will effectively lay the foundation for the 31-percent emissions reduction required by the protocol; and (12) although the administration's response to the protocol is relatively recent, a firm foundation in stage 1 is important because the protocol's targets for emission reductions are binding on the nations that agree to the protocol, and penalties for noncompliance with the targets are to be discussed by the parties to the protocol in November 1998.
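The 31-percent figure cited above follows arithmetically from the protocol's target (7 percent below the 1990 emissions level) combined with the emissions growth EIA projected through 2010. A minimal sketch of that calculation follows; the baseline and projection values are illustrative assumptions chosen to be roughly consistent with EIA's 1998 projections, not numbers taken from this testimony:

```python
# Illustrative reconstruction of the ~31-percent reduction requirement.
# EMISSIONS_1990 and PROJECTED_2010 are assumed values, roughly in line
# with EIA's 1998 projections; they are not drawn from this document.

EMISSIONS_1990 = 1346.0   # million metric tons of carbon, assumed 1990 baseline
PROJECTED_2010 = 1803.0   # million metric tons of carbon, assumed 2010 projection

# Kyoto Protocol target for the United States: 7 percent below the 1990 level.
target = 0.93 * EMISSIONS_1990

# Reduction required in 2010, measured against the projected
# (business-as-usual) 2010 level rather than the 1990 level.
required_reduction = 1.0 - target / PROJECTED_2010

print(f"{required_reduction:.0%}")  # prints 31%
```

Under these assumed figures the required cut comes to about 31 percent, matching the figure GAO cites; different baseline or projection values would shift the result accordingly.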
According to the Coast Guard, the COP became operational in 2003 and comprises four elements:

Track data feeds: The primary information included in the Coast Guard’s COP is vessel and aircraft position information—or tracks—and descriptive information about the vessels, their cargo, and crew. Track information may be obtained from a variety of sources depending on the type of track. For example, the COP includes track information or position reports of Coast Guard and port partner vessels.

Information data sources: The information data sources provide supplementary information on the vessel tracks to help COP users and operational commanders determine why a track might be important. The COP includes data from multiple information sources that originate from the Coast Guard as well as from other government agencies and civilian sources.

Command and control systems: These systems collect, fuse, disseminate, and store information for the COP. Since the COP became operational in 2003, the Coast Guard has provided COP users with various systems that have allowed them to view, manipulate, and enhance their use of the COP. These systems have included the Global Command and Control System (GCCS), Command and Control Personal Computer (C2PC), and Hawkeye. In addition to the technology needed to view the COP, the Coast Guard has also developed technology to further enhance the information within the COP and its use to improve mission effectiveness. This has occurred in part through its former Deepwater Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) program system improvements.

COP management procedures: These procedures address the development and the use of the COP. 
This would include, for example, the Concept of Operations document, which identifies the basic components, use, and exchange of information included in the COP and the requirements document, which identifies the essential capabilities and associated requirements needed to make the COP function. These procedures also include other documents such as standard operating procedures on how the Coast Guard uses the COP, agreements with others using the COP on how information is to be shared or exchanged, and rules for how data are correlated and how vessels are flagged as threats or friends. Figure 1 depicts the Coast Guard’s vision of the COP with Coast Guard internal and external users. In April 2013, we reported that since the COP became operational in 2003, the Coast Guard has made progress in adding useful data sources and in increasing the number of users with access to the COP. In general, the COP has added internal and external data sources and types of vessel-tracking information that enhance COP users’ knowledge of the maritime domain. Vessel tracking information had been available previously to Coast Guard field units located in ports through a Vessel Tracking Service—that is, a service that provides active monitoring and navigational advice for vessels in confined and busy waterways to help facilitate maritime safety. However, adding it to the COP provided a broader base of situational awareness for Coast Guard operational commanders. For example, before automatic identification system (AIS) vessel-tracking information was added to the COP, only Coast Guard units specifically responsible for vessel tracking were able to easily track large commercial vessels’ positions, speeds, courses, and destinations. According to Coast Guard personnel, after AIS data were added to the COP in 2003, any Coast Guard unit could access such information to improve strategic and tactical decision making. 
In 2006, the ability to track the location of Coast Guard assets, including small boats and cutters, was also added to the COP. This capability—also known as blue force tracking—allows COP users to locate Coast Guard vessels in real time and establish which vessels are in the best position to respond to mission needs. Similarly, blue force tracking allows the Coast Guard to differentiate its own vessels from commercial or unfriendly vessels. Another enhancement to the information available in the COP was provided through the updating of certain equipment on Coast Guard assets that enabled them to collect and transmit data. Specifically, the Coast Guard made some data collection and sharing improvements, including the installation of commercial satellite communications equipment and AIS receivers, onboard its older cutters. This added capability made the COP information more robust by allowing Coast Guard vessels at sea to receive, through AIS receivers, position reports from large commercial vessels and then transmit this information to land units where it would be entered into the COP. This equipment upgrade on older Coast Guard cutters added information into the COP that is generally not available through other means. According to Coast Guard officials, in addition to adding information to the COP, the Coast Guard has also made the information contained in the COP available on more computers and on more systems, which, in turn, has increased the number of users with access to the COP. One of the key steps toward increasing the number of users with COP access occurred in 2004 with the implementation of C2PC, which made both the classified and unclassified COP available to additional Coast Guard personnel. According to Coast Guard officials, the advent of C2PC allowed access to the COP from any Coast Guard computer connected to the Coast Guard data network. Prior to C2PC, Coast Guard personnel had access to the COP through Coast Guard GCCS workstations. 
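To make the track-feed and blue-force-tracking concepts described above concrete, the following is a minimal sketch of how COP-style track records might be represented and filtered. Every detail in it (field names, vessel identifiers, positions) is hypothetical and does not reflect any actual Coast Guard system or data format:

```python
# Hypothetical sketch of COP-style track records and blue-force filtering.
# All identifiers, fields, and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Track:
    vessel_id: str      # hypothetical identifier
    lat: float          # latitude, degrees
    lon: float          # longitude, degrees
    speed_knots: float
    source: str         # e.g., "AIS" feed or "blue_force" position report
    friendly: bool      # True for the service's own (blue-force) assets

tracks = [
    Track("CG-25123", 29.95, -90.07, 12.0, "blue_force", True),
    Track("MV EXAMPLE", 29.90, -90.10, 8.5, "AIS", False),
]

# Blue-force tracking: isolate own assets so commanders can see which
# vessels are best positioned to respond to a mission need.
blue_force = [t for t in tracks if t.friendly]
print([t.vessel_id for t in blue_force])  # prints ['CG-25123']
```

The point of the sketch is only the separation of roles: AIS-style feeds contribute commercial traffic, while blue-force reports let the COP distinguish the service's own vessels from everything else.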
We previously reported that the Coast Guard has experienced challenges with COP-related technology acquisitions that resulted from the Coast Guard not following its own information technology acquisition guidance and processes. These challenges included poor usability and the inability to share information as intended, and ultimately resulted in the Coast Guard not meeting its goals for multiple COP-related systems. For example, four COP-related systems have been affected by the Coast Guard not closely following its acquisition processes. C4ISR project. The C4ISR project was designed to allow the Coast Guard’s newly acquired offshore vessels and aircraft to both add information to the COP using their own sensors as well as view information contained within the COP, thereby allowing these assets to become both producers and consumers of COP information. However, in July 2011, we reported that the Coast Guard had not met its goal of building the $2.5 billion C4ISR system. Specifically, we reported that the Coast Guard had repeatedly changed its strategy for achieving C4ISR’s goal of building a single fully interoperable command, control, intelligence, surveillance, and reconnaissance system across the Coast Guard’s new vessels and aircraft. Further, we found that not all aircraft and vessels were operating the same C4ISR system, or even at the same classification level, and hence could not directly exchange data with each other. For example, an aircraft operating with a classified system had difficulty sharing information with others operating on unclassified systems during the Deepwater Horizon oil spill incident. In addition, we reported at that time that the Coast Guard may shift away from a full data-sharing capability and instead use a system where shore-based command centers serve as conduits between assets while also entering data from assets into the COP. 
This approach could increase the time it takes for COP information gathered, for example, by a vessel operating with a classified system to be shared with an aircraft operating with an unclassified system. Because aircraft and vessels are important contributors to and users of COP information, a limited capability to quickly and fully share COP data could affect their mission effectiveness. We concluded that given these uncertainties, the Coast Guard did not have a clear vision of the C4ISR required to meet its missions. We also reported in July 2011 that the Coast Guard was managing the C4ISR program without key acquisition documents. At that time, the Coast Guard lacked an acquisition program baseline that reflected the planned program, a credible life-cycle cost estimate, and an operational requirements document for the entire C4ISR acquisition project. According to Coast Guard information technology officials, the abundance of software baselines could increase the overall instability of the C4ISR system and complexity of the data sharing among assets. We recommended, and the Coast Guard concurred, that it should determine whether the system-of-systems concept for C4ISR is still the planned vision for the program, and if not, ensure that the new vision is comprehensively detailed in the project documentation. In response to our recommendation, the Coast Guard reported in 2012 that it was still supporting the system-of-systems approach, and was developing needed documentation. We will continue to assess the C4ISR program through our ongoing work on Coast Guard recapitalization efforts. Development of WatchKeeper. Another mechanism that was expected to increase access to COP information was the DHS Interagency Operations Center (IOC) program, which was delegated to the Coast Guard for development. This $74 million program began providing COP information to Coast Guard agency partners in 2010 using WatchKeeper software. 
The IOCs were originally designed to gather data from sensors and port partner sources to provide situational awareness to Coast Guard sector personnel and to Coast Guard partners in state and local law enforcement and port operations, among others. Specifically, WatchKeeper was designed to provide Coast Guard personnel and port partners with access to the same unclassified geographic information system (GIS) data, thereby improving collaboration between them and leveraging their respective capabilities in responding to cases. For example, in responding to a distress call, access to WatchKeeper information would allow both the Coast Guard unit and its local port partners to know the location of all possible response vessels, so they could allocate resources and develop search patterns that made the best use of each responding vessel. In February 2012, we reported that the Coast Guard had increased access to its WatchKeeper software by allowing access to the system for Coast Guard port partners. However, the Coast Guard had limited success in improving information sharing between the Coast Guard and local port partners and did not follow its established guidance during the development of WatchKeeper—a major component of the $74 million Interagency Operations Center acquisition project. By not following its guidance, the Coast Guard failed to determine the needs of its users, define acquisition requirements, or determine cost and schedule information. Specifically, prior to the initial deployment of WatchKeeper, the Coast Guard had made limited efforts to determine port partner needs for the system. For example, we found that Coast Guard officials had some high level discussions, primarily with other DHS partners, but that port partner involvement in the development of WatchKeeper requirements was primarily limited to Customs and Border Protection because WatchKeeper had grown out of a system designed for screening commercial vessel arrivals—a Customs and Border Protection mission. 
However, according to the Interagency Operations Process Report: Mapping Process to Requirements for Interagency Operations Centers, the Coast Guard identified many port partners as critical to IOCs, including other federal agencies (e.g., the Federal Bureau of Investigation) and state and local agencies. We also determined that because few port partners’ needs were met with WatchKeeper, use of the system by port partners was limited. Specifically, of the 233 port partners who had access to WatchKeeper for any part of September 2011 (the most recent month for which data were available at the time of our report), about 18 percent had ever logged onto the system and about 3 percent had logged on more than five times. Additionally, we reported that without implementing a documented process to obtain and incorporate port partner feedback into the development of future WatchKeeper requirements, the Coast Guard was at risk of deploying a system that lacked needed capabilities, which would continue to limit the ability of port partners to share information and coordinate in the maritime environment. We concluded, in part, that the weak management of the IOC acquisition project increased the program’s exposure to risk. In particular, fundamental requirements-development and management practices had not been employed; costs were unclear; and the project’s schedule, which was to guide program execution and promote accountability, had not been reliably derived. Moreover, we reported that with stronger program management, the Coast Guard could reduce the risk that it would have a system that did not meet Coast Guard and port partner user needs and expectations. 
As a result, we recommended, and the Coast Guard concurred, that it collect data to determine the extent to which (1) sectors are providing port partners with WatchKeeper access and (2) port partners are using WatchKeeper; then develop, document, and implement a process to obtain and incorporate port-partner input into the development of future WatchKeeper requirements; and define, document, and prioritize WatchKeeper requirements. As of April 2013, we had not received any reports of progress on these recommendations from the Coast Guard. Coast Guard Enterprise Geographic Information System (EGIS). In April 2013, we also reported that Coast Guard personnel we interviewed who use EGIS—an important component, along with its associated viewer, for accessing COP information—stated that they had experienced numerous challenges with the system after it was implemented in 2009. Our site visits to area, district, and sector command centers in six Coast Guard field locations, and discussions with headquarters personnel, identified numerous examples of user concerns about EGIS. Specifically, the Coast Guard personnel we interviewed who used EGIS stated that it was slow, did not always display accurate and timely information, or degraded the performance of their computer workstations—making EGIS’s performance generally unsatisfactory to them. For example, personnel from one district we visited reported losing critical time when attempting to determine a boater’s position on a map display because of EGIS’s slow performance. Similarly, personnel at three of the five districts we visited described how EGIS sometimes displayed inaccurate or delayed vessel location information, including, for example, displaying a vessel track indicating a 25-foot Coast Guard boat was located off the coast of Greenland—a location where no such vessel had ever been. 
Personnel we met with in two districts did not use EGIS at all to display COP information because doing so caused other applications to crash. In addition to user-identified challenges, we reported in April 2013 that Coast Guard information technology (IT) officials told us they had experienced challenges largely related to insufficient computational power on some Coast Guard work stations, a lack of training for users and system installers, and inadequate testing of EGIS software before installation. For example, according to Coast Guard IT officials, Coast Guard computers are replaced on a regular schedule, but not all at once, and EGIS’s viewer places a high demand on the graphics capabilities of computers. They added that this demand was beyond the capability of the older Coast Guard computers used in some locations. Moreover, Coast Guard IT management made EGIS available to all potential users without performing the tests needed to determine if capability challenges would ensue. In regard to training, Coast Guard officials told us that they had developed online internal training for EGIS, and classroom training was also available from the software supplier. However, Coast Guard IT officials stated that they did not inform users that this training was available. This left users to learn how to use EGIS on the job. Similarly, the installers of EGIS software were not trained properly, and many cases of incomplete installation were later discovered. These incomplete installations significantly degraded the capabilities of EGIS. Finally, the Coast Guard did not pre-test the demands of EGIS on Coast Guard systems in real world conditions, according to Coast Guard officials. Tests conducted later, after users commented on their problems using EGIS, demonstrated the limitations of the Coast Guard network in handling EGIS. 
According to Coast Guard officials, some of these challenges may have been avoided if they had followed established acquisition processes for IT development. If these problems had been averted, users may have had greater satisfaction and the system may have been better utilized for Coast Guard mission needs. Poor communication by, and among, Coast Guard IT officials led to additional management challenges during efforts to implement a simplified EGIS technology called EGIS Silverlight. According to Coast Guard officials, the Coast Guard implemented EGIS Silverlight to give users access to EGIS data without the analysis tools that had been tied to technical challenges with the existing EGIS software. Coast Guard personnel from the Office of the Chief Information Officer (CIO) stated that EGIS Silverlight was available to users in 2010; however, none of the Coast Guard personnel we spoke with at the field units we visited mentioned being aware of or using this alternative EGIS option when asked what systems they used to access the COP. According to CIO personnel, it was the responsibility of the system sponsor’s office to notify users about the availability of EGIS Silverlight. However, personnel from the sponsor’s office stated that they were unaware that EGIS Silverlight had been deployed and thus had not taken steps to notify field personnel of this new application that could have helped to address EGIS performance problems. These Coast Guard officials were unable to explain how this communication breakdown had occurred. Coast Guard One View (CG1V). In April 2013, we reported that the Coast Guard had not followed its own information technology development guidance when developing its new COP viewer, known as Coast Guard One View, or CG1V. 
The Coast Guard reported that it began development of CG1V in April 2010 to provide users with a single interface for viewing GIS information, including the COP, and to align the Coast Guard’s viewer with DHS’s new GIS viewer. However, in 2012, during its initial development of CG1V, the agency did not follow its System Development Life Cycle (SDLC) guidance, which requires documents to be completed during specific phases of product development. Specifically, 9 months after CG1V had entered into the SDLC, the Coast Guard either had not created certain required documents or had created them outside the sequence prescribed by the SDLC. For example, the SDLC-required tailoring plan is supposed to provide a clear and concise listing of SDLC process requirements throughout the entire system lifecycle, and facilitate the documentation of calculated deviations from standard SDLC activities, products, roles, and responsibilities from the outset of the project. Though the SDLC clearly states that the tailoring plan is a key first step in the SDLC, for CG1V it was not written until after documents required in the second phase were completed. Coast Guard officials stated that this late completion of the tailoring plan occurred because the Coast Guard’s Chief Information Officer had allowed the project to start in the second phase of the SDLC, believing CG1V was a proven concept. However, without key phase one documents, the Coast Guard may have prematurely selected CG1V as a solution without reviewing other viable alternatives to meet its vision, and may have dedicated resources to CG1V without knowing project costs. In October 2012, Coast Guard officials acknowledged the importance of following the SDLC process and stated their intent to complete the SDLC-required documents. Clarifying the application of the SDLC to new technology development would better position the Coast Guard to maximize the usefulness of the COP. 
In our April 2013 report, we recommended that the Commandant of the Coast Guard direct the Coast Guard Chief Information Officer to issue guidance clarifying the application of the SDLC for the development of future projects. The Coast Guard concurred with the recommendation and reported that it planned to mitigate the risks of potential implementation challenges of future technology developments for the COP by issuing proper guidance and clarifying procedures regarding the applicability of the SDLC. The Coast Guard estimated that it would implement this recommendation by the end of fiscal year 2013. Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions. For questions about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Dawn Hoff (Assistant Director), Jonathan Bachman, Jason Berman, Laurier Fish, Bintou Njie, Jessica Orr, Lerone Reid, and Katherine Trimble. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To facilitate its mission effectiveness through greater maritime situational awareness, the Coast Guard developed its COP--a map-based information system shared among its commands. The COP displays vessels, information about those vessels, and the environment surrounding them on interactive digital maps. COP information is shared via computer networks throughout the Coast Guard to assist with operational decisions. COP-related systems include systems that can be used to access, or provide information to, the COP. This statement summarizes GAO's work on (1) the Coast Guard's progress in increasing the availability of data sources and COP information to users and (2) the challenges the Coast Guard has experienced in developing and implementing COP-related systems. This statement is based on GAO's prior work issued from July 2011 through April 2013 on various Coast Guard acquisition and implementation efforts related to the COP, along with selected updates conducted in July 2013. To conduct the selected updates, GAO obtained documentation on the Coast Guard's reported status in developing COP-related acquisition planning documents. The Coast Guard, a component of the Department of Homeland Security (DHS), has made progress in developing its Common Operational Picture (COP) by increasing the information in the COP and increasing user access to this information. The Coast Guard has made progress by adding internal and external data sources that allow for better understanding of anything associated with the global maritime domain that could affect the United States. The COP has made information from these sources available to more COP users and decision makers throughout the Coast Guard. For example, in 2006, the ability to track the location of Coast Guard assets, including small boats and cutters, was added to the COP. 
This capability--also known as blue force tracking--allows COP users to locate Coast Guard vessels in real time and establish which vessels are in the best position to respond to mission needs. In addition to adding information to the COP, the Coast Guard has also made the information contained in the COP available on more computers and on more systems, which, in turn, has increased the number of users with access to the COP. The Coast Guard has also experienced challenges in developing and implementing COP-related systems and meeting the COP's goals for implementing systems to display and share COP information. These challenges have affected the Coast Guard's deployment of recent COP technology acquisitions and are related to such things as the inability to share information as intended and systems not meeting intended objectives. For example, in July 2011, GAO reported that the Coast Guard had not met its goal of building a single, fully interoperable Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance program (C4ISR) system--a $2.5 billion project intended to enable the sharing of COP and other data among its new offshore vessels and aircraft. Specifically, GAO noted that (1) the Coast Guard repeatedly changed its strategy for achieving the goal of the C4ISR system and (2) not all vessels and aircraft were operating the same C4ISR system, or even at the same classification level, and hence could not directly exchange data with one another as intended. GAO found similar challenges with other Coast Guard COP-related systems not meeting intended objectives. 
For example, in February 2012, GAO reported that the intended information-sharing capabilities of the Coast Guard's WatchKeeper software--a major part of the $74 million Interagency Operations Center project designed to gather data to help port partner agencies collaborate in the conduct of operations and share information, among other things--met few port agency partner needs, in part because the agency failed to determine these needs when developing the system. Further, in April 2013, GAO reported that, among other things, the Coast Guard experienced challenges when it deployed its Enterprise Geographic Information System (EGIS), a tool for viewing COP information that did not meet user needs. The challenges Coast Guard personnel experienced with EGIS included system slowness and displays of inaccurate information. GAO has made recommendations in prior work to enhance the Coast Guard's development and implementation of its COP-related systems. DHS generally concurred with the recommendations and has reported actions under way to address them.
The 1935 Indian Arts and Crafts Act created the Indian Arts and Crafts Board within Interior to promote the economic welfare of Indian tribes and individuals through the development of Indian arts and crafts and through the expansion of the market for the products of Indian art and craftsmanship. In support of this mission, the Board: implements the Indian Arts and Crafts Act, as amended; increases the participation of Native Americans in the fine arts and crafts business; assists emerging artists entering the market; and assists Native American cultural leaders in supporting the evolution and preservation of tribal cultural activities. The Board’s policies are determined by its five commissioners—currently including four representatives from American Indian and Alaska Native communities and the Deputy Director of the Federal Trade Commission’s Bureau of Consumer Protection—who are appointed by the Secretary of the Interior and serve without compensation. With a fiscal year 2010 budget of about $1.2 million, the Board currently has three professional and two administrative staff in the Washington, D.C., office to carry out its responsibilities, including a Director with overall responsibility for implementing the Board’s policies, and four full-time and one part-time temporary staff to operate three museums. A priority of the Board is the implementation and enforcement of the act’s provisions to prevent misrepresentation. The Indian Arts and Crafts Act is essentially a truth-in-advertising law that prohibits misrepresentation in the marketing of Indian arts and crafts products within the United States and provides criminal and civil penalties for marketing products as Indian made when such products are not made by Indians. 
Under the act, it is unlawful to offer or display for sale or to sell any good in a manner that falsely suggests it is Indian produced, an Indian product, or the product of a particular Indian or Indian tribe or Indian arts and crafts organization resident in the United States. Under the act and its implementing regulations, an Indian is an individual who is a member of an Indian tribe or who is certified by an Indian tribe as a nonmember Indian artisan. Indian tribes include federally recognized tribes, Alaska Native villages, and state-recognized tribes. An Indian arts and crafts organization is any legally established arts and crafts marketing organization composed of members of Indian tribes. The terms “Indian product” and “product of a particular Indian tribe or Indian arts and crafts organization” are defined in two regulations promulgated by the Board. The regulations implementing the Indian Arts and Crafts Act of 1990 that were issued in October 1996 defined in general the nature and origin of Indian products. In June 2003, as directed by the Indian Arts and Crafts Enforcement Act of 2000, the Board promulgated additional regulations that include specific examples of items that may be marketed as Indian products and when non-Indians may make and sell Indian-style arts and crafts, thereby informing the public as to when an individual may be subject to civil or criminal penalties for falsely marketing a good as an Indian product. The act designated the Board as the primary agency to handle complaints of violations of the act. Individuals who witness market activity they believe may be in violation of the act can contact the Board to report this activity in a number of ways. The Board’s Web site provides an online form that can be completed and submitted to the Board’s staff for review, which also includes examples of how violations can occur at various venues, such as retail stores, powwows, Internet Web sites, or when an artist and consumer meet in person. 
The form permits the person submitting the complaint to provide personal contact information or file anonymously, and it requests information on the alleged violator; the date, location, and venue of the violation; the type of art or craft involved; how the item was offered for sale and what representations were made about it—such as statements regarding authenticity of the item or the tribal membership of the maker—and any documentation that may help to verify the complaint, such as advertisements or catalogs. Complaints may also be written or faxed directly to the Board’s office in Washington, D.C., or may be reported by phone directly to the office or via the Board’s toll-free complaint line, (888) ART-FAKE. The Board maintains computerized files of complaints and tracks subsequent follow-up actions by its staff or other federal or state agencies to which it refers complaints for further investigation. Lacking criminal investigators on its own staff, the Board relies on other law enforcement agencies for assistance in enforcing the Indian Arts and Crafts Act. Since the mid-1990s, the Board has referred complaints to the FBI; Interior’s Bureau of Indian Affairs, Office of Inspector General, and National Park Service; and state attorneys general for investigation. The Indian Arts and Crafts Act of 1990 designated the FBI to investigate violations of the act. To bolster its investigative resources, the Board worked with Interior to put in place in 2007 a memorandum of understanding between the Departments of Justice and the Interior to allow all appropriate Interior law enforcement officers to work such cases. The Board now has a memorandum of agreement with National Park Service Investigative Services for a full-time dedicated agent to work on Indian Arts and Crafts Act cases through a reimbursable support agreement.
The dedicated National Park Service agent coordinates with fellow National Park Service agents on these investigations and encourages collaborations with other Interior law enforcement officers. The Board also coordinates with offices of state attorneys general on investigations of alleged violations of state Indian arts and crafts laws. After passage of the Indian Arts and Crafts Amendments Act of 2010, any federal law enforcement officer can investigate alleged violations of the act. To further protect Indian artists, the Board works with the Patent and Trademark Office to promote registration of trademarks for arts and crafts marketing purposes. A trademark is a distinctive sign or indicator—such as a word, name, symbol, design, image, or any combination thereof—used by a person or organization to uniquely identify the source of its products or services and to distinguish them from those of other individuals or entities. Registering trademarks with the Patent and Trademark Office or a state bars others from registering marks likely to cause confusion with previously registered trademarks. Trademarks are part of what is collectively termed “intellectual property,” which includes copyrights, patents, and trade secrets—anything that one might create—as distinguished from more tangible things that might be owned, such as a house or car, or “real property.” A copyright is the exclusive right to reproduce, publish, sell, or distribute, for a certain period of time, original works of authorship fixed in any tangible medium of expression, such as literary, musical, or artistic works. The Board educates Indian artists about intellectual property through on-site meetings with tribal governments and members and distributes a brochure on the subject. 
The brochure also refers artists needing additional information to the World Intellectual Property Organization—a specialized agency of the United Nations dedicated to developing a balanced and accessible international intellectual property system. In June 2005, Interior’s Office of Inspector General issued a review of counterfeit Indian arts and crafts. The report concluded that the extent of the problem is difficult to quantify because of limited statistics, conflicting perceptions of what makes something counterfeit, and public misconceptions about federal and state laws. The report also concluded that enforcement of the Indian Arts and Crafts Act largely depends on the cooperation of agencies outside of Interior’s control and that criminal enforcement actions have had no measurable effect on counterfeit activity. The report suggested actions to mitigate the situation, including (1) amending the act to provide the Board greater enforcement authority and capabilities; (2) collaborating with Customs and Border Protection to revise the country of origin marking regulations to remove exceptions and require that Indian-style jewelry items be indelibly marked; (3) working with Congress or the Department of Commerce, or both, to allow the Board to facilitate trademark registration for Indians, tribes, and arts and crafts organizations; and (4) seeking civil penalties for misrepresentation before seeking criminal penalties. The actual size of the Indian arts and crafts market, and extent of misrepresentation that is occurring, are unknown, because existing estimates are outdated, limited in scope, or anecdotal and no national sources contain the data necessary to make reliable estimates. Conducting a comprehensive study to estimate the size of the market and level of misrepresentation would be complex and costly and may not provide reliable results. 
Instead of exact information, descriptions of the size of the Indian arts and crafts market and extent of misrepresentation are based largely on estimates prepared by various federal and state entities. However, we found that these estimates are outdated and unreliable. For example, the most recent and relevant national estimates were provided in a 1985 Department of Commerce study. The study estimated that the Indian arts and crafts industry had gross annual sales of $400 million to $800 million and that 10 to 20 percent of the market was misrepresented. Our analysis of the methodology used to produce the study, however, found that these estimates are not only outdated but also unreliable. Specifically, a primary contributor to the study told us that the estimates were based on “guesses” from industry experts rather than on data collected through a survey or other systematic data collection technique, and can therefore be considered only opinions. Nevertheless, these estimates have been referred to repeatedly in other reports and documents on the topic. For example, Interior’s 2005 Inspector General report cites the Department of Commerce estimate of $400 million to $800 million in annual gross sales for the industry. In addition, a fact sheet put out by the Board states that the Indian arts and crafts industry has $1 billion in gross sales annually, which, according to a Board official, is based on the Department of Commerce’s 1985 estimate adjusted for inflation. Similarly, state and local studies that have described the arts and crafts markets in specific locations are also limited in their scope and methodology, making them unusable for estimating the size of the national Indian arts and crafts market or the extent of misrepresentation. For example, in 2001 the Alaska State Council on the Arts commissioned a private research company to study Alaska’s arts industry. The study estimated that Alaska artists’ income totaled about $20 million in 2001.
Our review of the methodology and discussion with a primary contributor, however, found that the estimate is for the entire Alaska arts market and that no specific data were collected on the Alaska Native arts market. Therefore, in addition to being outdated, this study cannot be used either to estimate the Alaska Native arts market or to extrapolate an estimate of the national arts and crafts market. Similarly, the University of New Mexico’s Bureau of Business and Economic Research issued a report in 2004 estimating that the arts and cultural industry and cultural tourism in Santa Fe County generated approximately $1 billion in revenues in 2002. Again, our review of the report’s methodology and discussion with a primary contributor found that the report does not estimate revenue for Indian artists alone—rather, it represents the entire arts and cultural industry and cultural tourism in Santa Fe County—and therefore is not useful for characterizing the local or national market for Indian arts and crafts. Many Indian artists, agency officials, and others with whom we spoke who have knowledge of the national, state, and local Indian arts and crafts markets offered anecdotal estimates of the size of the Indian arts and crafts market and the extent of misrepresentation but generally could not provide reliable support for their estimates. For example, we spoke with Indian artists in Alaska, Illinois, New Mexico, Oklahoma, and Washington who told us that non-Indian artists representing themselves as Indian artists and marketing their goods as such was a widespread problem with a significant value in sales, but this information was based largely on their observations and personal experiences and not corroborated with reliable documentation or other support. Similarly, a New Mexico Assistant Attorney General told us that he thought misrepresentation was a multimillion-dollar problem in New Mexico, but he had no data to support his estimate.
An official from the Indian Arts and Crafts Association provided an anecdotal estimate of revenue for select vendors from the association’s market event but could not document sales for the entire market or reliably estimate the extent of misrepresentation. While these estimates were informed by personal experiences, the lack of documentary support for the estimates makes it impossible to independently replicate the estimates and verify and validate their reliability. No national database specifically tracks Indian arts and crafts sales or misrepresentation. Consequently, we examined various national data sources to determine if the information they contain could be used to estimate these characteristics. We found that because these data sources were designed for other purposes and not intended to track the size of the Indian arts and crafts market or extent of misrepresentation, the information they contain is not specific or comprehensive enough to be used for that purpose. For example, the Department of Commerce’s Bureau of Economic Analysis maintains national data on purchases of various categories of goods. The data include a category for jewelry purchases, but they do not separate out a specific category for Indian-style jewelry or have the level of detail that would help distinguish such jewelry from other types of jewelry. Likewise, the International Trade Commission maintains a database tracking imported goods by various categories. This database includes a category for imported jewelry with semiprecious stones valued at more than $40 per article but contains no additional detail that could be used to determine which items, if any, are Indian-style. Both of these databases also contain information on other categories of goods that may or may not include Indian arts and crafts, but it is not possible to specifically identify those items from the data collected. 
We also found that information specifically collected about Indian arts and crafts was not comprehensive or detailed enough to determine the size of the market or extent of misrepresentation. For example, the Board maintains a registry of about 350 Indian-owned and -operated businesses but told us that this list represents only a small number of sellers who choose to register with the Board and excludes other categories of sellers, such as non-Indian wholesalers and non-Indian galleries offering Indian art and craftwork. Similarly, the Indian Arts and Crafts Association maintains a directory of 500 to 600 artists, retailers, and wholesalers, but, again, it is not a comprehensive list of Indian arts and crafts sellers, and it contains only association members who choose to be listed. Moreover, neither of these organizations collects information on the sales of goods by these sellers. Regarding misrepresentation, the Board maintains a database of complaints of alleged violations of the Indian Arts and Crafts Act. Although this database contains information describing individual instances of alleged misrepresentation, it is not a comprehensive listing of all incidents of misrepresentation that have occurred; rather, it represents only instances where an individual recognized a potential violation and made the effort to report it. In the absence of reliable estimates or sufficiently detailed national data, accurately estimating the size of the Indian arts and crafts market would require a completely original study. But our analysis and the opinions of experts suggest that such a study would be complex and costly and may not produce reliable estimates. Experts we spoke with who had conducted state and local surveys suggested that such a study should include one or more surveys of individuals and businesses in the Indian arts and crafts market to estimate the size of the market. 
For example, one survey could request data from Indian artists about their income from the sales of arts and crafts, and another survey could request sales information from businesses and establishments that sell Indian and Indian-style goods. However, these experts agreed that it would take substantial resources to conduct such surveys and that the usefulness of the results may be limited because of various challenges, such as the following:

- Artists may not maintain detailed income records and may not be able to reliably estimate, or may not want to provide, their annual income from the sale of their art.

- A store selling Indian-style and other goods may not be able to accurately estimate what proportion of total sales comes from Indian-style goods.

- A comprehensive list of Indian artists and establishments that sell Indian and Indian-style arts and crafts does not exist.

- The meanings of key terms, such as “Indian-style,” are not universally agreed upon, and a survey to identify all of the goods that make up the market using such terms may be flawed. For example, with regard to the term “Indian-style,” one respondent may think they must include all jewelry with turquoise stones, while another respondent may consider only turquoise jewelry with recognizable tribal patterns or markings as being “Indian-style.”

- A study on the extent of misrepresentation in the market would be difficult because it would rely largely on self-reporting of illegal activity by violators of the Indian Arts and Crafts Act.

Federal and state agencies have relied on educational efforts more than law enforcement actions to curtail misrepresentation of Indian arts and crafts, but these efforts are hampered by fundamental challenges, such as ignorance of the law, competing law enforcement priorities, the high cost of pursuing legal actions, and limitations on the enforcement of customs regulations.
The Board maintains a computerized database of the complaints it receives of alleged violations of the act and tracks subsequent actions by its staff or by law enforcement agencies to resolve complaints. According to the database, from fiscal year 2006 through fiscal year 2010, the Board received 649 complaints of alleged violations. The Board’s investigation of these 649 complaints identified apparent violations of federal or state laws in 23 percent of the complaints: 148 violations of the Indian Arts and Crafts Act and 2 of state law. For 61 percent of the complaints—395 of the 649—the Board, upon investigation, identified no violation of the federal law or could not make a determination; for example, according to Board officials, anonymous complaints sometimes do not provide sufficient information to identify a violation. Most of the allegations during these 5 years—49 percent—involved retail store sales, followed by Internet sales, which made up 33 percent. The remaining 18 percent involved an assortment of venues such as powwows, art markets, and individual sellers (see app. I). According to its Director, the Board’s preferred approach to investigating an apparent violation of the act is to send an educational or warning letter to the alleged offender to obtain voluntary compliance. Our analysis of information from the Board’s complaint files from fiscal year 2006 through fiscal year 2010 found that 102 educational and 188 warning letters were sent to potential offenders in response to 290 of 649 complaints, or about 45 percent. Educational letters are generalized letters sent to businesses that sell Indian arts and crafts, outlining the act’s requirements for the sale of Indian arts and crafts, defining penalties, and identifying sources for additional information on the act.
Warning letters that are sent to sellers regarding specific items they are offering for sale as Indian products include information on the act’s requirements and penalties, advise the sellers to cease any representations that potentially violate the act, and suggest alternative descriptive wording that the sellers could use to avoid violating the act. The Director told us that this approach is practical, given the Board’s limited staff and resources, and also effective, often resulting in the seller’s agreeing to comply or seeking additional information on the act. As indicated in its database, after the Board completed its own investigation of the complaints, from fiscal year 2006 through fiscal year 2010, it referred 117 complaints of apparent violations to law enforcement agencies for further investigation. According to the Board’s Director, however, limited investigative resources and turnover of investigators hampered these efforts. For example, while the 1990 act directed the Board to refer complaints to the FBI for investigation, the FBI generally declined referrals because of other priorities. Consequently, in August 2007, Interior entered into a memorandum of understanding with the Department of Justice, which delegated authority to Interior to investigate alleged violations of the act. The Board entered into a reimbursable agreement with the Bureau of Indian Affairs for the services of an investigator in September 2007, but the detail lasted only until February 2008. Between June 2008 and January 2010, the Board received investigative assistance on specific complaints or on a part-time basis from three different National Park Service law enforcement personnel. The National Park Service subsequently hired one of the investigators to work full-time for the Board in January 2010, but that investigator suddenly passed away in March 2010. 
Effective May 2010, the Board had a reimbursable support agreement with the National Park Service for a full-time agent dedicated to investigating Indian Arts and Crafts Act cases in collaboration with Board staff. According to the Director, the Board has been successful in obtaining cooperation from other agencies to investigate complaints, but the lack of continuity and resources for law enforcement investigative assistance has been a significant challenge to developing complaints into criminal cases for prosecution. Consequently, following the 2010 act amendments allowing any federal law enforcement officer to investigate alleged violations, the Board is currently awaiting Interior’s approval to hire an investigator as a Board employee for greater program continuity and success. Although the Board referred 117 complaints for further investigation from fiscal year 2006 through fiscal year 2010, none of these referrals led to a case being filed under the act. According to Department of Justice data from fiscal year 2006 through fiscal year 2010, no federal prosecutions were initiated under the act. More broadly, since 1990, only five federal cases have been filed under the act (see app. II), the first in 1999 and the last in 2005. For the case filed in 2005, an FBI special agent investigated a 2004 complaint referred by the Board regarding an individual in New Mexico selling imported weavings as Navajo made. With assistance from Board staff, a National Park Service agent, and another FBI agent, the case was prosecuted in federal court, resulting in a guilty plea and sentencing of the defendant in December 2007 to 5 years probation and an order to pay the victims restitution totaling more than $30,000. To increase investigation and prosecution of complaints—whether under the act or under state laws—the Board has partnered with some of the 12 states that have their own Indian arts and crafts laws to share complaint information and provide other assistance.
According to the Board’s complaint database, 248 of the 649 complaints—about 38 percent—from fiscal year 2006 through fiscal year 2010 came from states that have their own state Indian arts and crafts laws. According to the Director, the Board has contacted many of these states’ offices of attorneys general to offer information, assistance, and coordination on any investigations or prosecutions of misrepresentation cases. In recent years, the Board has had the most success collaborating with New Mexico’s Attorney General, beginning with a 2004 investigation initiated by the Board, which was subsequently handed off for successful prosecution under the state law prohibiting fraud, resulting in a guilty verdict, sentence of probation, and an order to pay a fine and restitution. A subsequent 2007 meeting in New Mexico—including the Board’s Director, the State Attorney General and staff, the U.S. Attorney for the District of New Mexico, an FBI agent, and representatives from four Interior law enforcement offices—led to further collaboration, with the Board providing support and assistance to obtain consent decrees in 2009—agreements to not misrepresent merchandise and to pay restitution and a civil penalty—under the state Indian Arts and Crafts Sales law for misrepresenting Indian jewelry in two stores in Santa Fe, New Mexico. The New Mexico Assistant Attorney General who prosecuted these cases told us that the Board’s support, particularly in assisting sting operations at the stores, was instrumental in the investigations’ success. While New Mexico has pursued cases under its law, the offices of attorney general in seven other states that we contacted with Indian arts and crafts laws could not provide any information on cases investigated or prosecuted under those laws in recent years.
Besides the limited federal and state efforts to enforce the Indian Arts and Crafts Act and related state laws, we also identified one arts and crafts organization that has brought numerous civil lawsuits under the act. This organization—Native American Arts, Inc.—is owned and operated by an Indian tribal member and sells authentic Indian arts and crafts through a retail store and the Internet. Finding it difficult to compete with stores that were misrepresenting inauthentic goods as real Indian arts and crafts, Native American Arts, Inc., began filing lawsuits in 1998 for violations of the act and since then, according to the attorney for the organization, has filed about 80 lawsuits in total. The attorney told us that the lawsuits have been highly successful, obtaining injunctions in almost every case to prevent the defendants from violating the act and requiring them to include a disclaimer on imitation products or in their advertising, stating that their products are not made by Indians and are not Indian products under the act. Furthermore, the attorney stated that the defendants have generally complied with the injunctions and that in only two cases was follow-up action needed to obtain compliance. As mentioned earlier, the Board and its federal, state, and industry partners have emphasized educational activities for buyers and sellers to increase awareness of the act and help reduce apparent violations and complaints. For example, educational activities undertaken by the Board have included the following:

- Publishing brochures to educate sellers and buyers on the act and to help buyers identify authentic Indian arts and crafts, such as “How to Buy Genuine American Indian Arts and Crafts,” produced in collaboration with the Federal Trade Commission. The Board has also collaboratively produced brochures tailored for specific states that have state arts and crafts laws, including Alaska, Arizona, New Mexico, and South Dakota, and specifically for items made of turquoise.

- Collaborating with the Federal Trade Commission and six states to survey Web sites that market art or craftwork potentially covered under the act and sending operators educational materials on compliance with both the Indian Arts and Crafts Act and the Federal Trade Commission Act, as amended. In addition, the Board worked with a prominent online sales and auction Web site to compose a message educating online Indian art sellers about the act’s requirements.

- Sending reminder letters to business owners in the Board’s Source Directory of American Indian and Alaska Native Owned and Operated Arts and Crafts Businesses about compliance with the act. The Board also produces and distributes wall calendars and shop posters to display information about the act where Indian goods are sold.

- Operating informational booths at Indian conventions and arts and crafts shows. For example, from 2005 through 2009, the Board hosted a booth with the Federal Trade Commission and the Alaska State Attorney General’s Consumer Protection Unit at the annual Alaska Federation of Natives Convention.

- Placing educational advertisements in Indian art, state tourism, and airline in-flight magazines.

State and local programs also help to increase awareness of authentic Indian arts and crafts on a local or regional level and, to a certain extent, help “self-police” the market. Examples include the following:

- The Alaska State Council on the Arts’ Silver Hand Permit Program has a mission to promote authentic Alaska Native art made in the state exclusively by individual Alaska Native artists. Participating artists must be (1) residents of Alaska, (2) Alaska Natives who can verify Alaska Native tribal enrollment, (3) 18 years of age or older, and (4) producing art exclusively in the state. Participating artists receive tags or stickers with the Silver Hand seal of authenticity for marking arts and crafts that are authentic Alaska Native-made arts and crafts. According to the Director of the Silver Hand program, about one-third of Alaska Native artists are enrolled in the program.

- New Mexico’s Portal Program at the Palace of the Governors in Santa Fe is a self-policing Indian arts and crafts group that provides free space for the sale of handmade Indian goods in front of the Palace of the Governors. The Portal program participants we spoke with told us that the program has about 4,000 total members, with about 500 who participate actively on a regular basis. Governed by a 10-person committee elected annually from among program participants, the program requires Indian artists to adhere to traditional materials and processes, display registration cards that clearly show their individual trademark(s), and use their trademark(s) on all wares. According to committee members, these standards are strictly enforced and are among the most stringent of any Indian arts and crafts organization. The committee monitors Portal sellers, spot-checks goods for sale, and terminates membership of any artist found to be in violation of Portal rules and regulations.

Even with its partnerships and educational and other outreach efforts, the Board acknowledges that a number of challenges exist to curtailing misrepresentation of Indian arts and crafts. Specifically, ignorance of the Indian Arts and Crafts Act remains one of the most significant challenges. According to the Board’s Director, the continuous education of sellers, consumers, and law enforcement officials is key to curtailing misrepresentation and improving compliance with the act.
With sellers, noncompliance can be caused by a lack of awareness of the act, and the Board has learned from sending out educational and warning letters that sellers are often willing to comply after they are better informed about the act. In addition, some sellers may be aware of the act but unaware of the Board’s role. For example, one seller we met with knew about the act but said she was unfamiliar with the Board until we pointed out that the brochures about the act that she had on hand were produced by the Board. Consequently, while sellers may be aware of the act, they may not be aware that the Board is available to respond to complaints of violations or to help clarify the act and offer other support. With regard to consumers, the Board’s brochures include information on how consumers can identify genuine arts and crafts and avoid imitations—for example, by asking specific questions about the artist and how the good was made—but ignorance of the act can cause consumers to unwittingly support the market for imitation and potentially misrepresented Indian-style arts and crafts. For example, Indian artists in Santa Fe’s Portal program with whom we spoke told us that while members of the program must adhere to strict authenticity criteria, buyers are drawn across the street to the town square, where sellers do not adhere to those same criteria and may imply that their imitation goods are Indian products while significantly undercutting the prices of authentic goods. Portal members told us that, if consumers were better informed about cultural significance and quality, they might feel a greater obligation to buy authentic arts and crafts—even if they cost a bit more—and avoid buying imitations. 
Other Indian artists mentioned examples of what they consider to be deliberate confusion of consumers by sellers, such as galleries labeling art created by non-Indians that is clearly inspired by Northwest Indian art as “Northwestern Art”; such labeling avoids explicit misrepresentation but fails to inform a buyer that the art was not created by an Indian artist. Indian artists also mentioned that non-Indian artists will take on Indian-sounding names to create the illusion of authenticity. Better-informed consumers could ask the questions necessary to avoid such ploys. With regard to increasing awareness of the act within the law enforcement community, the Board has provided training in recent years via numerous conferences and workshops whose audiences have included U.S. Attorneys and Interior, FBI, tribal, and state law enforcement personnel, and it is planning future training for federal law enforcement officers. Nevertheless, an Interior law enforcement official told us that, although such exposure to the act may be helpful, most Interior law enforcement personnel are trained and focused on specific issues affecting the land units they are assigned to and are unlikely to pursue violations of the act, particularly if they involve investigation outside the borders of that unit. According to the Board’s Director, in addition to ignorance of the Indian Arts and Crafts Act, another significant challenge to curtailing misrepresentation is that other crimes have higher law enforcement priority. After the 1990 amendments charged the FBI with investigative duties, the Board learned through experience that enforcing the act was not high among the FBI’s competing priorities. An FBI official confirmed that the FBI’s involvement in the investigation of act violations has always been infrequent, and no change to this situation is foreseen, given the FBI’s primary focus on violent crimes. 
According to the Board’s Director, the delegation of investigatory authority from the FBI and reliance on law enforcement officers from other Interior agencies have posed additional challenges for the Board. The Board has had to make requests through other agencies within Interior for support to enforce the act, and, although a National Park Service investigator now works full-time for the Board, support from Interior law enforcement has been sporadic over time. Furthermore, according to an Interior law enforcement official, it is challenging to have only one dedicated investigator conducting multiple investigations at once, or even a single broad or complex investigation. Under National Park Service policies and procedures, the investigator can be assisted by investigators in other geographic areas for interviews or investigative work if needed. But the ideal enforcement scenario, according to the Interior law enforcement official, would be a critical mass of 8 to 10 investigators working with the Board and dedicated to investigating potential violations of the act. It is difficult, however, to devote additional resources to enforcing the act within Interior because of the many priorities already competing within each of Interior’s seven law enforcement groups. According to the Director, the Board’s planned hiring of an investigator as an employee will allow the Board to recruit and employ an individual with uniquely suited talents and retain that individual to gain experience and skills specifically related to enforcing the Indian Arts and Crafts Act. Another challenge to prosecuting violations of the act that have been investigated is the capacity of U.S. Attorneys’ Offices to adjudicate the alleged violations of the act. According to an Interior law enforcement official, after the investigator gathers evidence, the case must be presented to the appropriate U.S. Attorneys to determine if prosecution or further investigation should be pursued. 
The official told us that the U.S. Attorneys’ Offices are overwhelmed with cases, and those involving violations of the act tend to receive low priority for federal prosecution. A Bureau of Indian Affairs agent also told us that because so few Indian Arts and Crafts Act cases have gone through the courts, little case history exists for the U.S. Attorneys’ Offices to look at for guidance on how to put together a winning case. In addition, U.S. Attorneys generally require that the case be “large scale,” meaning involving either a large dollar amount or a network of shops implicated in misrepresentation; putting together such a large-scale case is both resource and time intensive. The owner and attorney for Native American Arts, Inc., told us that in their opinion civil action under the act is more effective than criminal prosecution to curtail misrepresentation. The act provides uniformity under the law, and the statutory and triple damages provisions are effective deterrents. They have observed that, in part because of their successful lawsuits, companies they have not yet sued have preemptively placed disclaimers on their products to prevent a lawsuit. Nevertheless, neither of them was aware of any other Indian arts organizations, tribes, or individuals bringing such suits. The challenges to bringing suits are that they are costly and time-consuming—investigating cases and developing the evidence to meet legal requirements for civil cases, in their experience, make for an expensive and lengthy process. The cases can also take a long time to resolve if they are defended vigorously, and because this area of law is little developed, appeals may be required to get a positive outcome. In their opinion, most Indian artists do not have the resources or attorney access needed to be successful with this approach. 
As reported by Interior’s Office of Inspector General in 2005 and confirmed in our discussions with the Board’s Director, other federal and state officials, and Indian artists, it is generally agreed that a significant challenge to curtailing misrepresentation is the limited enforcement of Customs and Border Protection regulations for imported Native American-style goods. The regulations require that Native American-style jewelry be indelibly marked with the country of origin by cutting, die-sinking, engraving, stamping, or some other permanent method on the clasp, in some other conspicuous location, or on a metal or plastic tag permanently attached to the jewelry, unless an exception applies. The Inspector General report noted, however, that the exceptions may allow importers to use adhesive labels, string tags, or to simply mark a jewelry container instead of the jewelry itself, thus allowing unmarked goods to be misrepresented at the point of sale. According to Customs and Border Protection officials, if an exception had been requested for Native American-style imports, a ruling would appear for that request in the Customs Ruling Online Search System. Customs and Border Protection officials identified two rulings—one about Native American-style jewelry and another about Native American-style arts and crafts—written in response to a request from importers regarding the country of origin marking of their products. The regulation could be amended to remove any exceptions, but removal would not likely increase enforcement, according to Customs and Border Protection officials. Customs and Border Protection also does not visit stores to determine if country of origin stickers or tags are being removed from imported goods, but it does have a Web form for “e-allegations,” which could be used by concerned artists or consumers to report such violations for follow-up by an enforcement team. U.S. 
federal and state laws protecting intellectual property do not explicitly include Indian traditional knowledge and cultural expression and therefore do not provide adequate protection from misappropriation or distortion. Some international frameworks or guiding principles exist for protecting traditional knowledge and cultural expressions, but these rely on individual countries taking steps to implement them. To date, the United States has not taken any such steps. Other countries have taken actions to explicitly protect the intangible intellectual property of their indigenous groups, and these efforts provide options for the United States to consider. Traditional knowledge and cultural expressions may be vulnerable to misappropriation and distortion because existing U.S. federal and state laws do not explicitly protect Indian traditional knowledge and cultural expressions. For example, Indian traditional knowledge and cultural expressions that have been handed down for generations are not generally eligible for copyright protection because they are not original and usually not fixed in any tangible medium of expression. U.S. copyright law protects original works of authorship fixed in any tangible medium of expression. When such a work is copyrighted, the creator receives the exclusive right to reproduce, publish, sell, or distribute the work for a certain period of time. Indian traditional knowledge and many cultural expressions, such as songs, dance, and origin stories, are passed orally from generation to generation and are not fixed in any tangible medium. Moreover, much of Indian traditional knowledge and cultural expression is not original because it is a product of shared cultural understanding spanning thousands of years. For example, the traditional dances and songs that a tribe has performed for generations cannot be copyrighted because they are not original. 
Therefore, the tribe cannot sue for copyright infringement when others representing themselves as tribal members perform the traditional dances and songs. Similarly, many tribes have an origin story that has been part of their cultural heritage for thousands of years but has only been transmitted orally. If the tribe has not published the story, it is not copyrighted, and the tribe cannot sue for copyright infringement when someone else publishes the story. U.S. trademark law can provide some protection for Indian traditional knowledge and cultural expression, but its applicability is limited. A trademark is a distinctive sign or indicator—such as a word, name, symbol, design, image, or any combination thereof—used by a person or organization to uniquely identify the source of its products or services and to distinguish them from those of other individuals or entities. Because trademarks are used to protect manufacturers, merchants, and consumers, traditional knowledge and cultural expressions not used in commercial transactions are still vulnerable to misappropriation or misrepresentation. For example, the sun symbol—a crimson circle with lines extending outward in each cardinal direction—is a religious symbol for the Zia Pueblo, but to pursue a trademark infringement case against those who use it without authorization, the Pueblo would have to use the symbol in commercial transactions. The establishment of the Patent and Trademark Office tribal insignia database in 2001 provides tribes with an opportunity to prevent merchants or manufacturers from registering marks that would falsely suggest a connection with the tribe. But inclusion of an insignia in the database does not provide the tribe with the benefit of trademark registration. 
Instead, tribes submit their flag, coat of arms, or other emblem or device adopted by tribal resolution for inclusion in the database so that the Patent and Trademark Office can use it when examining applications for trademark registration. If a mark that an applicant wishes to register as a trademark resembles the insignia of an Indian tribe, the Patent and Trademark Office might conclude that the mark would suggest a false connection with the tribe and reject the application. For example, the Port Gamble Indian Community in the state of Washington submitted its tribal insignia—an orca whale depicted in the traditional colors, shapes, and designs of Northwest Coast Indian art—for inclusion in the tribal insignia database. If a company submitted a trademark application for its logo that depicted such an orca whale, the Patent and Trademark Office might conclude that the logo falsely suggested a connection to the tribe and deny the company’s trademark application. According to Patent and Trademark Office officials, various federal and state laws—including invasion-of-privacy and trade secrets laws—protect the moral rights of artists and performers that are recognized in two international treaties. Specifically, the Berne Convention for the Protection of Literary and Artistic Works, which includes productions in literary and artistic domains, whatever the mode or form of its expression, and the World Intellectual Property Organization Performances and Phonograms Treaty, which applies to performers of literary or artistic works or expressions of folklore and producers of sound recordings of those performances, grant moral rights to artists, performers, and producers. 
As articulated in these treaties, moral rights are the right of attribution (the right to claim authorship of the work or performance) and the right of integrity (the right to object to any distortion, mutilation or other modification of, or other derogatory action in relation to the work or performance) that would be prejudicial to the artist or performer’s honor or reputation. Most states allow lawsuits to be brought for invasion of the right of privacy when publicity unreasonably places an individual in a false light before the public. For example, in the mid-1980s the Pueblo of Santo Domingo sued a newspaper for invasion of privacy because the newspaper published photographs of a ceremonial dance that were taken despite a tribal ban on photography. However, some legal experts have expressed skepticism about using lawsuits for invasion of privacy in response to misrepresentation or misappropriation of traditional knowledge and cultural expressions. The skepticism arises in part because the tribe would have to show how a nontribal member performing traditional tribal dances or using copies of traditional masks and performing traditional ceremonies is unreasonable and highly objectionable publicity that attributes to the tribe false characteristics, conduct, or beliefs and thereby places the tribe in a false position before the public. Finally, according to Patent and Trademark Office officials, state trade secrets laws would apply equally to Indian traditional knowledge and cultural expressions if they were kept secret and had some economic value. State trade secrets laws provide a means for redress when information that (1) derives independent economic value, actual or potential, from not being generally known to, and not being readily ascertainable by proper means by, other persons who can obtain economic value from its disclosure or use and (2) is the subject of efforts that are reasonable under the circumstances to maintain its secrecy, is misappropriated. 
However, Patent and Trademark Office officials also noted that they were not aware of any cases alleging that misappropriation of traditional knowledge and cultural expressions violated state trade secrets laws. Some legal experts are also skeptical about the use of trade secrets law to prevent misappropriation of traditional knowledge and cultural expressions. Existing international frameworks offer protections for traditional knowledge, but the United States has not implemented them to date. A Patent and Trademark Office official told us that rather than using U.S. intellectual property laws to protect traditional knowledge and cultural expressions, other actions should be taken to safeguard them. For example, the United Nations Educational, Scientific and Cultural Organization’s Convention for the Safeguarding of Intangible Cultural Heritage requires parties to ensure that intangible cultural heritage is safeguarded, including its protection and promotion, through identification, inventory, and other measures. However, the United States is not a party to this convention, although the collections of the American Folklife Center of the Library of Congress—which maintains an archive of creative works and records representing or illustrating some aspect of American folklife—include Native American songs and dances. Implementing this international convention could help safeguard traditional knowledge and cultural expressions, according to an expert on traditional knowledge and intellectual property law, but safeguarding would not provide the legal protection that can only be afforded by intellectual property law. Similarly, the U.N. Declaration on the Rights of Indigenous Peoples, adopted by the United Nations General Assembly on September 13, 2007, includes provisions on protecting traditional knowledge and cultural expressions. 
The declaration proclaims several standards of achievement for countries to pursue, including that indigenous people have the right to maintain, control, protect, and develop their intellectual property over their cultural heritage, traditional knowledge, and traditional cultural expression, and that countries should take effective measures to recognize and protect the exercise of these rights. The United States originally voted against the resolution but later expressed its support for the declaration on December 16, 2010. At this time, it is not clear what policy actions, if any, the federal government will undertake to implement the Declaration on the Rights of Indigenous Peoples’ standards of achievement in the United States. Currently, negotiations are under way at the World Intellectual Property Organization on an instrument that, once implemented by countries, would help protect the intangible intellectual property of indigenous peoples. In response to the perceived and growing concern by indigenous people worldwide that misappropriation and unfair misuse of traditional knowledge and cultural heritage are increasing, the World Intellectual Property Organization in 2000 established the Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore. The Intergovernmental Committee’s mandate calls on member states to reach agreement on one or more international legal instrument(s), which will ensure the protection of (1) traditional knowledge, (2) traditional cultural expressions and expressions of folklore, and (3) genetic resources. Expert working group discussions are being conducted for each of the three topic areas. Of these three areas, the most work has been done on an instrument to protect traditional cultural expressions and folklore, but, according to Patent and Trademark Office officials, member states are still far from reaching agreement on a final text. 
In addition, member states also have not reached agreement on whether the instrument will be a declaration, a model law for member states, or a binding international treaty. After agreement is reached on the text and type of instrument, each member state will have to take actions to implement the instrument. According to Patent and Trademark Office officials, it is not clear that the United States will be able to agree to any instrument because protection of folklore raises significant concerns for the public domain and for stakeholders such as libraries and the motion picture industry. Options for protecting traditional knowledge and cultural expressions are also found in the experiences of other countries that have established or attempted to establish laws and programs to address the issue. For example, in Australia, state and federal Cultural Affairs Ministers asked a nongovernmental organization to work on developing resources to address the needs of the indigenous arts community. In response, the Arts Law Centre of Australia developed an indigenous intellectual property “toolkit” to promote closer links between business and indigenous communities; raise awareness among indigenous communities, consumers, and commercial operators; and enhance coordination of existing networks of indigenous and nonindigenous organizations in relation to intellectual property matters. As part of the toolkit, the center launched a Web site with information on various intellectual property issues, including contracts, copyright, licensing, and moral rights for artists; recorded short messages in indigenous languages about various intellectual property issues, to air on the radio; and provided information for consumers and commercial operators. 
In addition, according to an Arts Law Centre official, until 2010, the center also provided direct legal services to indigenous artists who needed assistance with intellectual property, among other issues, but it stopped providing such services because it lacked funding for these time-consuming efforts. Even with the 2010 launch of the intellectual property toolkit, the Arts Law Centre official acknowledged that intellectual property and moral rights laws may not sufficiently protect indigenous traditional knowledge and cultural expressions. For example, when an art gallery erected a sculpture of Wandijina, the creation spirit sacred to three aboriginal groups in Australia, the groups were unable to use intellectual property laws to prevent the display of the sculpture, because it was inspired by the idea of the creation spirit, rather than copied from a tangible image. According to an official from the Arts Law Centre, the Australian government may, in the future, consider either amending the intellectual property laws or creating unique laws to address such issues. In contrast to Australia’s education and outreach approach, Panama, Nigeria, and New Zealand have provided specific protections for traditional knowledge and cultural expressions in their national laws. In 2000, Panama passed a law specifically to protect the collective rights of indigenous communities’ traditional knowledge and cultural expressions. The law protects the collective intellectual property rights and traditional knowledge of indigenous peoples in their creations by allowing traditional indigenous authorities or the congressional bodies that rule indigenous autonomous territories to register their collective rights with a government office and prohibiting unauthorized third parties from holding exclusive rights in indigenous traditional knowledge and cultural expressions. 
But the law does not address scenarios where a member of the indigenous group, as opposed to a third party, violates a registered collective right. According to a legal expert, as of 2005, only one of the seven indigenous groups in Panama had registered collective rights; the extent to which others will do so is unknown. In Nigeria, when expressions of folklore are made either for commercial purposes or outside their traditional or customary context, the country’s copyright act protects them against (1) reproduction; (2) communication to the public by performance, broadcasting, distribution by cable or other means; and (3) adaptations, translations, and other transformations. The right to authorize the reproduction, communication, and adaptation of these expressions of folklore is vested in the Nigerian Copyright Commission. A traditional knowledge and intellectual property law expert we spoke with, however, does not believe that the law has ever been used. Furthermore, the Nigerian Copyright Commission states on its Web site that it is seeking financial sponsorship for a project to document Nigerian indigenous folklore. According to the commission’s Web site, this project is a prelude to effective administration and enforcement of the provision listed in the copyright act. New Zealand has also taken steps to protect the intangible intellectual property of its indigenous groups. Specifically, the Trade Marks Act of 2002 prohibits the Commissioner of Trade Marks from registering a trademark when its use or registration would, “in the opinion of the Commissioner, be likely to offend a significant section of the community,” including New Zealand’s indigenous population. 
The law also requires the Commissioner to establish a committee comprising those knowledgeable about indigenous matters to advise the Commissioner on whether the proposed use or registration of a trademark that is, or appears to be, derivative of an indigenous sign, text, or image is, or is likely to be, offensive to indigenous groups. In addition, the country’s Ministry of Economic Development is examining the relationship between intellectual property rights and traditional knowledge. After examining the relationship, the ministry will develop options to address any problems identified, hold consultations on those options, and then make policy recommendations to the government. We provided a copy of our draft report to the Departments of Commerce, Homeland Security, the Interior, and Justice for review and comment. In their written responses, the Department of Commerce’s U.S. Patent and Trademark Office and the Department of Homeland Security generally agreed with the contents of the report and also provided technical comments, which we incorporated into the report as appropriate. Commerce’s and Homeland Security’s comments are presented in appendices III and IV, respectively. The Departments of the Interior and Justice provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Homeland Security, and the Interior; the Attorney General of the United States; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
As shown in table 1, the majority of complaints received by the Indian Arts and Crafts Board involved retail establishments and online sales. As shown in table 2, complaints were reported for a variety of arts and craft types, with the majority of complaints involving flutes, a mixture of arts and crafts, and jewelry. The number of flute complaints may not represent the relative scale of flute misrepresentation: two individuals submitted most of these complaints. As shown in table 3, the majority of alleged violations of the Indian Arts and Crafts Act occurred in states located in the western and southwestern United States. As shown in table 4, of the 12 states with laws regarding Indian arts and crafts, Nebraska is the only one where no complaints were reported to the Board during fiscal years 2006 to 2010.

The defendant pleaded guilty to violating the Bald and Golden Eagle Protection Act, and the other charges were dismissed without prejudice. The defendant was sentenced to 1 year of probation.

United States v. Jerry Lee Boose, No. 1:01-CR-20017 (E.D. Mich. Mar. 19, 2002): The defendant pleaded guilty to the mail fraud charge and was sentenced to 13 months of jail time. The rest of the charges were dismissed with prejudice.

United States v. Nader Pourhassan, No. 2:00-CR-00229 (D. Utah Dec. 31, 2001): The charges were dismissed with prejudice.

United States v. Richard Tescher, No. 3:01CR0168 (D. Alaska Jan. 1, 2005): The charges were dismissed without prejudice.

United States v. Rose Morris, No. 1:05-CR-01378 (D. N.M. Dec. 5, 2007) (one count of wire fraud): The defendant pleaded guilty and was sentenced to 5 years of probation.

“Without prejudice” means the federal government can charge the defendant again for the crimes. “With prejudice” means the federal government cannot charge the defendant again.

In addition to the contact named above, Jeffery D. Malcolm, Assistant Director; Pedro A. Almoguera; Paola Bobadilla; Mark A. Braza; Ellen W. Chu; Brad C. Dobbins; Jeanette M. Soares; and Michelle Loutoo Wilson made key contributions to this report.
In 1935 the Indian Arts and Crafts Act was enacted, establishing the Indian Arts and Crafts Board as an entity within the Department of the Interior. A priority of the Board is to implement and enforce the act's provisions to prevent misrepresentation of unauthentic goods as genuine Indian arts and crafts. As the market for Indian arts and crafts grew and the problem of misrepresentation persisted, the act was amended to, among other things, enhance the penalty provisions and strengthen enforcement. GAO was asked to examine (1) what information exists regarding the size of the market and the extent to which items are misrepresented and (2) actions that have been taken to curtail the misrepresentation of Indian arts and crafts and what challenges, if any, exist. In addition, this report provides information on some options available to protect Indian traditional knowledge and cultural expressions. GAO analyzed documents and interviewed international, federal, state, and local officials about the arts and crafts market and enforcement of the act. GAO is making no recommendations in this report. In commenting on a draft of this report, the Departments of Commerce and Homeland Security generally agreed with the contents of the report. The Departments of Commerce, Homeland Security, the Interior, and Justice also provided technical comments which were incorporated into the report as appropriate.

The size of the Indian arts and crafts market and extent of misrepresentation are unknown because existing estimates are outdated, limited in scope, or anecdotal. Also, there are no national data sources containing the information necessary to make reliable estimates. For example, the most often cited national estimates about the size of the market and the extent of misrepresentation come from a 1985 Department of Commerce study. 
GAO found that not only is this study outdated, but the estimates included in the study are unreliable because they were based on anecdotal information and not systematically collected data. No national database specifically tracks Indian arts and crafts sales or misrepresentation, and GAO found that no other national databases contain information specific or comprehensive enough to be used for developing reliable estimates. Moreover, GAO determined that to conduct a study that could accurately estimate the size of the Indian arts and crafts market and the extent of misrepresentation would be a complex and costly undertaking and may not produce reliable estimates. Federal and state agencies have relied largely on educational efforts rather than law enforcement actions to curtail misrepresentation of Indian arts and crafts, but these efforts are hampered by a number of challenges, including ignorance of the law and competing law enforcement priorities. From fiscal year 2006 to fiscal year 2010, the Indian Arts and Crafts Board received 649 complaints of alleged violations of the Indian Arts and Crafts Act. The Board determined that 150 of these complaints, or 23 percent, involved an apparent violation of the law, and it referred 117 of the complaints for further investigation by law enforcement officers, but no cases were filed in federal court as a result. According to the Board and law enforcement officials, support from law enforcement personnel and others to prosecute these cases has been sporadic because of higher law enforcement priorities. Therefore, the Board has relied primarily on educational efforts to curtail misrepresentation. For example, in response to complaints, the Board sent educational and warning letters to about 45 percent of alleged violators, and it produced educational brochures and participated in other educational efforts for artists, sellers, consumers, and law enforcement officers. 
GAO identified one arts organization that has successfully used civil actions to curtail misrepresentation, but this approach can be costly and time-consuming. U.S. federal and state laws protecting intellectual property do not explicitly include Indian traditional knowledge and cultural expressions--such as ceremonial dances or processes for weaving baskets--and therefore provide little legal protection for them. Some international frameworks offer protection for traditional knowledge and cultural expressions, but the federal government has not yet undertaken steps to implement these frameworks in the United States. Other countries, like Panama and New Zealand, have taken actions--which offer options for consideration--to protect the intellectual property of indigenous groups.
According to CMS documentation, the transition to value-based payment generally involves two major shifts from traditional fee-for-service payment.

1. Accountability for both quality and efficiency. Value-based payment models link payments to providers to the results of health care quality and efficiency measures. CMS uses a variety of measures to assess health care quality and efficiency and to hold physicians and other providers accountable for the health care they deliver. Quality measures include process and outcome measures. Process measures assess the extent to which providers effectively implement clinical practices (or treatments) that have been shown to result in high-quality or efficient care. Examples of process measures are those that measure care coordination, such as the percentage of patients with major depressive disorder whose medical records show that their physician is communicating with the patients’ other physicians who are treating comorbid conditions. Outcome measures track results of health care, such as mortality, infections, and patients’ experiences of that care. Efficiency measures may vary across models. For example, models may require that a minimum savings rate be achieved, which is established using a benchmark based on fee-for-service claims as well as other information such as patient characteristics, or that cost targets be achieved for various episodes of care.

2. Focus on population health management. Value-based payment models encourage physicians to focus on the overall health and well-being of their patients. Population health management includes provider activities such as coordination of patient care with other providers; identification and provision of care management strategies for patients at greatest risk, such as those with chronic conditions; promotion of health and wellness; tracking patient experience; and using health information technology (IT) to support population health.
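The efficiency test described in item 1 above reduces to simple arithmetic. The sketch below illustrates a minimum-savings-rate check against a spending benchmark; the function name, dollar figures, and threshold are hypothetical placeholders and do not reflect CMS's actual benchmark methodology.

```python
def meets_minimum_savings_rate(benchmark: float, actual_spending: float,
                               minimum_rate: float) -> bool:
    """Illustrative check: did spending come in far enough under the benchmark?

    savings rate = (benchmark - actual spending) / benchmark
    """
    savings_rate = (benchmark - actual_spending) / benchmark
    return savings_rate >= minimum_rate

# Hypothetical figures: a $10M benchmark, $9.5M in actual spending, and a
# 3 percent minimum savings rate. The savings rate here is 5 percent.
print(meets_minimum_savings_rate(10_000_000, 9_500_000, 0.03))  # True
```

A cost-target check for an episode of care would follow the same pattern, comparing actual episode spending against the target amount.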
In value-based payment models, physicians and other providers are paid for, and responsible for, the care of a beneficiary over an extended period and are held accountable for the quality and efficiency of the care provided. In contrast, Medicare fee-for-service payments to providers are tied only to volume, rewarding providers, for example, on the basis of the number of tests run, patients seen, or procedures performed, regardless of whether these services helped (or harmed) the patient. This shift in care delivery can require substantial investments by providers. For example, providers may need to invest in health IT to manage patients and record data necessary for quality and efficiency measurement and reporting. Providers may also need to hire additional staff to assist with population health management activities, such as care coordination. The CMS Innovation Center has developed and is testing a number of value-based payment models. The following are examples of Medicare value-based payment models in which physician practices can participate. These models are often referred to as alternative payment models. ACOs. As noted earlier, ACOs are groups of physicians—including independent physician practices—hospitals, and other health care providers who voluntarily work together to give coordinated care to the Medicare patients they serve. When an ACO succeeds in delivering high-quality care and spending health care dollars more efficiently, part of the savings generated goes to the ACO and part is kept by Medicare. ACOs participate in models with upside risk only or models with both upside and downside risk. Bundled payment models. Bundled payment models provide a “bundled” payment intended to cover the multiple services beneficiaries receive during an episode of care for certain health conditions, such as cardiac arrhythmia, hip fracture, and stroke.
If providers are able to treat patients with these conditions for less than the target bundled payment amount and can meet performance accountability standards, they can share in the resulting savings with Medicare. CMS’s initiative, Bundled Payments for Care Improvement (BPCI), tests four broadly defined models of care, under which organizations enter into payment arrangements that include financial and performance accountability for episodes of care. Comprehensive primary care models. Comprehensive primary care models are designed to strengthen primary care. CMS has collaborated with commercial and state health insurance plans to form the Comprehensive Primary Care (CPC) initiative. The CPC initiative provides participating primary care physician practices two forms of financial support: (1) a monthly non-visit-based care management payment and (2) the opportunity to share in any net savings to the Medicare program. In January 2017, CMS will build upon the CPC initiative, which ends December 31, 2016, by beginning CPC Plus, a comprehensive primary care model that includes downside risk. In November 2016, CMS published a final rule with comment period to implement a Quality Payment Program under MACRA, which established a new payment framework to encourage efficiency in the provision of health care and to reward health care providers for higher-quality care instead of a higher volume of care. The Quality Payment Program is based on eligible Medicare providers’ participation in one of two payment methods: (1) MIPS or (2) an advanced alternative payment model. Under MIPS, providers will be assigned a final score based on four performance categories: quality, cost, clinical practice improvement activities, and advancing care information through the meaningful use of EHR technology. This final score may be used to adjust providers’ Medicare payments positively or negatively. 
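The MIPS mechanics just described, a weighted final score across performance categories that drives a positive or negative payment adjustment, can be sketched as follows. The category weights, scores, and threshold below are hypothetical placeholders chosen for illustration, not CMS's actual scoring rules.

```python
def mips_final_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Illustrative weighted final score across MIPS performance categories."""
    return sum(scores[category] * weight for category, weight in weights.items())

# Hypothetical weights for a first performance year in which cost is not measured.
weights = {"quality": 0.60, "improvement_activities": 0.15,
           "advancing_care_information": 0.25, "cost": 0.0}
scores = {"quality": 80, "improvement_activities": 90,
          "advancing_care_information": 70, "cost": 0}

final = mips_final_score(scores, weights)
print(final)  # 79.0
# A score above a neutral threshold (say, 60) would imply a positive payment
# adjustment; a score below it, a negative adjustment.
print("positive" if final > 60 else "negative")  # positive
```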
CMS will begin assessing providers’ performance in three of the four performance categories in 2017. Cost will not be measured in the first year. The first year that payments will be adjusted is 2019 (based on the 2017 performance year). Under the final rule, an alternative payment model will qualify as an advanced alternative payment model if it has downside risk, among other requirements. Providers with sufficient participation in advanced alternative payment models are excluded from MIPS and qualify to receive incentive payments beginning in 2019 (based on performance in 2017). Providers who participate in alternative payment models that do not include downside risk, such as some ACO models, will be included in MIPS. The final rule refers to these models as MIPS “alternative payment models.” To coincide with the final rule, CMS also issued a fact sheet with information on the supports available to providers participating in the Quality Payment Program. In the final rule, CMS stated that protection of small, independent practices was an important thematic objective and that in performance year 2017 many small practices will be excluded from the new MIPS requirements due to the low-volume threshold. CMS also stated that while it is not implementing “virtual groups” for 2017—which would allow small practices to be assessed as a group across the four MIPS performance categories—the agency looks forward to stakeholder engagement on how to structure and implement virtual groups in future years of the program. Further, CMS is reducing the number of clinical practice improvement activities that small and rural practices will have to conduct to receive full credit in this performance category in performance year 2017. CMS also announced in April 2016 that it intends to solicit and award multiple contracts to qualified contractors for MACRA quality improvement direct technical assistance. 
Direct technical assistance through this program will target providers in small group practices of 15 or fewer, and especially those in historically under-resourced areas, such as rural areas. CMS indicated that the purpose of the contracts is to provide a flexible and agile approach to customized direct technical assistance and support services to ensure success for providers who either participate in MIPS or want to transition to an alternative payment model, thereby easing the transition to a payment system based on performance and patient outcomes. In addition, CMS has been testing models aimed at helping small and rural providers participate in value-based payment models. For example, in 2016, CMS began the ACO Investment Model, which provides advance up-front and monthly payments to providers so they can make important investments in their care coordination infrastructure. According to information on CMS’s website, the ACO Investment Model was developed in response to stakeholder concerns and available research suggesting that some providers lack access to the capital to invest in infrastructure that is necessary to successfully implement population care management. According to literature we reviewed and the 38 stakeholders we interviewed, small and rural physician practices face many challenges associated with deciding whether to participate, when to begin participating, or whether to continue participating in value-based payment models. We identified 14 challenges that can be classified into five key topic areas: (1) financial resources and risk management, (2) health IT and data, (3) population health management care delivery, (4) quality and efficiency performance measurement and reporting, and (5) effects of model participation and managing compliance with requirements. (See table 1.) These 14 challenges are discussed in detail in the sections that follow.
Small and rural practices need financial resources to make initial investments, such as those to make EHR systems interoperable, and need financial reserves or reinsurance to participate in models that have downside risk. Recouping investments may take years because the models must have a year of performance data, which then must be analyzed to determine any shared savings payment. Limited ability to take on financial risk because of having fewer financial resources/reserves compared with larger providers. Some stakeholders told us that small and rural practices have few financial resources and financial reserves. This limits their ability to take on the downside risk associated with some value-based payment models. In some value-based payment models, providers are financially responsible if their actual spending for treating Medicare beneficiaries exceeds the payment amount they receive from Medicare. In other models, a provider’s spending is compared to its historical spending, and if spending is higher than the historical benchmark, the provider has to repay a portion of the excess spending to Medicare. As a result, in order to participate, practices need either to have financial reserves to cover instances such as patients with unexpectedly costly medical events or to purchase reinsurance to cover such expenditures, according to some stakeholders we interviewed. Some stakeholders suggested that for reinsurance to help small and rural practices, it must be affordable, and the types of reinsurance currently available are costly. High costs of initial and ongoing investments needed for participation. Some stakeholders reported that significant investments are needed for participation in value-based payment models. Initial investments can cost practices thousands if not millions of dollars, and it can be difficult for small practices to pay for this out of their own pockets, according to some stakeholders.
For example, one stakeholder told us that most small practices are on a month-to-month budget and have small profit margins. Some stakeholders told us that making EHR systems interoperable between providers can be expensive and often costs the same regardless of practice size. A stakeholder from a physician practice told us that it cost about $20,000 for the group to connect two EHR systems, which would be the same cost for a small or large practice. Small practices have fewer physicians to spread these costs among. Additionally, some stakeholders reported that capital is needed to hire additional staff to help with the care coordination activities that are part of model participation. Difficulties with recovering investments in a timely manner. Small and rural practices often struggle with the amount of time it takes for them to recoup the investments they have made to participate in a model, according to some stakeholders we interviewed and literature we reviewed. After making initial investments, practices must wait for the completion and analysis of a performance year before they can receive a shared savings payment. Some stakeholders told us that it can take 2 or more years for this to occur. Furthermore, some stakeholders expressed concern about model sustainability and commented on the unpredictability of the models, which could affect physicians’ confidence in their ability to recoup investments made if a model becomes obsolete or changes significantly. For example, at the beginning of calendar year 2017, CMS is making a significant change by replacing a 4-year-old model, the CPC initiative, with CPC Plus—a model in which practices must take on downside risk to participate. This change may prevent some small and rural practices from participating in the successor model, and consequently affect their ability to recoup the investments they made to participate in the CPC initiative.
Small and rural practices need to have access to data that is important for care management and cost control. Also, these practices need to hire and train staff, as well as develop experience using EHR systems and analyzing data needed for participation. Difficulties with data system interoperability and limited ability to access data outside the practices’ own systems. Some stakeholders reported that having access to other providers’ data through interoperable EHR systems is beneficial as it can provide information to help coordinate and determine the appropriate care for a patient; however, they also reported difficulties in constructing interoperable systems. One small physician practice stakeholder told us that the practice has had difficulties accessing the results of tests conducted in an outside lab because the lab scans rather than types the test results into its system. The stakeholder said that the practice is working with its EHR vendor to address the problem but that he suspected the vendor may be less concerned about the practice’s challenges because the practice is small. He stated that such challenges are common for many rural health care facilities. Separate from interoperability, some stakeholders also reported that providers and payers may not be willing to share information, such as claims and price data, that would aid analysis and help a practice manage patient care—such as tracking when patients visit specialists or fill prescriptions—as well as control costs. It may be especially challenging for small and rural physician practices to gain access to such data, as they may not have the relationships with payers that larger practices have and that data sharing requires.
According to a publication from our literature review, physician practices reported that price data for services and supplies could be difficult to obtain, perhaps in part due to payer confidentiality and agreements with pharmaceutical and device companies regarding rebates or discounts. Difficulties with educating and training staff about EHR systems and the data entry, management, and analysis needed for participation. Some stakeholders reported that significant resources are needed for staff education and training to properly enter data required for model participation. These data are often needed for quality measurement associated with a specific value-based payment model, and physician practices need to ensure that staff have accurately and appropriately captured these data for patients to meet the model’s requirements. Additionally, some stakeholders stated that managing and analyzing data can be difficult and time-consuming, as small and rural practices often struggle with how to use their EHR systems to obtain data for analysis and timely decision making. For example, one stakeholder told us that practices often do not know how to use their EHR system to make a list of all patients with a certain disease, which could help the practice develop population health management strategies for that particular disease, among other activities. Further, another stakeholder told us that uniquely qualified staff are often needed to complete this work. Practices’ ability to manage care of their entire patient population is affected by patients’ geographic location and preferences, and this is especially true for rural physician practices whose patients may have to travel distances to receive regular wellness visits and seek specialists when recommended.
In addition, the transition to value-based care, which focuses on population health management, will require adjustment by some physician practices, such as rural practices, that are generally more experienced with a fee-for-service system, especially as the two systems may have incentives that are difficult to reconcile. Patient preferences and geographic location affect practices’ ability to implement population health management care delivery and account for total cost of care. Literature we reviewed and some stakeholders indicated that physician practices’ ability to succeed in value-based payment models can be hindered by the preferences and locations of patient populations. For example, one stakeholder stated that physicians may have difficulty getting patients to complete wellness visits or other activities necessary for them to stay healthy. This is especially relevant for rural physician practices, as some patients in rural areas may have to travel long distances for wellness care or care from specialists, which can influence how often they actually seek such care. If patients do not receive recommended care, this can affect the rural physician’s ability to effectively manage patients’ conditions. Patient behavior and location can also make it difficult for providers to control the total cost of patient care or know about all the costs. For example, one stakeholder said that under a bundled payment model, practices are responsible for costs during an entire episode of care, but practices cannot influence where the patient receives post-acute care, which could affect the total cost of patient care. Additionally, a stakeholder from an ACO told us that it can be difficult to engage patients using technology. The ACO has tried to manage patients’ post-acute care by communicating with patients through a technology system.
However, the effectiveness of the system has been limited because some patients do not want to use it, preferring to speak with their physician directly. Provider resistance to making adjustments needed for population health management care delivery. Small and rural physician practices are having difficulty adjusting to a value-based care system, which focuses on population health management, as opposed to being paid based on volume, according to some stakeholders. For example, because providers are paid for each service under Medicare fee-for-service, providers have an incentive to provide a high volume of services without consideration of the costs or value of such services. Rural practices have a larger percentage of their Medicare patients enrolled in fee-for-service compared to non-rural practices, which have a larger percentage of their Medicare patients enrolled in Medicare Advantage, the private plan alternative to Medicare fee-for-service. Therefore, rural practices may be more influenced than others by the incentives under Medicare fee-for-service. In contrast, under value-based payment models, population health is a major component that requires care coordination and consideration of whether certain services are necessary, which might involve additional attention and time from physicians. According to a publication from our literature review, some practices experience conflicting incentives—to increase volume under their fee-for-service contracts while reducing costs under their risk-based contracts—and not knowing which patients will be included in the value-based payment model can also make managing care difficult. Additionally, some providers in small and rural practices may be concerned about relying on the care of other providers over which they have little or no influence, according to some stakeholders.
One stakeholder we interviewed told us that this lack of trust in the ability of others to effectively coordinate and co-manage care spawns an unwillingness to enter into value-based payment models that require extensive care coordination across numerous providers to achieve shared savings. Value-based payment models require a full year of performance data, and the time lag between data submission and when a practice receives its performance report delays practices’ understanding of actions needed to improve care delivery and receive financial rewards. Further, the number and variation of quality measures required by Medicare and private payers are burdensome for small and rural practices, and practices with small patient populations face quality and efficiency measurement that may be more susceptible to being skewed by patients who require more care or more expensive care. Difficulties with receiving timely performance feedback. Some stakeholders mentioned a variety of issues related to delays in performance assessments associated with value-based payment models. As noted previously, it takes a full year of performance in addition to the time it takes for data about that year to be analyzed before information is known about a physician practice’s performance within a model. According to some stakeholders, this time lag makes it difficult for the practices to efficiently identify the areas that are working well and those that need improvement. For example, one stakeholder told us that a physician may receive the results of his or her performance within a model in 2016 for care that was provided in 2014. This limits physicians’ ability to make meaningful and timely changes to the care they provide. Additionally, some stakeholders reported that practices may not understand how best to improve their performance due to the limited information they receive from CMS. Misalignment of quality measures between various value-based payment models and payers. 
Some stakeholders told us that physician practices can be overwhelmed and frustrated by the number of quality measures that they need to report on for participation in value-based payment models and that the measures used by Medicare value-based payment models are not well-aligned with those used by commercial payers. Even if payers have similar quality measures, there may be slight variations in their calculation, which makes reporting burdensome. One stakeholder who works within an ACO stated that there are 58 unique quality measures across all the payers he works with. Performance measurement accuracy for practices with a small number of Medicare patients. Since small and rural physician practices often have fewer patients to measure, their performance may be more susceptible to being skewed by outliers, according to some stakeholders we interviewed. Even if these practices have only a few patients who require more comprehensive or expensive care, these few can disproportionately affect their performance negatively, and in turn the financial risk they bear, compared to practices with much larger patient populations. For at least one model type—ACOs—this challenge may be addressed by a requirement that an ACO have a minimum number of patients to participate, as well as by CMS adjusting the performance of some ACOs to account for their size. This patient size requirement and adjustment can help ensure statistical reliability when assessing an ACO’s performance against measures. However, some stakeholders told us that this requirement also has its challenges. For example, it can be particularly difficult for rural practices to find other practices to group with to meet this patient requirement. To participate in value-based payment models, small and rural physician practices may feel pressure to join with other practices.
Model participation may also mean that physician and other practice staff must take on additional administrative responsibilities to meet conditions of participation. Furthermore, practices must work to stay abreast of regulations and model requirements as the models evolve. Difficulties with maintaining practice independence. Literature we reviewed and some stakeholders indicated that, in the movement toward value-based payment models, many small and rural practices feel pressure to join other practices or providers (such as a hospital or health system) to navigate these models even if the practices would prefer to remain independent. Limited time of staff and physicians to complete administrative duties required for model participation. Some stakeholders reported that both physicians and practice staff had to juggle many administrative responsibilities as part of participating in value-based payment models, which may be especially challenging for small and rural practices that tend to have fewer staff. Administrative duties may conflict with time needed for patient care. For example, one stakeholder told us that physicians are often busy seeing patients throughout the day and are unable to complete administrative tasks, such as attending meetings. Small physician practices may have limited staff time to devote to other administrative duties, including completing required documentation or collecting and reporting data on quality measures needed for participation in value-based payment models. Practices that want to add staff may also face challenges, such as finding qualified staff who are experts in their field and who understand the requirements associated with value-based payment models. Difficulties with understanding and managing compliance with the terms and conditions of waivers related to various fraud and abuse laws.
The Secretary of Health and Human Services is authorized to waive certain requirements as necessary to implement the Shared Savings Program to encourage the development of ACOs and to test innovative payment and service delivery models, such as BPCI. However, some stakeholders stated that understanding and navigating the terms and conditions of waivers can be difficult and overwhelming for practices to manage. This may be especially true for small and rural practices that have less time to develop the knowledge necessary to understand waiver options or the resources to hire assistance in doing so, such as legal counsel. Difficulties with staying abreast of regulatory changes and managing compliance with multiple requirements of value-based payment models. Some stakeholders said that small and rural physician practices find it challenging to stay informed of and to incorporate regulation and requirement changes associated with value-based payment models. This may be due, in part, to small and rural practices often having fewer staff and resources to monitor changes. We found that organizations that can help small and rural practices with challenges to participating in value-based payment models can be grouped into two categories: partner organizations and non-partner organizations. Partner organizations share in the financial risk associated with model participation and provide comprehensive services. Non-partner organizations do not share financial risk but provide specific services that can help mitigate certain challenges. However, not all small and rural physician practices have access to services provided by these organizations. Based on the 38 stakeholder interviews we conducted and the related documentation collected, we found that some organizations serve as partners to small and rural physician practices.
As partners, these organizations share in the financial risk associated with the models and provide comprehensive services that help with challenges in each of the five key topic areas affecting small and rural physician practices. Partner organizations can help with a variety of value-based payment models, including ACOs, comprehensive primary care models, and bundled payments. Certain partner organizations, known as awardee conveners, have binding agreements with CMS to assist providers with participation in BPCI, including helping them plan and implement care redesign strategies to improve the health care delivery structure. Other partner organizations may bring small and rural practices together to help form and facilitate an ACO. In this role, these partner organizations can help small and rural practices fulfill any requirements for an ACO to have a minimum number of patients and facilitate the reporting of performance measures as a larger group while still allowing practices to remain independent. This type of assistance can mitigate two of the challenges stakeholders have identified—performance measurement accuracy for practices with a small number of Medicare patients and maintaining practice independence. Depending on the arrangement between the practices and the partner organization, the partner organization may receive all or some of the savings generated by the ACO or bundled payment, as well as share in any financial losses incurred. For example, a partner organization stakeholder stated that the organization—which helps form ACOs—retained 40 percent of the shared savings, and the physician practices received the remaining 60 percent. Similarly, another partner organization stakeholder told us that the organization took on the entire share of any financial losses incurred and received a third of any gains.
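The two arrangements described above can be sketched as a simple split of an ACO's net result, where the partner organization's share of gains and of losses may differ by agreement. The function and figures below are illustrative only, not the terms of any actual contract.

```python
def distribute_result(net_result: float, partner_gain_share: float,
                      partner_loss_share: float) -> tuple[float, float]:
    """Split net shared savings (positive) or losses (negative) between a
    partner organization and its member practices."""
    share = partner_gain_share if net_result >= 0 else partner_loss_share
    partner = net_result * share
    return partner, net_result - partner

# First arrangement: the partner retains 40 percent of shared savings.
print(distribute_result(1_000_000, 0.40, 0.40))  # (400000.0, 600000.0)
# Second arrangement: the partner receives a third of gains but absorbs all
# losses, leaving the practices with none of a hypothetical $300,000 loss.
print(distribute_result(-300_000, 1 / 3, 1.0))   # (-300000.0, 0.0)
```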
In some agreements, practices may receive different distributions of the financial savings based on their performance compared to set performance goals or to other practices in the group. In this type of arrangement with a partner organization, a practice will receive, at most, a portion of its shared savings, which could extend the time it takes practices to realize financial gains. See figure 1 for how sharing financial risk can mitigate a challenge faced by small and rural physician practices. Comprehensive services provided by partner organizations can either directly or indirectly help to mitigate many of the participation challenges faced by small and rural physician practices. As a way of directly assisting, for example, partner organizations can aid small and rural physician practices with population health management by analyzing data to identify high-risk patients such as those with chronic conditions who need comprehensive care management. Some challenges, however, are addressed only indirectly. For example, one challenge identified for small and rural physician practices was their limited ability to take on financial risk because they have fewer financial reserves than their larger counterparts. While partner organizations do not directly remedy these practices’ smaller reserves, they can indirectly assist by taking on part or all of the financial risk of model participation. A small physician practice stakeholder told us that without the services provided by a partner organization, the practice would not be able to participate in the model. While the services offered by partner organizations can vary, they generally include the following. Provide or share resources. Partner organizations can support the cost of resources needed for model participation, such as health IT and care coordination resources, or help share resources across many practices to reduce costs for individual small and rural practices.
For example, an awardee convener stakeholder told us that the organization manages a care innovation center staffed with about 70 nurses who work with patients and providers to make appointments and coordinate services, among other population management activities. Another partner organization stakeholder told us that the organization had formed a pharmacy hub in which the pharmacist works directly with the practices on comprehensive medication management. Further, some stakeholders stated that partner organizations can help reduce the costs of EHR systems and data analytics for the practices by, for example, sharing the EHR system and data analytics staff across practices. One partner organization stakeholder told us that, in another type of arrangement, the partner organization provides up-front funding for technology and other resources in return for 40 percent of any shared savings generated by the ACO. This arrangement can be particularly helpful to small and rural practices that may not have a lot of capital to invest. See figure 2 for the challenges mitigated by partner organizations by providing or sharing resources. Manage health IT systems and data. Partner organizations generally work with practices to enhance the interoperability of the practices’ data systems so that data can be shared and easily retrieved for analysis. For example, an awardee convener stakeholder told us that the organization had developed a way to connect providers’ EHR systems to its data system, as well as developed software that providers can use to more easily share data among themselves. Similarly, partner organizations can manage data and provide analytics. Some partner organization stakeholders stated that they conduct analysis and provide reports and data to physicians to help them with population management, such as identifying high-risk patients and practice improvement needs. 
A partner organization stakeholder told us that the organization collects beneficiary-level data from all payers—including those that the partner organization does not work with—to monitor quality improvements and identify where physicians missed opportunities to diagnose patients. See figure 3 for the challenges that are mitigated by partner organizations managing health IT systems and data. Provide education and training related to population care management. Partner organizations can provide on-site training and mentoring for the practices’ staff related to population management care delivery. This can help small and rural physician practices transition their staff, who may be accustomed to being paid based on volume, to a value-based care system that focuses on population health management. It can also provide practices with tools on how to manage and engage patients, such as patients who are not accustomed to having regular wellness visits or using technology. For example, one partner organization stakeholder we interviewed said that the organization holds quality improvement workshops for physicians every quarter to work on implementing population health management activities, such as wellness visits. Another partner organization stakeholder said that the organization has practice transformation staff who spend about 4 hours each week working directly with each physician practice to implement a care management program. This stakeholder stated that it was important to provide physician practices with the tools, but it was just as important to provide in-practice support on how to use those tools and help to strengthen the practice. See figure 4 for the challenges that are mitigated by partner organizations providing education and training on population health management. Provide population health management services. 
Partner organizations can provide population health management activities, including identifying and tracking high-risk patients, scheduling wellness visits, and managing patients with chronic conditions. For example, an awardee convener stakeholder told us that the organization helps providers by checking on whether the patients have rides to their appointments, setting up patients’ appointments, and contacting other social services. Another partner organization stakeholder told us that the organization has care navigators, who work with physician practices to engage with patients and help those at high health risk, as well as patient care advocates, who identify patients with gaps in care or who need annual wellness visits. See figure 5 for the challenges that are mitigated by partner organizations providing population health management services. Measure quality and efficiency performance. Partner organizations can conduct analyses and provide reports to physician practices to help them understand and track their performance. For example, some partner organization stakeholders we spoke with measured physician practice performance against a defined set of quality measures and compared practices with their peers. These reports can help physician practices identify opportunities for quality improvement and savings without waiting for performance feedback from CMS. For example, one partner organization stakeholder told us that the organization analyzes data at the patient and physician level looking for opportunities to help the physician practice gain efficiencies, as well as identify differences in quality among practices. This partner organization also uses the data to educate the physician practices about patient attribution and differences in quality. 
According to another partner organization stakeholder, the analysis the organization conducts for its physician practices helps these practices manage the number and variety of performance measurements associated with value-based payment models. See figure 6 for the challenges that are mitigated by partner organizations helping physician practices measure their quality and efficiency performance. Manage compliance with requirements of value-based payment models. Partner organizations can provide assistance with value-based payment model requirements, as small and rural physician practices may not be structured to handle this administration. For example, an awardee convener stakeholder stated that it serves as a liaison with CMS and prepares and submits all CMS-required documentation on behalf of providers. Another partner organization stakeholder stated that the organization’s legal counsel explains the various waivers relevant to the ACO, as well as the requirements of these waivers to providers in the ACO. See figure 7 for the challenges that are mitigated by partner organizations helping physician practices manage compliance with the rules and regulations of value-based payment models. Based on the 38 stakeholder interviews we conducted and the related documentation collected, the other category of organizations we identified that help small and rural practices participate in value-based payment models is non-partner organizations. Non-partner organizations provide services that are generally not as comprehensive as those of partner organizations, and they do not share financial benefits or risks with the practices. The specific services they provide—primarily in the key topic areas of health IT and data, quality and efficiency performance measurement and reporting, and population health management care delivery—help with certain challenges. The source of funding for non-partners also varies. 
For example, non-partner organizations might be hired by the practice itself or funded separately by government grants. The following are the types of non-partner organizations identified in our review and the types of services they can provide to small and rural physician practices. Facilitator conveners. These organizations have arrangements with providers or awardee conveners to provide administrative and technical assistance to aid with participation in BPCI. Although facilitator conveners do not bear risk, they are similar to awardee conveners in that they can assist physician practices and other providers with quality measurement and performance activities. For example, a facilitator convener could help track quality measures for providers. They can also help physician practices transition toward population health management care delivery by providing education to physician practices through webinars, for example, and by helping providers develop processes to coordinate episodes of care across providers. Health IT vendors. These technology companies are hired by physician practices to provide EHR systems, as well as data analytics software and services. Health IT vendors can assist practices with system interoperability challenges. For example, one health IT vendor stakeholder said that the vendor provides a connectivity engine so that physician practices’ EHR systems are interoperable with other providers and payers. Health IT vendors can also conduct analyses— such as using data to evaluate physician practices against performance measures to identify additional opportunities for improvement—or help develop population health management processes. Health IT vendors can help practices manage misalignment of quality measures between payers. 
A health IT vendor stakeholder told us that the organization uses numerous codes within practices’ datasets to allow practices to produce reports for multiple payers whose quality measures do not align; however, the stakeholder added that this process is time-intensive and could increase costs for the practices. Health IT vendors can also provide education and training for physician practices on best practices for EHR integration and optimization. A health IT vendor stakeholder told us that for small physician practices the vendor generally provides EHR services; revenue and practice management services; and patient engagement services, which can include automatic check-in for patients, patient payment collection, and patient portals so practices can communicate electronically with patients. Regional Extension Centers (REC). RECs provide on-the-ground technical assistance intended to support small and rural physician practices, among others, that lack the resources and expertise to select, implement, and maintain EHRs. According to the Department of Health and Human Services’ (HHS) documentation, RECs stay involved with physician practices to provide consistent long-term support, even after the EHR system has been implemented. REC services include outreach and education on systems, EHR support (e.g., working with vendors, helping to choose a certified EHR system), and technical assistance in implementing health IT. Technical assistance in implementing health IT includes using it in a meaningful way to improve care, such as using systems to support quality improvement and population health management activities. Sixty-two RECs were funded through cooperative agreements by HHS’s National Learning Consortium. RECs include public and private universities and nonprofits. Quality Innovation Network-Quality Improvement Organizations (QIN-QIO). QIN-QIOs work with small and rural physician practices, among others, to improve the quality of health care for targeted health conditions. 
For example, if a QIN-QIO has an initiative related to a specific health condition, such as a heart condition, the QIN-QIO would help practices improve clinical quality measures for patients with this condition, such as measures for blood pressure, cholesterol, and smoking cessation. The assistance provided and work performed by QIN-QIOs can vary greatly. A QIN-QIO stakeholder we interviewed told us that the QIN-QIO helps providers learn how to produce a quality report, how to interpret quality measures, and how to improve those measures, as well as educates providers on various requirements of value-based payment models. Other activities the network performs include educating physician practices on how to capture and understand EHR data since, according to this same stakeholder, small and rural physician practices often struggle with proper documentation for quality and performance management. The 14 QIN-QIOs each cover a region of two to six states and are awarded contracts from CMS. Practice Transformation Networks (PTN). PTNs are learning networks designed to coach, mentor, and assist clinicians in developing core competencies specific to population health management to prepare those providers that are not yet enrolled in value-based payment models. According to CMS officials, PTNs work with physician practice leadership to assist with patient engagement, use data to drive transformation of care toward population health management, and develop a comprehensive quality improvement strategy with set goals. The degree of help provided by the PTN depends on how far along the physician practice is in transforming to value-based care, according to CMS officials. PTNs provide technical assistance to physician practices on topics such as how to use data to manage care and move toward population health management. 
For example, a PTN stakeholder told us that the PTN makes sure the physician practice creates a registry to track high-risk patients and then uses the registry to perform outreach to patients to initiate follow- up care appointments. Similarly, PTNs can help ensure that practices use a referral tracking system, such as a system to determine whether a patient that a practice referred for a mammogram actually had the mammogram. PTNs can also provide other educational resources such as live question-and-answer chat sessions, peer-to-peer webinars, and computer modules that cover topics including quality improvement and patient engagement. The 29 PTNs receive funding through CMS grants and are part of CMS’s Transforming Clinical Practice Initiative. The PTNs include public and private universities, health care systems, and group practices. The services of non-partner organizations could help assist with some challenges we identified for small and rural practices. (See fig. 8.) Although we found that organizations can assist with many of the challenges identified for small and rural practices, not all such practices can access these services for a variety of reasons. First, some stakeholders we interviewed said that small or rural physician practices do not necessarily have access to an organization, such as an organization that forms ACOs. For example, some ACO stakeholders told us that they used criteria to determine which physician practices they would reach out to for inclusion in the ACO. One ACO stakeholder stated that the organization analyzes public data to identify the physician practices that look like good candidates for population health management and then talks to the practices about a possible partnership. Therefore, some small or rural physician practices struggling with changes needed to deliver population health management may not be contacted by an organization that forms ACOs. 
Second, we heard from some stakeholders that the limited resources of many small and rural physician practices may hinder their access to services provided by organizations. For example, small and rural physician practices may not have the financial resources to hire organizations that could assist them with participation, such as health IT vendors. Also, according to some stakeholders, organizations’ ability to assist practices is hindered when the practices struggle to make the initial investments needed to participate, such as hiring new staff or developing necessary data systems. Last, even if practices have access to an organization, that organization may not offer the services that the practice needs since the services offered can vary by organization. For example, not all partner organizations that form ACOs have access to and use other payers’ data to aid in the management of patient care. When we asked one partner organization stakeholder how the organization received access to data, the stakeholder stated that it was because of long-standing relationships it had with payers. Other partners that form ACOs may not be able to provide similar data to share. Additionally, according to CMS officials, each facilitator convener and awardee convener has discretion in the services it provides, and the services can vary, as can the services provided by CMS and HHS grantees—RECs, QIN-QIOs, and PTNs. We provided a draft of this report to CMS for comment. CMS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and the CMS administrator. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, Greg Giusto, Assistant Director; Christie Enders, Analyst-in-Charge; Deirdre Gleeson Brown, Analyst-in- Charge; and Samantha Pawlak made key contributions to this report. Also contributing were George Bogart, Beth Morrison, and Vikki Porter.
Based on a review of literature and interviews with 38 stakeholders, GAO identified challenges faced by small and rural physician practices when participating in Medicare's new payment models. These models, known as value-based payment models, are intended to reward health care providers for resource use and quality, rather than volume, of services. The challenges identified are in five key topic areas.
The Homeland Security Act of 2002 combined 22 federal agencies specializing in various missions under DHS. Numerous departmental offices and seven key operating components are headquartered in the NCR. When the department was formed, the headquarters functions of its various components were not physically consolidated, but instead were dispersed across the NCR in accordance with their history. For example, in 2007, DHS employees in the NCR were located in 85 buildings and 53 locations, accounting for approximately 7 million gross square feet of government-owned and -leased office space. As of July 2014, DHS employees were located in 94 buildings and 50 locations, accounting for approximately 9 million gross square feet of government-owned and -leased office space. GSA, the landlord for the civilian federal government, acquires space on behalf of the federal government through new construction and leasing, and acts as a caretaker for federal properties across the country. Federal agencies give GSA information on their program and mission requirements and GSA then works with those agencies to develop and refine their real estate space needs. As such, GSA had the responsibility to select the specific site for a new, consolidated DHS headquarters facility, based on DHS needs and requirements. In addition, GSA is responsible for awarding and managing contracts for design and construction. DHS began planning the consolidation of its headquarters in 2005. According to DHS, increased colocation and consolidation were critical to achieve the following objectives: (1) improve mission effectiveness, (2) create a unified DHS organization, (3) increase organizational efficiency, (4) size the real estate portfolio accurately to fit the mission of DHS, and (5) reduce real estate occupancy costs. DHS and GSA developed a number of capital planning documents to guide the DHS headquarters consolidation process. 
To start, DHS identified its original housing requirements in 2006 during the development of the DHS National Capital Region Housing Master Plan. In the housing master plan, DHS identified a requirement for approximately 7.1 million square feet of total office space in the NCR to accommodate DHS headquarters operations, with 4.5 million square feet on a secure campus. DHS also developed a program of requirements for DHS headquarters components that included a listing of current and projected space needs. In June 2007, DHS released its Consolidated Headquarters Collocation Plan. The colocation plan summarized component functional requirements and the projected number of seats needed on- and off-campus for NCR headquarters personnel. According to DHS, the colocation plan is based on the idea that the consolidated headquarters campus serves as a central hub for leadership, operations coordination, policy, and program management in support of the department’s strategic goals. Table 1 summarizes DHS and GSA key planning documents. According to GSA’s planning documents, the West Campus of St. Elizabeths, held by GSA, was the preferred site for DHS headquarters consolidation because (1) it could accommodate the 4.5 million square feet of office space, plus parking, and (2) was available immediately—two key requirements for DHS. GSA developed a Master Plan in 2009 that was to guide the overall development at the St. Elizabeths site. The plan was vetted through numerous stakeholders and received final approval in 2009. Construction started at the campus in 2009. Figure 1 depicts the campus as envisioned in the 2009 Master Plan. The full development of the St. Elizabeths Campus was intended to occur in three phases, with subphases, over 8 years. Table 2 shows the original planned construction for each of the project’s three phases, including the subphases, and their original and current estimated completion dates. From fiscal years 2006 through 2014, the St. 
Elizabeths consolidation project had received $494.8 million through DHS appropriations and $1.1 billion through GSA appropriations, for a total of over $1.5 billion. As part of this total, the DHS headquarters consolidation project received $650 million from the American Recovery and Reinvestment Act of 2009 (ARRA). In general, GSA funding was used for building construction and renovation, and other major infrastructure; DHS funding was used for tenant-specific capabilities, such as information technology (IT) infrastructure, furniture, and secure spaces, among other things. From fiscal year 2009—when construction began—through the time of the fiscal year 2014 appropriation, however, the gap between requested and received funding was over $1.6 billion. Figure 2 compares funds requested and received for the project for fiscal years 2006 through 2014. In 2007, DHS and GSA estimated that the total cost of construction at St. Elizabeths was $3.26 billion, with construction to be completed in 2015, with potential savings of $1 billion attributable to moving from leased to owned space. However, according to DHS and GSA officials, the lack of consistent funding has affected cost estimates, estimated completion dates, and savings. Table 3 shows changes over time to GSA cost estimates and scheduled completion dates, as well as projected savings associated with moving DHS staff from leased space into federally owned space. The majority of funding for the St. Elizabeths consolidation project through fiscal year 2013 has been allocated to the construction of a new consolidated headquarters for the U.S. Coast Guard (USCG) on the campus. In 2006, DHS and GSA projected that USCG would move to St. Elizabeths in 2011, but the move was delayed because sufficient funding for Phase 1 of the project was not available until fiscal year 2009. In 2009, DHS and GSA updated the projected completion date to the summer of 2013. Subsequently, USCG moved to the new building in August 2013. 
Figure 3 shows schedule slippage for the overall project. According to DHS and GSA officials, beginning in calendar year 2009, when construction commenced, Phase 1 of the overall project was successfully executed on schedule despite funding delays and shortfalls during fiscal years 2011 and 2012. GSA officials told us that, from fiscal years 2009 through 2013, DHS and GSA had requested about $1.6 billion to complete Phase 1 of the project but received only about $933 million for this purpose over the period. They said that they completed this phase of the project by deferring work planned to be completed in Phase 1 so that the USCG building could be occupied in 2013. GSA officials said that their efforts to save money by deferring work included reducing the scope of work needed to complete access road stonework and deferring landscaping and construction work on one building and the visitors’ center to future years. Figure 4 shows the entrance to the new U.S. Coast Guard Headquarters Building on the St. Elizabeths Campus. Congress, the Office of Management and Budget (OMB), and GAO have all identified the need for effective capital decision-making among federal agencies. In addition, budgetary pressures and demands to improve performance in all areas put pressure on agencies to make sound capital acquisition choices. OMB’s Capital Programming Guide, a supplement to OMB Circular A-11, provides guidance to federal agencies in conducting capital decision-making. GAO also developed its Executive Guide: Leading Practices in Capital Decision-Making, which provides detailed guidance to federal agencies on leading practices for the four phases of capital programming—planning, budgeting, acquiring, and managing capital assets. These practices are, in part, intended to provide a disciplined approach or process to help federal agencies effectively plan and procure assets to achieve the maximum return on investment. 
DHS and GSA’s overall plans for headquarters consolidation do not fully conform with leading capital decision-making practices related to planning. DHS and GSA officials reported that they have taken some initial actions that could affect consolidation plans, such as adopting recent workplace standards at the department level and assessing DHS’s leasing portfolio. These types of actions may facilitate consolidation planning in a manner consistent with leading practices. However, the current collection of plans, which DHS and GSA finalized between 2006 and 2009, have not been updated to address these changes and funding instability that could affect future headquarters needs and capabilities. DHS and GSA have not conducted a comprehensive assessment of current needs, identified capability gaps, or evaluated and prioritized alternatives that would help officials adapt consolidation plans to changing conditions and address funding issues as reflected in leading practices. In addition, DHS has not consistently applied its acquisition guidelines to review and approve the project’s development. According to DHS and GSA officials, they have begun to work together to consider changes to the DHS headquarters consolidation plans. However, DHS and GSA have not announced when new plans will be issued, and it is unclear if they will fully conform with leading capital decision-making practices to help plan project implementation. In the overall capital decision-making framework, planning is the first phase—and arguably the most important—since it drives the remaining phases of budget, procurement, and management. The results from this phase are used throughout the remaining phases of the process; therefore, if key practices during this phase are not followed, there may be repercussions on agency operations if poor capital investment decisions are made. 
Given that some aspects of the project are complete, we compared DHS and GSA headquarters consolidation efforts to date with the 5 of 12 capital decision-making practices that are most applicable to planning for the remaining segments of the consolidation. These practices are conducting comprehensive assessments of needs to achieve results, identifying gaps between current and needed capabilities, evaluating alternatives to best decide how to meet any gaps, prioritizing and selecting projects based on established criteria, and establishing a review and approval framework supported by analysis. Appendix II lists these 5 planning practices along with the remaining practices that focus on other aspects of the overall capital decision-making framework not included in the scope of this review, such as budgeting, procurement, and management. In addition to the above, one important aspect of capital decision-making is recognition of the dynamic nature of capital plans and the planning process. The following compares DHS and GSA planning and oversight for the remainder of the DHS headquarters consolidation project with the leading capital decision-making practices identified above. 
During the early stages of planning for the project, DHS and GSA developed various reports and planning documents (see table 1) to comprehensively assess DHS needs for a consolidated headquarters. DHS planning documents identified office space and DHS program requirements, and discussed which DHS functions needed to be colocated to achieve DHS’s mission. However, the plans, which were developed prior to the release of GSA’s Master Plan in 2009, have not kept pace with changes since then in workplace standards and do not account for delays attributable to inconsistent funding. Workplace standards. Leading organizations we studied developed comprehensive needs assessments that usually cover 5 or 6 years into the future and are updated frequently—for example, as a part of the organizations’ budget cycles. A needs assessment is to examine, among other things, external factors that affect or influence the organizations’ operations, such as workplace standards. However, the current plans for St. Elizabeths do not reflect changes in the workplace, such as telework and smaller standard work areas that could reduce the volume of space needed to house DHS employees. Furthermore, leading practices state that utilizing current and accurate information is essential when taking an inventory of current capabilities and assessing future needs. While DHS and GSA’s original plans called for a certain size and configuration to house employees at St. Elizabeths, changes in workplace standards could affect the overall footprint of the St. Elizabeths project or increase the number of staff designated to occupy space at the site, or both. This could ultimately reduce the number of DHS headquarters employees housed in leased space. Recent federal initiatives have been introduced to reduce federal agency space. 
For example, a June 2010 presidential memorandum directed agencies to explore how innovative approaches to space management and alternative work arrangements, such as telework, could help reduce the need for real estate and office space. Another alternative work arrangement, hoteling, would allow employees to work at multiple sites and use non-dedicated, nonpermanent workspaces assigned for use by reservation on an as-needed basis. Implementing hoteling could also reduce an agency’s need for office space. Subsequently, in May 2012, OMB issued a memorandum that, among other things, establishes the Freeze the Footprint policy. This policy directs agencies to restrict growth in their civilian real estate inventory. OMB supplemented that policy in March 2013 with implementing guidance for Freeze the Footprint that required agencies not to increase the total square footage of their office and warehouse inventory, using fiscal year 2012 as a baseline. OMB’s implementing guidance also directs agencies to use various strategies to maintain the baseline, including consulting with GSA about using technology and space management to consolidate, increasing occupancy rates in facilities, and eliminating lease arrangements that are not cost- or space effective. Inconsistent project funding. In addition to workplace standards, current funding for the St. Elizabeths project has not aligned with what DHS and GSA initially planned. As discussed earlier, from fiscal year 2009—when construction began—through the time of the fiscal year 2014 appropriation, the gap between what DHS and GSA requested and what was received was over $1.6 billion. According to DHS and GSA officials, this funding gap has created cost escalations of over $1 billion and schedule delays of 10 years, relative to their original estimates. 
DHS and GSA officials cited funding shortfalls as being disruptive in sequencing work, such as excavating soil for the DHS Operations Center and enabling repairs on the foundation of the St. Elizabeths Center Building. According to these officials, if funding had been available, excavation work associated with the new USCG building could have been extended to these other parts of the project without interruption. Officials said that if they had funds to do the excavation, they could have completed it while the site was under construction, instead of having to work around the full occupation and operation of the USCG building. DHS and GSA deemed this a lost opportunity to purposely sequence the work to maximize construction efficiency and reduce the overall cost of development. As discussed above, DHS and GSA plans have not been updated to reflect changes in workplace standards and delays attributable to funding shortfalls. A senior DHS project official stated that the basis of the consolidation plan remains the same and that it would be illogical to discard the plan mid-stream in favor of some unspecified alternative, given the years of comprehensive analysis that underpin the development and approval of the original 2009 Master Plan. However, DHS and GSA officials reported that they have begun to work together to consider changes to DHS headquarters consolidation plans. Specifically, in January 2014, DHS and GSA officials stated that they are currently reassessing requirements and alternatives for consolidation and colocation in recognition that workplace conditions have changed since the plan was formulated, beginning in 2006. The agencies have not announced when new plans will be issued.
Furthermore, because final documentation of agency deliberations or analyses has not yet been developed, it is unclear if any new plans will be informed by an updated comprehensive needs assessment and capability gap analysis as called for by leading capital decision-making practices. Until DHS and GSA update their capital planning documents related to DHS headquarters consolidation—showing how DHS and GSA asset portfolios for the consolidation efforts meet the goals and objectives of the agencies’ strategic and annual performance plans and how these assets will be used to help agencies achieve their goals and objectives—agency managers and Members of Congress will be limited in their ability to fully understand how DHS and GSA intend to accomplish the consolidation and, consequently, to make informed decisions about future multi-billion dollar investments. Utilizing an updated comprehensive needs assessment and gap analysis of current and needed capabilities to inform revised headquarters consolidation plans can better position DHS and GSA to assure decision makers within both agencies and in Congress that consolidation is justified. Changes to workplace standards and funding instability provide GSA a commensurate opportunity to evaluate and prioritize alternative construction and leasing options to meet DHS space needs in the NCR. As stated earlier, leading capital decision-making practices call for agencies to determine how best to bridge performance gaps by identifying and evaluating alternative approaches. Before choosing to purchase or construct a capital asset or facility, leading organizations are to carefully consider a wide range of alternatives, such as using existing assets, leasing, or undertaking new construction. After evaluating alternatives, leading practices call for organizations to select projects based on a relative ranking of investment proposals.
This prioritization of projects is important because limited resources require organizations to choose alternatives with the highest benefit or return. In the years leading up to 2009, when GSA issued the project Master Plan, DHS and GSA conducted alternatives analyses and used the results of these efforts to support the existing DHS headquarters consolidation plan and prioritize the individual projects that encompass the larger consolidation effort. For example, in 2007, we found that DHS examined various scenarios for housing DHS employees, such as a “campus” scenario, which would entail consolidation resulting in several campuses, including one large campus. Likewise, in 2008, GSA analyzed the feasibility of consolidating DHS headquarters at a variety of sites throughout the NCR, and determined that the only site with space available to accommodate DHS needs was the St. Elizabeths campus. After the site was selected, DHS and GSA worked together to prioritize the multiple construction phases that constitute the overall St. Elizabeths campus development. Given changes in workplace standards, among other things, as well as cost escalation and schedule slippage associated with funding instability, DHS and GSA would benefit from updating their alternatives evaluation and prioritizing the range of leasing and construction alternatives. One potential alternative, for example, would be for DHS to consider moving entire components from currently leased space to St. Elizabeths, rather than only the leadership of particular components as originally envisioned. Moving more staff than currently planned to the campus from leased space could potentially increase long-term cost savings and facilitate more effective collaboration. If DHS and GSA were to take such an action, this would require an overall change in their approach to housing staff in government-owned and -leased space—a change beyond that already considered within the context of DHS’s 2007 colocation plan. 
Even if DHS were to consider moving smaller portions of its workforce rather than entire components—for example, certain offices within components—to St. Elizabeths, DHS would need to consider the cascading effect of those changes and develop updated plans to reflect that. This would entail a reconsideration of space needs in both owned and leased space, and a commensurate reevaluation of funding needs, depending on, among other things, the volume and type of available or projected owned and leased space, and associated costs and benefits, as well as alternative estimates showing when owned or leased space might be available. DHS and GSA officials acknowledged that new workplace standards could create a number of new development options to consider, as the new standards would allow for more staff to occupy the current space at St. Elizabeths than previously anticipated. DHS officials told us that when the St. Elizabeths project was conceived, the standard office workspace was 200 square feet per person. In response to the 2012 Freeze the Footprint initiative described earlier, where applicable, DHS intends to reduce the average space per person for its employees across the department to 150 square feet of space per person. However, this potential change is not reflected in current headquarters consolidation plans. As a result, if DHS and GSA choose to keep the original 4.5 million square foot footprint they initially planned, they could increase the number of staff occupying the 14,000 seats at St. Elizabeths from 14,000 to about 20,000. Conversely, if DHS and GSA decide to keep 14,000 staff they initially planned to work at St. Elizabeths, they could shrink the overall footprint to about 3.7 million square feet. According to DHS and GSA, the agencies have taken steps to adapt to the changes in workplace standards. For example, flexible workspaces were incorporated into the build-out of the Coast Guard headquarters building during construction. 
Specifically, the internal build-out of the space has flexible configurations that can be easily and inexpensively changed to support changes in the workplace environment, in the event that DHS decided to expand or reduce the workforce or space. Table 4 shows the original estimates, and the effect of increasing the number of staff occupying 14,000 seats from 14,000 to 20,000 (scenario A), or reducing the footprint to about 3.7 million square feet by keeping the estimated number of staff at 14,000 (scenario B). Adopting either scenario or some variation between the two could have a significant impact on the scope and cost of the project and could change how DHS and components perform their missions. The following is a description of potential alternatives that DHS and GSA could consider in light of new workplace standards. DHS and GSA officials said they are considering these types of options, although they have not yet developed final documents or analysis. For example:

Keep the current estimated number of staff (14,000) with a reduced square footage (3.7 million). GSA and DHS could, over the short term, reduce the overall cost of the project with a decrease in construction costs. Furthermore, DHS and the components slated to move to St. Elizabeths would likely carry out their mission at the new location as originally intended.

Maintain the current square footage projection (4.5 million square feet) while increasing the number of staff occupying the 14,000 seats (from 14,000 to 20,000). This change could result in an increase of the overall short-term cost of the project because GSA might have to build out additional office space and meet requirements for additional services, such as computer and telecommunication lines and technological services. However, the increased staff at St. Elizabeths could also increase long-term savings because DHS would not need to lease space for an increased number of employees should DHS decide to move more to St. Elizabeths.
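The space-per-person trade-off behind scenarios A and B can be sketched with simple arithmetic. The square-footage and staffing figures below come from the report; the purely proportional scaling is our illustrative assumption, since actual plans also account for common and support space, which is likely why the report's rounded figures (about 20,000 staff, about 3.7 million square feet) differ slightly from the raw ratios:

```python
# Back-of-envelope illustration of scenarios A and B. Figures come from
# the report; the proportional scaling is an illustrative assumption --
# real plans include common/support space beyond the per-person standard.
OLD_STANDARD = 200  # square feet per person under original planning
NEW_STANDARD = 150  # square feet per person targeted after Freeze the Footprint

original_footprint = 4_500_000  # planned square feet at St. Elizabeths
original_staff = 14_000         # seats originally planned

# Scenario A: keep the 4.5 million sq ft footprint, house more staff.
staff_scenario_a = round(original_staff * OLD_STANDARD / NEW_STANDARD)

# Scenario B: keep 14,000 staff, shrink the footprint.
footprint_scenario_b = round(original_footprint * NEW_STANDARD / OLD_STANDARD)

print(staff_scenario_a)      # 18667 -- the report rounds to "about 20,000"
print(footprint_scenario_b)  # 3375000 -- the report cites "about 3.7 million"
```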
In addition, as discussed earlier, from fiscal years 2009 to 2014, the gap between requested funding and funding received was over $1.6 billion. According to DHS and GSA officials, this funding gap has created cost escalations of over $1 billion. To help address the variation in funding requested and received, DHS and GSA have revised their funding strategy to focus on developing smaller construction segments that are intended to be more financially viable and less subject to uncertainty. For example, DHS and GSA may request full funding for a construction segment that will result in a functional, usable building and not be dependent on additional future funding to complete. This funding strategy is consistent with a leading practice related to the budgeting phase of capital decision-making, which calls for agencies to budget for projects in useful segments. Specifically, DHS and GSA would allocate funding for the remaining work at St. Elizabeths into usable segments that are independent of the overall consolidation, rather than incrementally over the length of the project, as has been done in the past. Structuring the funding strategy around such segments may be a viable approach to managing and overseeing a project with a scope and potential cost as large as St. Elizabeths, particularly in a constrained budgetary environment. Schedule delays—up to 10 years relative to original estimates, according to DHS officials—have also resulted from the gap between funding requested and funding received. These delays have posed challenges for DHS in terms of its current leasing portfolio. Specifically, DHS’s long-term leasing portfolio was developed based on the original expected completion date for St. Elizabeths development in 2016. However, according to DHS leasing data, 52 percent of DHS’s current NCR leases will expire in 2014 and 2015, accounting for almost 39 percent of its usable square feet.
See figure 5 for DHS’s annual leasing costs and usable square feet by year of lease expiration for 2013 through 2023. DHS officials told us that, given delays moving the project forward and the expiration of existing leases, DHS is currently working with GSA to renegotiate leases where staff of individual components are currently housed. However, DHS acknowledged that a comprehensive analysis of its real property and leasing options in the NCR—which has a direct bearing on development options at St. Elizabeths—is ongoing, but not complete, and documents related to the analysis have not been finalized. Given uncertainties about the size, scope, and timing of the project moving forward, as well as the overall cost of the project to the government, DHS and GSA would be better positioned to make choices about capital investments if they were to identify and analyze a broader range of alternatives and use this alternatives analysis to inform their prioritization and selection of efforts related to headquarters consolidation. For example, a comprehensive alternatives analysis could take into account (1) DHS’s actual and projected leasing costs for locations where employees are currently housed; (2) DHS and GSA costs to develop additional segments of the St. Elizabeths Campus, as well as any transportation and infrastructure improvements; and (3) a range of leasing and construction alternatives and their associated costs for the St. Elizabeths site, depending on a determination of usable square footage needed. After identifying and analyzing a range of alternatives that better reflects current conditions, DHS and GSA would be better positioned to prioritize the individual steps or projects that will compose the larger headquarters consolidation effort.
Given that $1.5 billion has already been invested in the headquarters consolidation, a comprehensive analysis and prioritization of alternatives, including cost and benefit analyses for each of the alternatives being considered, that accounts for the complete costs and benefits to the federal government as a whole, would improve transparency and allow for more informed decision making by DHS and GSA leadership and Members of Congress. DHS has not consistently applied its major acquisition guidance for reviewing and approving the headquarters consolidation project. Leading practices call for agencies to establish a formal process for senior management to review and approve proposed capital assets. The cost of a proposed asset, the level of risk involved in acquiring the asset, and its importance to achieving the agency mission should be considered when defining criteria for executive review. Leading organizations have processes that determine the level of review and analysis based on the size, complexity, and cost of a proposed investment or its organization-wide impact. DHS has guidelines in place to provide senior management the opportunity to review and approve its major projects, but DHS has not consistently applied these guidelines to its efforts to work with GSA to plan and implement headquarters consolidation. As discussed below, DHS has sometimes, but not always, classified the consolidation project as a major acquisition, which has affected the extent to which the department has oversight of the project. By not consistently applying this review process to headquarters consolidation, DHS management risks losing insight into the progress of the St. Elizabeths project, as well as how the project fits in with its overall acquisitions portfolio. DHS programs designated as major acquisitions are governed by the policies and processes contained in DHS Acquisition Management Directive 102-01 (MD 102) and the accompanying DHS Instruction Manual 102-01-001.
MD 102 establishes an acquisition life-cycle consisting of four phases. According to MD 102, an acquisition program is considered a Level 1 Major Acquisition if its life-cycle cost is at or above $1 billion, and a Level 2 Major Acquisition if its life-cycle cost is $300 million or more, but less than $1 billion. At predetermined points throughout the life-cycle—known as Acquisition Decision Events—a program deemed to be a major acquisition undergoes review by a designated senior official, referred to as the Acquisition Decision Authority, to assess whether the program is ready to proceed through each of the four phases. An important aspect of this process is the review and approval of key acquisition documents that, among other things, establish the need for a major program, its operational requirements, and an acquisition baseline and plan. MD 102 also requires that a DHS Investment Review Board (IRB) review major acquisitions programs at Acquisition Decision Events and other meetings, as needed. The IRB is chaired by the Acquisition Decision Authority and made up of other senior officials from across the department responsible for managing DHS mission objectives, resources, and contracts. DHS’s Office of Program Accountability and Risk Management (PARM) is responsible for DHS’s overall acquisition governance process, supports the IRB, and reports directly to the DHS Chief Acquisition Officer. PARM is to develop and update program management policies and practices, oversee the acquisition workforce, provide support to program managers, and collect program performance data. Our prior work has assessed MD 102 and found that it establishes a knowledge-based acquisition policy for program management that is largely consistent with key practices. DHS has designated the headquarters consolidation project as a major acquisition in some years but not in others. 
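The MD 102 cost thresholds described above amount to a simple classification rule. The sketch below is illustrative only, not DHS code; the label for the final case is ours, since the report discusses only Levels 1 and 2:

```python
# Illustrative sketch of the MD 102 life-cycle cost thresholds described
# above. Not DHS code; the "Below major acquisition thresholds" label is
# our placeholder for programs under the Level 2 floor.
def acquisition_level(life_cycle_cost: float) -> str:
    """Classify a program by the MD 102 thresholds cited in the report."""
    if life_cycle_cost >= 1_000_000_000:   # $1 billion or more
        return "Level 1 Major Acquisition"
    if life_cycle_cost >= 300_000_000:     # $300 million up to $1 billion
        return "Level 2 Major Acquisition"
    return "Below major acquisition thresholds"

# The report notes DHS would need about $1.7 billion just to complete the
# project -- on its own, above the Level 1 threshold.
print(acquisition_level(1_700_000_000))  # Level 1 Major Acquisition
```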
In 2010 and 2011, DHS identified the headquarters consolidation project as a major acquisition and included the project on DHS’s Major Acquisition Oversight List. Thus, the project was subject to the oversight and management policies and procedures established under MD 102. In 2012, the project as a whole was dropped from the list, and in 2013 DHS included the IT acquisition portion of the project on the list. DHS issued the 2014 list in June 2014, which again included the IT portion of the project, but not the entire project. Figure 6 shows the extent to which the headquarters consolidation project has or has not been considered a major acquisition by DHS under MD 102. PARM officials explained that they considered the St. Elizabeths project to be more of a GSA acquisition than a DHS acquisition because GSA owns the site and the majority of building construction is funded through GSA appropriations. Furthermore, they stated that DHS appropriations for the project are largely transferred to GSA through interagency mechanisms so that, in effect, GSA is responsible for managing contracts procured with DHS funding. PARM officials also explained that they did not believe that the IT portion of the St. Elizabeths project should be classified as a DHS major acquisition. They said that although the IT acquisitions are a DHS responsibility and funded with DHS appropriations, GSA is managing the IT contracts and therefore they believe those acquisitions’ oversight should reside with GSA. They said that the reason the IT component was placed on the Major Acquisition Oversight List in 2013 was because the DHS Office of Inspector General (OIG) recommended its inclusion. 
When asked why the overall headquarters consolidation program was previously identified by DHS as a major acquisition in earlier years, and what had changed, PARM officials said that it was likely included on past acquisition lists because it was a new program and DHS and GSA roles and responsibilities had yet to be firmly established. We recognize that GSA has responsibility for managing contracts associated with the headquarters consolidation project. However, a variety of factors, including the overall cost, scope, and visibility of the project, as well as the overall importance of the project in the context of DHS’s mission, make the consolidation project a viable candidate for consideration as a major acquisition. As noted above, an acquisition program is considered a Level 1 Major Acquisition if its life-cycle cost is at or above $1 billion. DHS and GSA were unable to provide an estimate of the life-cycle cost for the St. Elizabeths project. However, under the current plan, DHS reports it will need about $1.7 billion to complete the project by 2026, not including life-cycle cost, well above the normal threshold required for a major acquisition classification called for by MD 102. Furthermore, per MD 102, the Acquisition Review Board may consider a project a candidate for oversight under MD 102 if the project’s importance to DHS’s strategic and performance plans is disproportionate to its size, or if the project has high executive visibility, impacts more than one DHS component, has significant program or policy implications, or if the Deputy Secretary, Chief Acquisition Officer or Acquisition Decision Authority recommends an increase to a higher acquisition level. Given the size and scope of the project and the extent to which the completion of the project could impact the performance of DHS’s mission, the headquarters consolidation project should be considered a candidate for treatment as a major acquisition under MD 102. 
If DHS were to consider the headquarters consolidation project a major acquisition, consistent with MD 102 requirements, DHS would be better positioned to oversee the project and provide decision makers in DHS and Congress, and the taxpayer greater assurance that the project is being acquired on-time and on-budget. We also observed that, during the years (2010, 2011, 2013, and 2014) that the headquarters consolidation project, or portions of it, was on DHS’s major acquisition list and therefore subject to MD 102 requirements, the project did not comply with major acquisition requirements as outlined by DHS guidelines. Specifically, the project has not produced any of the required key acquisition documents requiring department-level approval: (1) mission need statement, (2) capability development plan, (3) operational requirements document, (4) integrated logistics support plan, (5) life-cycle cost estimate, (6) acquisition program baseline, and (7) test and evaluation master plan. Furthermore, one role of PARM is to conduct independent evaluations of major programs’ health and risks, but PARM has not assessed the St. Elizabeths project as of March 2014. PARM officials stated, however, that they informally monitor the project through regular communication with DHS officials who oversee the St. Elizabeths project and have no concerns about the project’s management. In accordance with MD 102, the IRB’s predecessor body—the Acquisition Review Board—reviewed the headquarters consolidation program with a focus on ARRA funding in 2009 and 2010, but has not reviewed the program since then, even though the program as a whole was considered a major acquisition in 2011, and likewise with the IT component in 2013 and 2014. In May 2010, the board generally expressed concern about the level of DHS oversight given that the project was highly visible and important for the department, and recommended senior leadership meetings between DHS and GSA. 
In September 2010, DHS and GSA signed a memorandum of understanding that defined certain roles for both agencies regarding project oversight. As noted earlier, DHS officials stated the headquarters consolidation has not undergone review by the IRB since 2010 because they viewed it as primarily a GSA project. DHS officials also stated that the program was under way before MD 102 was issued and that the directive requirements were not applicable. In addition, officials stated that they regularly briefed leadership on the status of the project and have produced some documentation, such as a needs assessment and baseline, which officials said are similar to documents required by MD 102. For example, the St. Elizabeths baseline contains some cost and schedule information for the project as required by MD 102. However, it does not contain other required information to help measure program performance. In addition, a 2010 update to MD 102 stated the directive requirements are to be applied to the maximum extent possible to all major acquisitions in existence when the update was issued. According to DHS acquisition officials, since 2010, the DHS Acquisition Officer has issued waivers for some legacy programs, but not for the headquarters consolidation program or its IT component. Although the St. Elizabeths program managers may provide visibility of the project to leadership and may have similar key documentation, this falls short of the requirements contained in MD 102. The MD 102 process provides a more consistent, transparent review process that would involve a greater cross section of departmental stakeholders, among other things, especially given the magnitude of the project and the numbers of components that would occupy space at St. Elizabeths. DHS has established through MD 102 an acquisitions policy for major capital assets that provides the agency with tools to better manage large projects. 
For example, regular project review and approval by a cross section of departmental leadership, along with having standardized project documentation, can help mitigate significant acquisitions challenges such as funding instability and capability changes. Furthermore, the MD 102 process provides oversight and facilitates program accountability. If the entire headquarters consolidation program were designated as a major acquisition, DHS would be better positioned to follow its own acquisition policy. Utilizing the MD 102 review framework could provide the structure for more efficient program management and provide DHS, Congress, and taxpayers greater assurance that government funds are being spent in a way that is consistent with sound acquisition practices, and the project is moving forward as intended. DHS and GSA cost and schedule estimates for the headquarters consolidation project at St. Elizabeths do not or only minimally or partially conform with leading estimating practices, and are therefore unreliable. Furthermore, in some areas, the cost and schedule estimates do not fully conform with GSA guidance relevant to developing estimates. Developing cost and schedule estimates consistent with leading practices could promote greater transparency and provide decision makers needed information about the St. Elizabeths project and the larger DHS headquarters consolidation effort. DHS and GSA cost and schedule estimates for the headquarters consolidation project at St. Elizabeths contain numerous deficiencies that do not reflect leading practices, which render the estimates unreliable. In 2013, DHS and GSA updated earlier estimates to produce the current St. Elizabeths cost and schedule estimates summarized in figure 7. These DHS and GSA estimates showed a total project cost of about $4.5 billion—$2.8 billion funded through GSA appropriations and the remaining $1.7 billion funded through DHS appropriations. 
According to the 2013 estimates provided by DHS and GSA, based on this level of funding, the project would be completed in 2026. We compared the 2013 cost estimate and the 2008 and 2013 schedule estimates with leading practices for developing such estimates and found that the estimates do not or only minimally or partially conform with key characteristics for developing reliable estimates. Specifically, we compared DHS and GSA overall project cost estimates with the GAO Cost Estimating and Assessment Guide (Cost Guide), which defines leading practices related to four characteristics—comprehensive, well documented, accurate, and credible—that are important to developing high-quality, reliable cost estimates. We also compared DHS and GSA overall schedule estimates with the GAO Schedule Assessment Guide, which defines leading practices related to four characteristics—comprehensive, well constructed, credible, and controlled—that are important to developing high-quality, reliable schedule estimates. Tables 5 and 6 describe the characteristics of high-quality, reliable cost and schedule estimates that served as the foundation of our comparative analysis. We have applied our leading cost and schedule estimation practices in past work involving federal construction projects, and the leading practices were developed in conjunction with numerous stakeholders from government and the private sector, including DHS and GSA. Furthermore, GSA acknowledged the value of our leading cost estimation practices in 2007 and issued an order to apply the principles to all cost estimates prepared in every GSA project, process, or organization. We established five descriptions for our assessments of leading practices and cost estimate characteristics: fully meets, substantially meets, partially meets, minimally meets, and does not meet.
We consider a leading practice to be fully met when the associated tasks are completely satisfied, substantially met when a large portion of the associated tasks are satisfied, partially met when about half of the associated tasks are satisfied, minimally met when a small portion of the associated tasks are satisfied, and not met when none of the associated tasks are satisfied. Our assessment method weights each leading practice equally and bases the assessment of each characteristic on the average score of underlying leading practices. We assign each description a numerical value (5 for fully meets to 1 for does not meet) and round scores to the higher numerical value (i.e., a score of 4.5 would round up to 5). Assessments were conducted by an individual analyst, and then the results were independently traced and verified by a second analyst. Overall, we found that the estimates do not reflect the characteristics of a high-quality estimate and cannot be considered reliable. The following two sections provide an overview of the results of our comparison of DHS and GSA cost and schedule estimates with the four characteristics for each of GAO’s cost- and schedule-estimating guidelines. Our overall comparison of the 2013 cost estimate for St. Elizabeths development with leading cost-estimating best practices showed that the estimate partially or minimally conforms with leading practices. Specifically, we found that the cost estimate for the headquarters consolidation at St. Elizabeths partially conforms with leading practices associated with the characteristics of comprehensive and well-documented estimates, and minimally conforms with leading practices associated with characteristics of accurate and credible estimates. We assessed the DHS and GSA cost estimate using the framework of the four characteristics above associated with high-quality, reliable cost estimates.
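As an illustration, the scoring method described above can be expressed in a few lines. The function name and the example ratings are hypothetical; only the five descriptions, the 1-to-5 mapping, the equal weighting, and the round-half-up rule come from the report:

```python
# Minimal sketch of the GAO scoring method described above: each leading
# practice receives a descriptive rating, ratings map to 1-5, a
# characteristic's score is the equally weighted average of its underlying
# practices, and averages round to the higher value (4.5 rounds up to 5).
import math

RATING_VALUES = {
    "does not meet": 1,
    "minimally meets": 2,
    "partially meets": 3,
    "substantially meets": 4,
    "fully meets": 5,
}
VALUE_RATINGS = {v: k for k, v in RATING_VALUES.items()}

def characteristic_score(practice_ratings):
    """Average the underlying practice ratings and round half up."""
    avg = sum(RATING_VALUES[r] for r in practice_ratings) / len(practice_ratings)
    return VALUE_RATINGS[math.floor(avg + 0.5)]

# Hypothetical example: a characteristic backed by four leading practices.
print(characteristic_score(
    ["partially meets", "substantially meets", "fully meets", "substantially meets"]
))  # average 4.0 -> "substantially meets"
```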
Table 7 shows the overall results of our comparison along with examples of selected leading practices under each characteristic and our rationale for assessment. Appendix V provides greater detail on our comparison of the estimate with specific leading practices that constitute the four cost-estimating characteristics. A reliable cost estimate is critical to the success of any program. Such an estimate provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction when warranted, and accountability for results. Accordingly, DHS and GSA would benefit from maintaining current and well-documented estimates of project costs at St. Elizabeths—even if project funding is not fully secured—and these estimates should encompass the full life-cycle of the program and be independently assessed. Among other things, OMB states that generating reliable program cost estimates is a critical function necessary to support OMB’s capital programming process. Without this capability, DHS and GSA are at greater risk of experiencing cost overruns, missed deadlines, and performance shortfalls related to the headquarters consolidation program. Our overall comparison of DHS and GSA schedule estimates with leading schedule-estimating practices showed that the most recent schedule estimate DHS and GSA prepared with sufficient detail for the entire project—in 2008—minimally conforms with leading practices. Specifically, the 2008 estimate minimally conforms with leading practices related to each of the characteristics of comprehensive, well-constructed, credible, and controlled estimates. We assessed DHS and GSA schedule estimates using the framework of the four characteristics above associated with high-quality, reliable schedule estimates. 
Table 8 shows the overall results of our analysis of the 2008 schedule estimate, along with examples of select leading practices under each characteristic and our rationale for assessment. We highlighted the results of the 2008 schedule comparison because the 2008 schedule was the most recent schedule that included logic necessary for identifying a critical path. As noted above, at the request of GSA, we also analyzed a schedule estimate updated in 2013. However, the 2013 estimate was incomplete and did not cover the overall consolidation program in sufficient detail. For example, the schedule depicts only high-level activities and does not provide details needed to understand the sequence of events, including work to be performed in fiscal years 2014 and 2015. As a result, the 2013 schedule satisfied fewer leading practices than the 2008 schedule, and is also unreliable. Appendix VI provides greater detail on our comparison of both the 2008 and 2013 estimates with 10 specific leading practices that compose the four schedule-estimating characteristics. In accordance with leading schedule estimation practices, the success of a major program such as the consolidation project at St. Elizabeths depends in part on having an integrated and reliable master schedule that defines when work will occur, how long it will take, and how each activity is related to the others. For example, the program schedule provides not only a road map for systematic project execution but also the means by which to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the program. A program schedule is also a vehicle for developing a time-phased budget baseline and an essential basis for managing trade-offs among cost, schedule, and scope. Accordingly, and despite current funding uncertainty for the project, DHS and GSA would benefit from developing a comprehensive schedule for the St.
Elizabeths consolidation project in accordance with leading practices. DHS and GSA officials, as discussed in more detail below, generally did not agree with this overall assessment because they believe the leading practices are not well-suited for the type of complex construction projects occurring at the St. Elizabeths site. We compared the cost and schedule estimates prepared by DHS and GSA for the St. Elizabeths project with relevant GSA guidance and found that the estimates do not always conform with agency estimating requirements. In commenting on our assessments of the St. Elizabeths cost and schedule estimates, DHS and GSA officials acknowledged that the estimates do not fully conform with our leading practices, but said that the estimates do conform with GSA project estimation policies. GSA officials agreed with the underlying objectives of our leading cost- and schedule-estimating practices, but noted that other methodologies are valid and better suited to GSA projects like construction at St. Elizabeths. GSA officials cited GSA’s Project Estimating Requirements for the Public Buildings Service, also called P-120, and GSA’s Facilities Standards for the Public Buildings Service, also called P-100, as the key documents for estimating and managing building construction programs within GSA. Because P-120 and P-100 focus on cost estimation requirements and do not fully describe schedule estimation, GSA officials subsequently provided us with GSA’s Global Project Management (gPM) guidance as an additional source because it includes the Scheduling Fundamentals Guide. We reviewed the GSA guidance listed above and noted several areas where GSA cost- and schedule-estimating policies align with our leading practices. More specifically, 8 of the 12 steps of a high-quality cost estimate were at least partially reflected in GSA guidance, and 5 of our 10 leading schedule-estimating practices were at least partially reflected.
For example, GSA’s P-100 guidance fully reflects our leading cost estimation practice of “defining the program’s characteristics,” as it contains formal, detailed design standards and criteria for construction of new facilities and repairs or alterations to existing buildings. GSA’s gPM guidance fully reflects our leading schedule estimation practice of “capturing all activities,” as it states that the first step in building a schedule is to identify all activities required to complete the project and recommends incorporating the activities into the project’s Work Breakdown Structure (WBS)—a framework for documenting certain activities like estimating costs, identifying resources, determining where risks may occur, and providing the means for measuring program status. In cases where GSA guidance and our leading practices align, we compared the St. Elizabeths cost estimates with GSA guidance and found some areas where the project estimates were developed consistent with the guidance. For example, P-120 and P-100 call for GSA to establish a set of ground rules for estimating GSA construction projects, such as how inflation is applied or how budget constraints might affect the project. Our comparison of the St. Elizabeths project cost estimate with GSA guidance showed that the project estimate documents the ground rules consistent with the guidance. Likewise, P-120 recommends using a WBS for projects to be funded over more than 1 year. Our comparison showed that the estimate included a WBS that outlined the end product and major program effort. In contrast, we also found areas where the 2013 St. Elizabeths cost estimate was not prepared consistent with GSA guidelines. Specifically, our comparison showed that the project cost estimate:

Does not include a life-cycle cost analysis.
P-120 and P-100 both require that a life-cycle cost analysis be conducted to help determine the value of a project beyond just the cost of acquiring it, such as the cost of repairs, operations, preventive maintenance, logistics support, utilities, and depreciation over the useful lifetime of the facility. P-100 states, for example, that “three characteristics distinguish GSA buildings from buildings built for the private sector: longer life span, changing occupancies, and the use of a life-cycle cost approach to determine overall project cost.” No life-cycle cost analysis for St. Elizabeths is reflected in the estimate, including the cost of repair, operations, and maintenance.

Is not regularly updated to reflect significant changes in the program. P-120 guidance states that the cost estimates for design projects should be updated throughout the design process, and furthermore, where costs are included, design assumptions must be addressed in order to completely define the scope of the estimate. We found that the estimate was updated based on available funding, but was not regularly updated to reflect significant changes to the program, including actual costs.

Does not include an independent cost estimate. P-100 states that GSA will develop two separate independent government estimates (IGE) to aid in effective project controls and assist in tracking the budget. Although the GSA definition of an IGE does not completely align with the GAO definition of an independent cost estimate, no IGEs were conducted for the entire St. Elizabeths project.

With regard to the St. Elizabeths 2008 and 2013 schedule estimates, we found some instances where the project schedule estimates were partially consistent with GSA guidelines—the estimates covered elements of the guidance—and others where they did not. For example, P-120 states that schedules should realistically reflect how long each activity should take. Our comparison of the St.
Elizabeths schedule estimates with GSA guidance showed that the project estimates partially establish the duration of all activities—a factor that is reflected in both the 2008 and 2013 schedule estimates. Likewise, GSA’s gPM states that activities should be sequenced after they have been defined and their durations have been estimated, tracked according to start and finish dates, and structured to show relationships between them to reflect their dependency on each other. Our comparison showed that the 2008 schedule estimate (not the 2013 estimate) partially reflected these gPM guidelines. In those instances where schedule estimates were not prepared in a manner consistent with GSA guidelines, we found that the 2008 and 2013 schedule estimates:

Do not capture all project activities. GSA’s gPM states that the first step in building a schedule is to identify all activities required to complete the project. GSA scheduling guidance also recommends incorporating the activities into the project’s WBS, as this feature helps to organize and define the total scope of the project. However, we observed that the St. Elizabeths schedules did not define in detail the work necessary to accomplish a project’s objectives, including activities both the government and contractors are to perform.

Do not contain an updated Integrated Master Schedule (IMS). GSA’s gPM stresses the importance of regularly updating the schedule so that it represents the most up-to-date information on planned and completed activities, but we found no evidence that DHS and GSA maintained and regularly updated an IMS for the entire project. Such updates include providing a review of missed milestones, current expected completion dates, and actions needed to maintain or regain schedule progress.

Do not include a complete schedule baseline document.
GSA’s gPM identifies establishing a baseline schedule, which should be changed only with formal approvals from both the project team and the client, as one of the four steps to establish a schedule. The gPM guidance also states that the baseline schedule should be maintained and that it should serve as the record of the “approved” schedule to allow the project manager to calculate variance. We found no evidence of a schedule baseline document that described the overall schedule, the sequencing of events, and the basis for activity durations, among other things, to help measure performance.

Reliable cost and schedule estimates are critical to overall project transparency and to providing information to a variety of decision makers. However, in commenting on our analysis of St. Elizabeths cost and schedule estimates, DHS and GSA officials said that it would be difficult or impossible to create reliable estimates that encompass the scope of the entire St. Elizabeths project. Officials said that given the complex, multiphase nature of the overall development effort, specific estimates are created for smaller individual projects, but not for the campus project as a whole. Therefore, in their view, leading estimating practices and GSA guidance cannot reasonably be applied to the high-level projections developed for the total cost and completion date of the entire St. Elizabeths project. In addition, DHS and GSA officials stated that given funding uncertainty for the St. Elizabeths project as a whole, they were reluctant to allocate resources to conduct more detailed cost and schedule estimates until additional appropriations were received. They described future project phases as “not real” until they are funded. They added that once funding for a project phase is secured, more complete estimates would be created as part of that segment’s design.
For example, regarding the schedule estimates, a senior DHS official said that at the programmatic level, since future phases of construction have not been authorized, funded, or designed, it would be illogical to develop anything beyond a generalized milestone schedule. GSA officials also commented that planning estimates for future unfunded work are conceptual and milestone based and therefore are sufficient for planning Phases 2 and 3. GSA stated that the higher-level, milestone schedule currently being used to manage the program is more flexible than the detailed schedule GAO proposes, and has proven effective even with the highly variable funding provided for the project. We found, however, that this high-level schedule is not sufficiently defined to effectively manage the program. For example, our review of the schedule showed that project bars in the schedule that represent the two active Phase 1 and Phase 2A efforts do not contain detailed schedule activities that include current government, contractor, and applicable subcontractor effort. Specifically, there is no detailed program schedule that enables the tracking of key deliverables, and the activities shown in the schedule address only high-level agency square footage segments, security, utilities, landscape, and road improvements. While we understand the need to keep future effort contained in high-level planning packages, in accordance with leading practices, near-term work occurring in fiscal years 2014 and 2015 should have more detailed information. Further, there are no milestones identified that are consistent with the contract dates and other key dates established by management in the baseline schedule, and the project bars for near-term work are not mapped to a statement of work to ensure all effort is accounted for in the schedule. Finally, the project bars for near-term work also do not contain any risk mitigation activities.
We recognize the challenges of developing reliable cost and schedule estimates for a large-scale, multiphase project like St. Elizabeths, particularly given its unstable funding history and the additional costs that incorporating GAO’s cost- and schedule-estimating leading practices may involve. However, unless DHS and GSA invest in these practices, Congress risks making funding decisions, and DHS and GSA management risk making resource allocation decisions, without the benefit of a robust analysis of the levels of risk, uncertainty, and confidence involved. In addition to stating that it is not feasible to develop cost and schedule estimates for the entire St. Elizabeths project that conform with leading practices, DHS and GSA officials pointed to the project’s performance to date as an indicator of sound overall management. Specifically, DHS and GSA officials maintained that Phase 1 of the overall consolidation project—primarily the USCG headquarters—was completed “on-schedule and near on-budget,” after taking into account delays in the project start and smaller than expected annual appropriations, thus proving that their estimation practices were sound. As noted earlier, some of the work that was originally planned for Phase 1, such as utility installation; security measures; landscaping; and work on the visitors’ center, historic auditorium, and access road, was deferred to later project stages. According to DHS and GSA officials, reducing the scope of Phase 1 enabled the project team to shift resources to more critical capabilities required for USCG occupancy, which resulted in on-schedule and near on-budget completion for the portion of Phase 1 funded by Congress. DHS and GSA officials maintained that the execution of Phase 1 was successful and that this should have factored into our analysis of cost and schedule estimates. However, our analysis of estimates is focused on the remaining work in the project, not the actual performance of work completed.
Phase 1 results cannot be the sole basis used to forecast the reliability of cost and schedule estimates for the remaining phases of development at St. Elizabeths. As our analyses showed, the estimates were deficient in several areas, including comprehensiveness, accuracy, and credibility, which renders them unreliable in the context of future work. In addition to the planned work deferrals and reductions in scope described above, other unanticipated obstacles affected Phase 1 project cost and schedule as well, but DHS and GSA officials did not document these impacts. For example, GSA officials noted that the original design for the Coast Guard fitness center was complete and construction was set to begin, but the building had to be redesigned and sunk farther underground after a historic preservation stakeholder objected to the structure’s sight-lines. Also, toxic ash was unexpectedly discovered in the walls of one of the historic buildings, which required additional funds and time to remove. DHS and GSA officials were not able to tell us how much additional funding and time were required to redesign and construct the fitness center and to remediate the toxic ash. Overall, without documentation, we could not quantify the specific effects of these types of actions on Phase 1 cost growth and delays. Because DHS and GSA project cost and schedule estimates inform Congress’s funding decisions and affect the agencies’ abilities to effectively allocate resources across competing projects in their capital programs, there is a risk that funding decisions and resource allocations could be made based on information that is not reliable. Several factors have changed since DHS began planning its consolidated headquarters in 2005. New workplace standards allow more people to work in less space, and recent government-wide initiatives like Freeze the Footprint have prompted agencies to rethink their real property portfolios and lease arrangements. 
In addition, the DHS headquarters consolidation effort has not received the level of funding that DHS and GSA officials originally envisioned. By taking into account changing workplace standards and funding instability, assessing alternatives, prioritizing projects, and using the results of these analyses to inform the revised project plan in accordance with leading practices, DHS and GSA would be better positioned to assure decision makers within both agencies and in Congress that the consolidation project is justified. In addition, DHS has an acquisition policy that generally aligns with leading capital decision-making practices and applies to major acquisitions, as determined by cost criteria and other factors such as project visibility. However, DHS has moved the headquarters consolidation project or elements of the project on and off its list of major acquisitions over the last several years. Furthermore, during the periods when the project was identified by DHS as a major acquisition, the program did not fully comply with acquisition policy requirements, such as obtaining department-level approval of certain documents. Although GSA owns the site, funds the majority of the building construction, and oversees other contracts on behalf of DHS, given DHS’s significant monetary investment, along with the project’s visibility and potential impact on DHS missions, treating headquarters consolidation as a major acquisition and applying the policy to the maximum extent possible would provide greater assurance that government funds are being spent in a way that is consistent with sound acquisition practices and that the project is moving forward as intended. Creating reliable cost and schedule estimates for the headquarters consolidation project should be an integral part of DHS and GSA efforts to reassess the project. DHS and GSA’s current estimates do not conform with several leading practices, which makes the estimates unreliable.
Furthermore, in several instances, the cost and schedule estimates do not fully conform with GSA’s estimation policies. Although DHS and GSA maintain that more comprehensive estimates will be conducted as the project advances and funding is secured, decision makers could benefit now from accurate estimates that encompass the life-cycle of the project. Without this information, it is difficult for agency leadership and Members of Congress to make informed decisions regarding resource allocations and compare competing priorities. Until reliable cost and schedule estimates are developed, the project risks cost overruns, missed deadlines, and performance shortfalls. In order to improve transparency and allow for more informed decision making by congressional leaders and DHS and GSA decision makers, we recommend that, before requesting additional funding for the DHS headquarters consolidation project, the Secretary of Homeland Security and the Administrator of the General Services Administration work jointly to take the following two actions:

Conduct the following assessments and use the results to inform updated DHS headquarters consolidation plans: a comprehensive needs assessment and gap analysis of current and needed capabilities that takes into consideration changing conditions, and an alternatives analysis that identifies the costs and benefits of leasing and construction alternatives for the remainder of the project and prioritizes options to account for funding instability.

After revising the DHS headquarters consolidation plans, develop revised cost and schedule estimates for the remaining portions of the consolidation project that conform to GSA guidance and leading practices for cost and schedule estimation, including an independent evaluation of the estimates.
We further recommend that the Secretary of Homeland Security designate the headquarters consolidation program a major acquisition, consistent with DHS acquisition policy, and apply DHS acquisition policy requirements. Congress should consider making future funding for the St. Elizabeths project contingent upon DHS and GSA developing a revised headquarters consolidation plan, for the remainder of the project, that conforms with leading practices and that (1) recognizes changes in workplace standards, (2) identifies which components are to be colocated at St. Elizabeths and in leased and owned space throughout the NCR, and (3) develops and provides reliable cost and schedule estimates. We provided a draft of this report to DHS and GSA for review and comment. In written comments, DHS concurred with all three of the recommendations, and GSA concurred with the two recommendations that applied to it. DHS and GSA comments are summarized below and reprinted in appendix VII and appendix VIII, respectively. DHS and GSA concurred with our first recommendation that DHS and GSA conduct a comprehensive needs assessment and alternatives analysis. Both agencies commented that they have already completed a draft enhanced consolidation plan, which DHS believes includes the needs assessment and gap analysis envisioned by GAO. In addition, DHS stated that the cost-benefit analysis of leasing versus construction completed during development of the original project master plan has been updated to reflect current conditions and included as part of this draft plan. GSA reported that it is working closely with DHS and will share this plan with stakeholders upon completion. DHS and GSA concurred with our second recommendation that DHS and GSA develop revised cost and schedule estimates that conform to GSA guidance and leading practices.
DHS commented that a revised programmatic schedule and estimate was created in conjunction with development of the draft enhanced plan, which, according to DHS, is currently with the Office of Management and Budget for approval. DHS stated that it defers to GSA as to whether this estimate conforms to GSA or other criteria for cost and schedule estimation, since the project is being managed and executed by GSA and not DHS. GSA commented that it plans to update the cost and schedule estimates upon completion of the Enhanced Plan, and may adopt some of the leading practices referenced by GAO. DHS concurred with our third recommendation that DHS designate the headquarters consolidation program a major acquisition and apply DHS acquisition policy requirements. DHS reported that the Acting Under Secretary for Management determined in September 2014 that the DHS-funded portions of the St. Elizabeths project will come under the purview of the DHS Acquisition Review Board for oversight effective immediately to assure senior leadership visibility over DHS funds executed by GSA. DHS also requested that we consider this recommendation resolved and closed. Designating the DHS-funded portions of the St. Elizabeths project a major acquisition partially addresses our recommendation; however, it is still too early to assess the extent to which DHS is applying its acquisition policy to the project. As stated in our report, the St. Elizabeths headquarters consolidation project has been moved on and off the DHS master acquisition oversight list in prior years, and in the years that it was on the list, the project did not comply with major acquisition requirements as outlined by DHS guidelines. We will continue to monitor DHS’s actions to apply its acquisition policy to the project as part of our normal recommendation follow-up process. In its comments, DHS also expressed concern that our report did not sufficiently describe the roles and responsibilities of DHS and GSA.
Specifically, DHS stated that, as a tenant agency, its role is to establish programmatic requirements; budget for and fund tenant-responsible items; provide oversight on GSA's use of DHS funds; validate that GSA-managed design and construction activities meet DHS operational and program requirements; and coordinate with GSA and other stakeholders throughout the process, as appropriate. DHS also noted that other activities are managed by GSA in accordance with GSA policies and under GSA supervision and oversight. DHS stated that, while DHS cooperates with GSA and helps facilitate completion of these activities as appropriate, DHS does not have any supervisory control over the activities. Specifically, DHS stated that it did not select the St. Elizabeths site, nor does it award or manage contracts for design and construction. We agree that GSA has responsibility for these activities and have added more detailed discussion of DHS and GSA roles to the report. However, cooperation and collaboration between DHS and GSA are essential for a variety of reasons, including the overall cost, scope, and visibility of the project; the overall importance of the project in the context of DHS’s mission; and in light of the fact that DHS has received $494.8 million to date for the project. This does not include the additional $1.2 billion that DHS expects it will need to ensure that the project is completed by 2026. In this context, we believe that the management and implementation of the St. Elizabeths project is a shared responsibility between DHS and GSA, requiring them to work closely together to help provide greater assurance to decision makers in Congress, DHS, and GSA—as well as taxpayers—that the project is being appropriately managed and acquired on time and on budget. DHS also expressed concern that the report is overly focused on “leading practices” as opposed to being more outcome and results oriented.
We believe that applying the leading practices cited in our draft report would better position DHS and GSA to manage the St. Elizabeths project and help ensure better outcomes and results. DHS stated that GSA, in concert with DHS, has already conducted sufficient analysis to support the best practices in our report. We disagree. As we note in the report, cost and schedule estimates for the project were deficient in several areas, including comprehensiveness, accuracy, and credibility. Estimates also failed to comply with GSA’s internal guidance for cost and schedule estimation. In its written response, GSA also commented that several of the leading practices GAO identifies are better suited to non-real estate investments such as weapons systems, spacecraft, aircraft carriers, and software systems. We disagree. As stated in our report, we have applied our leading cost and schedule estimation practices in past work involving federal construction projects, and the leading practices were developed in conjunction with numerous stakeholders from government and the private sector, including DHS and GSA. Furthermore, GSA acknowledged the value of our leading cost estimation practices in 2007 and issued an order to apply the principles to all cost estimates prepared in every GSA project, process, or organization. We are sending copies to the Secretary of Homeland Security, Administrator of GSA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have questions about this report, please contact either David Maurer at (202) 512-9627 or maurerd@gao.gov, or David Wise at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX.
We conducted our review to examine (1) the extent to which the Department of Homeland Security (DHS) and the General Services Administration (GSA) developed DHS headquarters consolidation plans in accordance with leading capital decision-making principles, and (2) the extent to which DHS and GSA have estimated the costs and schedules of the DHS headquarters consolidation project at St. Elizabeths in a manner that is consistent with leading practices. To determine the extent to which DHS and GSA developed DHS headquarters consolidation plans in accordance with leading capital decision-making principles, we reviewed and analyzed DHS and GSA capital planning documents and interviewed DHS and GSA officials responsible for the planning and management of the St. Elizabeths project, as well as DHS and GSA senior leadership. We compared DHS and GSA capital planning actions against Office of Management and Budget (OMB) and GAO leading practices (see app. II). Our analysis of DHS and GSA efforts using criteria for leading capital decision-making focused on planning for the remaining segments or phases of the project because, as we stated in previous reports, the planning phase is the crux of the capital decision-making process. The results from this phase are used throughout the remaining phases of the process; therefore, if key practices during this phase are not followed and poor capital investment decisions are made, there may be repercussions for agency operations. To determine the extent to which DHS planned and implemented the DHS headquarters consolidation project at St. Elizabeths in accordance with departmental acquisition guidelines, we interviewed officials from DHS’s Office of Program Accountability and Risk Management (PARM) as well as DHS project managers. We reviewed and analyzed DHS Acquisition Management Directive 102-01 (MD 102) and DHS’s Major Acquisitions Oversight List for fiscal years 2010 through 2014.
We then compared the acquisition standards detailed in MD 102 with DHS’s efforts to acquire a consolidated headquarters facility at the GSA-owned St. Elizabeths Campus. To determine the extent to which DHS and GSA have estimated the costs and schedules of the DHS headquarters consolidation project at St. Elizabeths in a manner that is consistent with leading practices, we interviewed DHS and GSA program officials and compared DHS and GSA overall project cost and schedule estimates with GAO leading practices. Specifically, the GAO Cost Estimating and Assessment Guide (Cost Guide) identifies leading practices that represent work across the federal government and are the basis for a high-quality, reliable cost estimate. A cost estimate created using the leading practices exhibits four broad characteristics: it is accurate, well documented, credible, and comprehensive. Each characteristic is associated with a specific set of leading practices. In turn, each leading practice is made up of a number of specific tasks (see app. III). Similarly, we compared DHS and GSA overall schedule estimates with the GAO Schedule Assessment Guide, which defines leading practices related to four characteristics—comprehensive, well constructed, credible, and controlled—that are important to developing high-quality, reliable schedule estimates (see app. IV). For our evaluations of the cost and schedule estimates, when the tasks associated with the leading practices that define a characteristic were mostly or completely satisfied, we considered the characteristic to be substantially or fully met. When all four characteristics were at least substantially met, we considered a cost or schedule estimate to be reliable. To analyze the St. Elizabeths schedule estimate, we asked DHS and GSA to provide the most recent Integrated Master Schedule (IMS) that included all related embedded project schedules.
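The reliability determination described above (an estimate is considered reliable only when all four characteristics are at least substantially met) can be sketched as follows. The function name and example scores are illustrative only, not part of GAO’s methodology.

```python
# Ratings use the report's 5-point scale (5 = fully met ... 1 = not met).
SUBSTANTIALLY_MET = 4

def estimate_is_reliable(characteristic_scores):
    """An estimate is reliable only if every one of its four
    characteristics is at least substantially met."""
    return all(score >= SUBSTANTIALLY_MET
               for score in characteristic_scores.values())

# Illustrative scores mirroring the report's 2013 cost estimate findings:
# partially met (3) for comprehensive and well documented, minimally
# met (2) for accurate and credible.
cost_2013 = {"comprehensive": 3, "well documented": 3,
             "accurate": 2, "credible": 2}
print(estimate_is_reliable(cost_2013))  # -> False
```

A single characteristic scoring below substantially met is enough to make the whole estimate unreliable, which is why the check uses `all` rather than an average.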
Although the 2013 cost estimate provided by DHS and GSA was complete, the 2013 schedule estimate did not cover the entire consolidation project in sufficient detail. As a result, we initially analyzed the most recent complete schedule available, which was created in 2008. The 2008 schedule estimate listed project completion in 2016. Subsequently, at the request of GSA, we also applied our leading practices criteria to the incomplete 2013 schedule. We shared our analysis with DHS and GSA officials to review, comment on, and provide additional information, and we adjusted our analysis where appropriate. Finally, we reviewed GSA guidance that DHS and GSA officials stated was relevant to cost and schedule estimating for the St. Elizabeths project: P-120: Project Estimating Requirements for the Public Buildings Service (PBS); P-100: Facilities Standards for the Public Buildings Service; and GSA’s Global Project Management (gPM) guidance, which includes the PBS Scheduling Fundamentals Guide. In areas where GSA estimating guidance aligned with our cost- and schedule-estimating leading practices, we evaluated the extent to which the St. Elizabeths cost and schedule estimates conformed with GSA guidance. We conducted our work from August 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Congress, the Office of Management and Budget (OMB), and GAO have all identified the need for effective capital planning among federal agencies.
GAO developed its Executive Guide: Leading Practices in Capital Decision-Making, which provides guidance to federal agencies on planning, budgeting, acquiring, and managing capital assets. Figure 8 illustrates how capital decision-making principles fit together. We developed the GAO Cost Estimating and Assessment Guide in order to establish a consistent methodology that is based on best practices and that can be used across the federal government for developing, managing, and evaluating capital program cost estimates. We have identified 12 steps under 4 characteristics that, followed correctly, should result in reliable and valid cost estimates that management can use for making informed decisions. The GAO Schedule Assessment Guide is a companion to the Cost Guide. A consistent methodology for developing, managing, and evaluating capital program cost estimates includes the concept of scheduling the necessary work to a timeline, as discussed in the Cost Guide. A well-planned schedule is a fundamental management tool that can help government programs use public funds effectively by specifying when work will be performed in the future and measuring program performance against an approved plan. Table 10 represents the 10 leading practices associated with a high-quality and reliable schedule and their concepts. We assessed the Department of Homeland Security (DHS) and General Services Administration (GSA) cost estimate using the framework of the four characteristics—comprehensive, well documented, accurate, and credible—associated with high-quality, reliable cost estimates. Table 11 provides greater detail on our comparison of the estimate with leading practices that constitute the four cost-estimating characteristics.
We assessed Department of Homeland Security (DHS) and General Services Administration (GSA) schedule estimates using the framework of the four characteristics—comprehensive, well constructed, credible, and controlled—associated with high-quality, reliable schedule estimates. Table 12 provides greater detail on our comparison of both the 2008 and 2013 estimates with 10 specific leading practices that constitute the four schedule estimating characteristics. In addition to the contacts named above, John Mortin (Assistant Director), Karen Richey (Assistant Director), Juana Collymore, Giselle Cubillos, Daniel Hoy, Abishek Krupanand, Jennifer Leotta, and David Lutter made key contributions to this report. Also contributing to this report were Charles Bausell, Susan Hsu, Eric Hauswirth, Tracey King, Linda Miller, Jan Montgomery, and Cynthia Saunders.
DHS and GSA are managing an estimated $4.5 billion construction project at the St. Elizabeths Campus in Washington, D.C. The project, designed to consolidate DHS's executive leadership, operational management, and other personnel at one secure location rather than at multiple locations throughout the Washington, D.C., metropolitan area, has a projected completion date of 2026. GAO was asked to examine DHS and GSA management of the headquarters consolidation, including the development of the St. Elizabeths campus. This report addresses the extent to which DHS and GSA have (1) developed consolidation plans in accordance with leading capital decision-making practices and (2) estimated the costs and schedules of the St. Elizabeths project in a manner that is consistent with leading practices. GAO assessed various DHS and GSA plans, policies, and cost/schedule estimates, and interviewed DHS and GSA officials. The Department of Homeland Security (DHS) and General Services Administration (GSA) planning for the DHS headquarters consolidation does not fully conform with leading capital decision-making practices intended to help agencies effectively plan and procure assets. DHS and GSA officials reported that they have taken some initial actions that may facilitate consolidation planning in a manner consistent with leading practices, such as adopting recent workplace standards at the department level and assessing DHS's leasing portfolio. For example, DHS has an overall goal of reducing the square footage allotted per employee across DHS in accordance with current workplace standards. Officials acknowledged that this could allow more staff to occupy less space than when the campus was initially planned in 2009. DHS and GSA officials also reported analyzing different leasing options that could affect consolidation efforts. However, consolidation plans, which were finalized between 2006 and 2009, have not been updated to reflect these changes. 
According to DHS and GSA officials, the funding gap between what was requested and what was received from fiscal years 2009 through 2014 was over $1.6 billion. According to these officials, this gap has escalated estimated costs by over $1 billion—from $3.3 billion to the current $4.5 billion—and delayed scheduled completion by over 10 years, from an original completion date of 2015 to the current estimate of 2026. However, DHS and GSA have not conducted a comprehensive assessment of current needs, identified capability gaps, or evaluated and prioritized alternatives to help them adapt consolidation plans to changing conditions and address funding issues as reflected in leading practices. DHS and GSA reported that they have begun to work together to consider changes to their plans, but as of August 2014, they had not announced when new plans would be issued or whether they would fully conform to leading capital decision-making practices to help plan project implementation. DHS and GSA did not follow relevant GSA guidance and GAO’s leading practices when developing the cost and schedule estimates for the St. Elizabeths project, and the estimates are unreliable. For example, GAO found that the 2013 cost estimate—the most recent available—does not include a life-cycle cost analysis of the project, including the cost of operations and maintenance; was not regularly updated to reflect significant program changes, including actual costs; and does not include an independent estimate to help track the budget, as required by GSA guidance. Also, the 2008 and 2013 schedule estimates do not include all activities for the government and its contractors needed to accomplish project objectives. GAO’s comparison of the cost and schedule estimates with leading practices identified the same concerns, as well as others. For example, a sensitivity analysis has not been performed to assess the reasonableness of the cost estimate.
For the 2008 and 2013 schedule estimates, resources (such as labor and materials) are not accounted for, and a risk assessment has not been conducted to predict a level of confidence in the project's completion date. Because DHS and GSA project cost and schedule estimates inform Congress's funding decisions and affect the agencies' abilities to effectively allocate resources, there is a risk that funding decisions and resource allocations could be made based on information that is not reliable or is out of date. GAO recommends, among other things, that DHS and GSA develop revised DHS headquarters plans that reflect leading practices for capital decision making and reliable cost and schedule estimates. Congress should consider making future funding for the project contingent upon DHS and GSA developing plans and estimates commensurate with leading practices. DHS and GSA concurred with GAO's recommendations.
FMCSA, in partnership with state law enforcement agencies, enforces safety standards for the more than 500,000 interstate motor carriers operating in the United States. States and, to a lesser extent, FMCSA staff perform roadside inspections of vehicles to check for driver and maintenance violations and then provide the data from those inspections to FMCSA for analysis and determinations about a carrier’s safety performance. FMCSA also obtains data from the reports filed by state and local law enforcement officers when investigating commercial motor vehicle accidents or regulatory violations. FMCSA provides grants to states that may be used to offset the costs of conducting roadside inspections and improve the quality of the crash data the states report to FMCSA. In addition, FMCSA’s field offices in each state, known as divisions, have investigators who conduct safety reviews of carriers identified by state inspection and other data as unsafe or at risk of being unsafe. Most states augment FMCSA investigators’ efforts by reviewing carrier operations as well. Before CSA, FMCSA relied primarily on comprehensive compliance reviews conducted on-site at carriers to determine whether they were operating safely. Carriers were selected for these reviews based on safety assessments generated by FMCSA’s statistical enforcement model—SafeStat—that used data obtained from accident reports and other safety data supplied by FMCSA’s state partners (see table 1). During these reviews, an investigator would visit a motor carrier to assess compliance with safety regulations by interviewing company officials and reviewing records that pertain to alcohol and drug testing of drivers, insurance coverage, crashes, driver qualifications, the number of hours a driver has worked within a certain time period, vehicle maintenance, prior inspections, and transportation of hazardous materials.
FMCSA officials believe that such comprehensive compliance reviews are an effective way to assess a carrier’s safety performance. However, compliance reviews are extremely resource-intensive; therefore, only a small percentage of the motor carrier industry can be evaluated in this manner, given limited federal and state resources. Annually, for example, FMCSA and its state partners have conducted compliance reviews of about 3 percent of registered motor carriers. As a result, FMCSA was not able to evaluate the vast majority of registered motor carriers, and most were not assigned a safety rating. In 2004, FMCSA began to design and develop CSA, a program to better target resources toward unsafe carriers, deploy a more comprehensive array of interventions, and proactively evaluate safety performance based on data, rather than solely based on compliance reviews. Through implementation of CSA, FMCSA expects to assess a larger portion of the motor carrier industry and to increase the emphasis on driver safety. Additionally, FMCSA expects to use data to identify unsafe carriers and drivers earlier to address safety problems before crashes occur. In this way, FMCSA intends to create a culture of compliance, in which officials and carriers will work together to address safety issues early, and carriers will have access to information and resources that can help them better comply with safety regulations. FMCSA officials expect this approach will more efficiently use FMCSA and its state partners’ resources. FMCSA expects to reach, or “touch,” significantly more carriers—thus improving their safety—and ultimately to reduce motor carrier crashes, injuries, and fatalities. To date, FMCSA has focused its implementation efforts on carriers—examining the safety performance of the company—whether it be a trucking company with hundreds of vehicles or a small company operating one or two trucks.
FMCSA’s implementation efforts also include an increased assessment of the safety behavior of the drivers for carriers selected for intervention. FMCSA also intends to rate or determine the fitness of all drivers, regardless of whether the carriers they work for are selected for intervention. The rating would cover such things as whether the driver was driving while impaired by drugs or alcohol or received tickets for moving vehicle violations. SMS—the first oversight activity under CSA—is intended to allow FMCSA to more accurately assess a carrier’s safety performance. SMS is applied to safety data obtained primarily from roadside inspections as well as from crash reports. These data are sorted into six Behavior Analysis and Safety Improvement Categories (BASIC) that are associated with unsafe performance according to FMCSA’s analysis. In addition to the six BASICs, SMS also incorporates data based on a carrier’s crash involvement (see table 1). Once the data are sorted into the seven data categories, the SMS algorithm measures and generates scores for the carrier’s safety performance in each category. Carriers are placed into peer groups (i.e., other carriers with similar numbers of inspections or size) and ranked according to performance. The rankings determine which carriers are not operating with optimal safety practices and, therefore, will be prioritized for intervention. CSA is intended to improve upon SafeStat, which measured safety in only four safety evaluation areas: driver, vehicle, safety management, and accident (equivalent to the SMS Crash Indicator). CSA uses a wider array of safety data to create a more nuanced understanding of a carrier’s safety performance and presents that information using more refined categories. FMCSA has made carriers’ SMS scores available to carriers themselves as well as to the public, including shippers and insurers. 
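The scoring flow described above — sort safety data into categories, score each carrier, place carriers into peer groups, and rank them to prioritize intervention — can be illustrated with a simplified sketch. This is not FMCSA's actual SMS algorithm; the peer-group boundaries, the percentile threshold, and the carrier data below are all invented for illustration.

```python
# Simplified sketch of the peer-grouping and ranking idea, NOT FMCSA's
# actual SMS algorithm. Peer-group cutoffs, the 80th-percentile
# intervention threshold, and the carrier data are invented.

def peer_group(inspections):
    """Assign a carrier to a peer group by inspection count (hypothetical cutoffs)."""
    if inspections < 10:
        return "small"
    if inspections < 50:
        return "medium"
    return "large"

def percentile_ranks(carriers):
    """carriers: list of (name, inspection_count, basic_measure).
    Returns each carrier's percentile rank within its peer group,
    where a higher measure (more violations per inspection) is worse."""
    groups = {}
    for name, inspections, measure in carriers:
        groups.setdefault(peer_group(inspections), []).append((name, measure))
    ranks = {}
    for members in groups.values():
        measures = sorted(m for _, m in members)
        n = len(members)
        for name, measure in members:
            # share of peers whose measure is at or below this carrier's
            ranks[name] = 100.0 * (measures.index(measure) + 1) / n
    return ranks

carriers = [
    ("Acme Freight", 8,  2.1),   # few inspections, high violation measure
    ("Bulk Lines",   9,  0.4),
    ("CrossHaul",   60,  1.7),
    ("DeltaTrans",  75,  0.2),
]
ranks = percentile_ranks(carriers)
flagged = [name for name, pct in ranks.items() if pct >= 80]  # prioritize for intervention
```

Grouping before ranking is the key design choice: a carrier is compared only against peers with similar inspection exposure, so a small carrier with few inspections is not ranked directly against a large fleet.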
Carriers are allowed to request reviews of any data they believe are incorrect through an FMCSA system known as DataQs. These requests for review can include moving violations reported by state authorities that carriers believe are invalid or mistakenly attributed to the wrong carrier. FMCSA forwards each request for review to the state in which the carrier was cited. States then research the issue, often by contacting the inspector who conducted the inspection and his or her supervisor. Based on this research, states decide if the violation is warranted and make changes if necessary. All of these safety data are collected and maintained in FMCSA’s existing Motor Carrier Management Information System (MCMIS). Our previous work assessed FMCSA data reliability and identified problems with the quality of crash data reported to FMCSA, including data that were inaccurate, incomplete, and not reported in a timely manner. FMCSA has been making efforts to improve crash data quality, including awarding Safety Data Improvement Program grants to states to improve their crash data. States’ efforts to improve crash data include expanding electronic reporting; improving the timeliness, completeness, and accuracy of reporting; and standardizing police accident report forms. The second oversight activity under CSA is the introduction of a variety of interventions for interceding with carriers when their SMS scores indicate safety deficiencies. The expanded array of interventions available under CSA offers FMCSA more flexibility and the opportunity to apply interventions commensurate with a carrier’s safety performance (see table 2). The new interventions were created to get carriers to improve behaviors linked to possible crash risk. As a result, these carriers have the opportunity to take corrective actions to avoid another intervention in the future. 
Under CSA, interventions that involve investigations follow a process known as the Safety Management Cycle, which will expand investigations from simply identifying what violations occurred to determining why violations exist so that FMCSA can offer more constructive improvement recommendations. While some of the interventions available under CSA, such as the Notice of Violation and Notice of Claim, are not new, FMCSA intends to apply them in a more systematic manner under CSA. For example, according to FMCSA, the agency only issued a handful of Notices of Violation over the past 5 years because prior FMCSA information technology systems did not provide the capacity to issue and track them. Under CSA, Notices of Violation can be issued in conjunction with Cooperative Safety Plans, giving carriers a framework in which to address the violations. In another example, the agency intends to increase its use of the Notice of Claim. The third oversight activity under CSA is determining a carrier’s fitness to operate motor vehicles, known as a Safety Fitness Determination. FMCSA plans to use SMS scores to make a Safety Fitness Determination to indicate whether a carrier should continue to operate or should be suspended from operating (i.e., be ordered “out-of-service”). Currently, FMCSA determines a carrier’s fitness to operate based only on the outcome of an onsite comprehensive investigation, similar to how it was done under SafeStat. If a review shows that a motor carrier is unfit to operate pursuant to governing regulations, FMCSA can issue an Out-of-Service order that prohibits the carrier from operating until the deficiencies are corrected. However, as part of CSA, FMCSA plans to initiate a rulemaking that will enable it to use SMS-generated scores to determine if carriers are unfit to operate.
FMCSA has not determined if the same categories currently used to determine if a carrier is fit to operate—“satisfactory,” “conditional,” and “unsatisfactory”—will be used, but it does not plan to increase the number of categories. In 2008, FMCSA launched an operational-model test (pilot) of the CSA program in four states and later expanded the pilot to five more states over 30 months through June 2010. During Phase 1, four states (Colorado, Georgia, Missouri, and New Jersey) tested CSA on carriers with the exception of those with the poorest SafeStat ratings. Fifty percent of the non-excluded carriers in each state were subject to certain aspects of the CSA model—specifically a subset of the BASICs and the interventions— and the other 50 percent were subject to SafeStat. During Phase 2, the carriers subject to CSA in those four states, including those excluded from Phase 1, were then subjected to all of the BASICs and interventions. Later, FMCSA added Delaware, Kansas, Maryland, Minnesota, and Montana to the pilot testing, with 100 percent of the carriers in each state subject to all of the BASICs and interventions. UMTRI analyzed the results of Phase 1 of the pilot as well as supplementary results from Phase 2 and issued its final report in August 2011. In February 2011, we reported that FMCSA obligated more than $30 million for costs related to CSA from fiscal years 2007 through 2010. FMCSA used these funds to develop the SMS and new interventions, conduct and evaluate the pilot test, conduct travel and training related to CSA, and develop information technology related to CSA. 
Close to a year after the anticipated completion date, FMCSA has partially implemented two of the three planned CSA oversight activities—the SMS and an expanded set of interventions—in all states; however, it still cannot (1) use CSA safety data to assess the fitness of motor carriers or (2) assign safety fitness determinations to individual drivers that would prohibit them from operating trucks and buses. Although it has been delayed, FMCSA has begun to implement the CSA oversight activities directed at carrier safety, including SMS and carrier interventions, such as Warning Letters and On-site Focused investigations. However, FMCSA has yet to issue the Notice of Proposed Rulemaking (NPRM), originally scheduled to be finalized in 2009, that would allow it to use CSA data to get unsafe carriers off the road. At present, it appears that FMCSA will not be issuing the rulemaking until later this year at the earliest. Furthermore, in implementing these CSA oversight activities, FMCSA has experienced issues that could affect CSA’s effectiveness. However, FMCSA has not provided comprehensive information to Congress and the public on the status of CSA, the risks associated with these delays and issues, or how it plans to mitigate those risks. Moreover, FMCSA has only recently taken steps to separately measure the fitness of drivers to operate trucks and buses, as research has shown that drivers—not vehicle problems—cause most carrier crashes. FMCSA has not specified time frames for developing this component or how it will ultimately be used. Although two of CSA’s three planned oversight activities for evaluating carriers are at least partly implemented and functional to varying degrees, implementation remains a work in progress. The first CSA oversight activity—developing SMS—was implemented in December 2010, as scheduled, and is functional (see table 3).
For the second oversight activity, seven of the nine interventions—three of which are new—are generally functioning as intended. Two others—Off-site Investigations and Cooperative Safety Plans—have been delayed indefinitely because the technology needed to implement them is not yet operational. With respect to the third planned oversight activity, suspending unfit carriers on the basis of SMS scores, FMCSA originally intended to finalize the rulemaking by 2009, but this effort has been delayed; FMCSA now plans to issue the Notice of Proposed Rulemaking later this year and will not finalize the rulemaking until 2013. According to FMCSA officials, they delayed the rulemaking because of needed changes to SMS that arose during the pilot. In addition, they indicated that FMCSA has a backlog of other key rulemakings that has affected its ability to complete the CSA rulemaking. FMCSA fully implemented the system to measure the performance of carriers in all safety categories in 2010. This information is provided to carriers to help them identify and address their own safety issues. Additionally, FMCSA has made most carriers’ safety data publicly available since December 2010 (see fig. 1 for a sample screenshot of carrier information available to the public). Shippers and insurers, among others, can now use this information to make business decisions. However, as figure 1 shows, the Crash Indicator score and the Cargo-Related BASIC score are not being made publicly available. Stakeholders raised concerns that the Crash Indicator includes all crashes, including those in which the driver was not accountable. FMCSA took an interim step to make the Crash Indicator score available only to the carrier. FMCSA plans to contract with the Department of Transportation’s John A. Volpe National Transportation Systems Center (Volpe) to develop a system to allow states to determine if a driver is accountable for a particular crash.
FMCSA expects Volpe to begin work on this effort in January 2012. Specifically, FMCSA intends to allow carriers to request changes to their violations data by providing a police accident report to demonstrate that the carrier should not be held accountable for a particular crash. Similarly, the motor carrier industry raised concerns about biases created by grouping different types of carriers together for the Cargo-Related BASIC, specifically grouping open deck carriers (flat bed carriers) with those that use enclosed trailers. FMCSA agreed with the industry that these biases may exist and decided not to make the Cargo-Related BASIC data publicly available. In addition, industry raised concerns about FMCSA’s original plans to base individual carrier crash rates on the number of power units, i.e., trucks they operate, as opposed to the number of vehicle miles traveled. FMCSA agreed that vehicle miles traveled is a more equitable measure of exposure when determining crash rates. After considering industry concerns, FMCSA modified the measurement system to now use a combination of power units and vehicle miles traveled to analyze crash risk. According to most trucking association officials we interviewed, FMCSA has been willing to listen to carriers’ concerns while implementing CSA and, according to several, has responded by making adjustments. Another issue that has arisen during the implementation of this part of CSA is that state enforcement agencies, such as state police or state highway patrol agencies, have experienced some difficulties handling motor carriers’ requests to review violations data through FMCSA’s DataQs system. In the months before FMCSA began implementing CSA nationwide, as well as after FMCSA began implementing CSA, carriers have been requesting reviews of violations data at a higher rate than in the past and, in some cases, straining states’ resources. 
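A small, hypothetical calculation shows why the choice of exposure measure matters when comparing carrier crash rates, as in the power-units versus vehicle-miles-traveled discussion above. The figures below are invented, and the actual SMS utilization adjustment is more involved than this sketch.

```python
# Hypothetical arithmetic illustrating why the exposure measure matters
# when comparing carrier crash rates. Figures are invented; the actual
# SMS utilization adjustment is more involved than this sketch.

def rate_per_power_unit(crashes, power_units):
    """Crashes per truck (power unit) operated."""
    return crashes / power_units

def rate_per_million_vmt(crashes, vehicle_miles):
    """Crashes per million vehicle miles traveled (VMT)."""
    return crashes / (vehicle_miles / 1_000_000)

# Two carriers with identical fleets but very different annual mileage:
# Carrier A: 4 crashes, 20 trucks, 3,000,000 miles (long-haul)
# Carrier B: 4 crashes, 20 trucks,   500,000 miles (local/short-haul)
a_per_truck = rate_per_power_unit(4, 20)        # 0.2 per truck
b_per_truck = rate_per_power_unit(4, 20)        # 0.2 -- looks identical
a_per_vmt = rate_per_million_vmt(4, 3_000_000)  # ~1.33 per million miles
b_per_vmt = rate_per_million_vmt(4, 500_000)    # 8.0 -- far riskier per mile
```

Measured per power unit, the two carriers look identical; measured per mile traveled, the short-haul carrier's crash rate is six times higher, which is the kind of distortion the blended measure is meant to address.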
Although carriers previously could request reviews of violations data through the DataQs system, carriers did not challenge the data as often because SafeStat focused on only certain violations. Because CSA uses all violations to determine carriers’ SMS scores and has made an expanded range of data about the motor carriers’ safety records available to them, carriers have taken a much greater interest in these data. Specifically, in August 2010, when FMCSA first made the violations data available for carriers’ review, the number of requests for review was about 2,600 per month. This number increased to a high of about 5,000 per month in October 2010, 2 months after FMCSA made carriers’ BASICs scores available for their review. Although this number has since decreased to about 3,700 per month by May 2011 and decreased further to about 3,000 by August 2011, it is still higher than when FMCSA first made violations data available for carriers’ review. Specifically, state officials in four of the eight states we visited told us they have experienced significant increases in the volume of these requests, which has strained their resources. For example, in Maryland, the volume of requests for data review has increased from 65 in August 2010 to 122 in May 2011 before decreasing to 78 by August 2011. To deal with the increase, the Maryland State Police added another person to handle the requests. Similarly, in Texas, the number of requests for data review increased from 195 requests in August 2010 to 285 by May 2011 before decreasing to 225 by August 2011. To handle the increase, Texas officials reassigned staff to handle the increased workload but planned to wait before hiring someone permanently. In addition to the impact on state resources, state officials in California said the increase in requests could affect their ability to resolve them within 10 days, FMCSA’s goal for responding to carriers. 
Although the volume of data review requests from carriers has been declining, it is unclear if this trend will continue as implementation of CSA progresses. Trucking associations have raised concerns about how states handle these requests, as well as about states’ willingness to change violations data. According to state law enforcement officials, states review the requests and correct violations that are in error. Officials also indicated that some requests reflect carriers’ efforts to have as many violations removed from their records as possible. In January 2011, FMCSA—in conjunction with its state partners—developed and issued a guide to address issues concerning consistency among states in handling requests to review violations data. Thus far, FMCSA has fully implemented seven of the nine interventions nationwide. Of these seven, three are new—the Warning Letter, Targeted Roadside Inspection, and Onsite Focused Investigation. The Notice of Violation, Notice of Claim, Onsite Comprehensive Investigation, and Operations Out-of-Service Order existed before CSA and thus were already implemented nationwide. Together, as table 4 shows, these interventions provide a range of benefits. While FMCSA previously expected to implement two other new interventions—Off-site Investigations and Cooperative Safety Plans—nationwide by August 2011, it has delayed their implementation in the nonpilot states because it has not yet finished developing the key technology required to manage them. This technology, known as Sentri, is part of FMCSA’s ongoing information technology modernization effort and is intended to provide FMCSA enforcement and field staff easier access to carrier and driver information and to help FMCSA and states target unsafe carriers and drivers.
FMCSA officials indicated that, although the agency’s current legacy systems contain the information investigators need to conduct Off-site Investigations and Cooperative Safety Plans, the systems do not interact very well. According to FMCSA, one of Sentri’s benefits is that it will create an environment with a single interface where users can conduct inquiries, inspections, investigations, and interventions, and create and review reports. Additionally, Sentri will align information technology systems with the changes to the investigative processes resulting from the interventions. FMCSA expects to complete this technology in April 2012. FMCSA officials indicated that the delays were due to communication problems between information technology and program offices—the system’s customers—as to the data requirements for the system. Specifically, officials said that program offices needed to better explain and define requirements so that everyone understands them. According to FMCSA, its information technology office has put in place new collaboration and communications methods with the sponsoring program units. We have reported in the past on the importance of establishing an agreed-upon set of requirements for customers and stakeholders. Until FMCSA completes this technology and can fully implement all of the interventions, it will not be able to reach the increased number of carriers originally intended. One issue that could influence the effectiveness of the interventions is training. As a result of the delay in completing Sentri and the decision to delay implementing off-site investigations and cooperative safety plans, FMCSA revised its training plans for nonpilot states. Originally, FMCSA planned to provide 1 week of classroom training to FMCSA division and state officials and staff in nonpilot states, as it had done in the pilot states.
Instead, when FMCSA decided to roll out the interventions in a phased approach, FMCSA division management received 1 day of classroom training, while other FMCSA division and state investigators received a series of webinars on the first phase of the roll out. Additionally, FMCSA and state officials in pilot states are serving as mentors to assist their counterparts in nonpilot states. FMCSA and state officials we interviewed in nonpilot states had mixed opinions on the training. Six FMCSA and state officials in two of the nonpilot states we visited indicated that, because only certain interventions were implemented and pilot states were providing assistance, they felt the training prepared them to implement the interventions FMCSA initially rolled out. For example, officials in one state believed that, because the On-site Focused Investigations and Comprehensive Investigations were similar to the compliance reviews conducted in the past, they were comfortable with the training they have received. However, two of the FMCSA officials and one state official in the nonpilot states we visited felt the training lacked detail and was insufficient because CSA was still evolving. For example, officials in one state noted they were not yet conducting On-site Focused Investigations because they did not feel comfortable with the training they had received on this intervention. FMCSA officials indicated they were not aware of any other states that were not conducting On-site Focused Investigations. However, officials in two states said that while investigators were conducting On-site Focused Investigations, they were concerned about how effectively they were being conducted given limited training or because investigators were not yet comfortable with conducting focused reviews instead of comprehensive reviews. FMCSA is taking steps to improve training on interventions. 
FMCSA officials acknowledged that the training to date was insufficient and explained that, when they decided to begin implementing CSA in the fall of 2010, they used the webinar approach to provide information quickly to FMCSA divisions and states. FMCSA provided 2 days of additional training during the summer of 2011 that consisted of classroom training in all 50 states and included both management and investigators in FMCSA divisions and state agencies. This training included the Safety Management Cycle approach to interventions involving investigations, which, as noted, FMCSA believes will allow investigators to determine why violations occur and offer recommendations for improvement. FMCSA expects that the Safety Management Cycle will be implemented by the end of 2011. FMCSA officials also indicated that, as they developed this training, they incorporated suggestions from participant evaluations of earlier training classes and agency surveys from both pilot and nonpilot states. FMCSA is roughly 2 years behind its original target date for issuing and completing the rulemaking required to use SMS to determine a carrier’s fitness to operate. We reported in December 2007 that FMCSA planned to publish an NPRM for the carrier safety fitness determination in summer 2008 and expected the final rule to be in place in 2009. However, because of changes to SMS that arose during testing—such as the change in calculating crash rates—and a backlog of rulemakings for other FMCSA programs, officials now plan to issue the NPRM late in 2011 and finalize the rule in 2013. However, even this date could slip further. FMCSA officials indicated they do not foresee any major challenges in meeting the current schedule because they have held public information sessions since 2008 to inform the motor carrier industry of the methodology they are considering for the safety fitness determination. 
However, others, such as the National Transportation Safety Board and the National Private Truck Council, noted that rulemakings could take much longer. Until the rulemaking is completed, FMCSA will not realize one of its most important goals for CSA—enhancing its ability to assign safety fitness determinations to a significantly greater portion of the motor carrier industry than it currently can. In some areas, FMCSA performed well as it implemented CSA, most notably in conducting extensive outreach to carriers. In December 2007, we reported that communicating needed information to key stakeholders would be critical to implementing a successful program. According to trucking association representatives, FMCSA has made considerable effort to provide information to carriers and associations and, according to one state trucking association, has probably done as much outreach as possible, given its resources. FMCSA’s efforts to reach out to carriers and make them aware of the program, if continued, could help FMCSA educate carriers about future developments in the program and forestall problems as it completes implementing the carrier component of CSA nationwide. Our interviews with 55 carriers indicated that 23 had learned about CSA from a variety of sources, including FMCSA’s and states’ outreach efforts and state trucking associations. However, 32 of the carriers indicated that they were not familiar with CSA. Of these 32 carriers that had never heard of CSA, 12 were small carriers, 15 were medium, and 5 were large. While the results of our interviews are not generalizable, they suggest that FMCSA should continue its outreach efforts. FMCSA has also been responsive to stakeholder concerns during CSA’s implementation. In our December 2007 report, we said that controlling the project by monitoring and providing feedback would be critical to CSA’s success. 
Throughout the pilot and implementation, FMCSA has made changes to CSA based on feedback from carriers and states. As noted previously, in addition to deciding not to make the Crash Indicator and Cargo-Related BASIC data public, FMCSA also expanded its basis for calculating crash rates to include both power units (i.e., trucks) and vehicle miles traveled after stakeholders raised concerns. After studying the issue, FMCSA determined that including vehicle miles traveled in addition to power units was a more accurate measure. Although FMCSA has managed CSA implementation well in these areas, the agency has experienced some difficulties in others. FMCSA conducted a workforce analysis study in 2009 to determine the staffing levels and skill sets necessary to implement CSA. Based on this study, FMCSA planned to hire additional staff, including staff to support the expected increase in investigations. For fiscal year 2012, FMCSA has requested $78 million from Congress to fully implement and integrate CSA into its operations. Of this request, $61 million is for 696 full-time positions, including salary and benefits, which represent most of FMCSA’s existing field staff as well as 98 new full-time positions. These new positions include 30 investigators and 51 program analysts who would assist intervention managers and investigators throughout FMCSA’s divisions, among other staff. Regardless of the outcome of its funding request, FMCSA has not yet fully determined how it would allocate staff as it moves forward to implement CSA. FMCSA has not determined which divisions will receive the additional investigators and program analysts, although small states will likely share program analysts. FMCSA also has not performed a staffing analysis to determine how it would reallocate existing staff if it does not receive the funding in fiscal year 2012 for the new positions. 
We have identified key practices for workforce planning, including developing a process to determine staffing needs and allocate staff among offices and taking the budgetary process into account. Given the current budgetary environment, FMCSA officials realize they may not receive all of the funding requested and plan to re-examine current staff allocations if FMCSA does not receive authority for these positions. FMCSA officials have stated that CSA’s effectiveness would suffer with less funding because investigators would not be able to conduct the same number of interventions and, consequently, FMCSA would not be able to reach as many carriers as originally expected. However, waiting to determine how to allocate fewer staff could also delay FMCSA’s efforts to continue to implement CSA. In addition, FMCSA is still adapting to the changes required by the new interventions. CSA represents a shift to a new paradigm, or way of thinking about safety, that requires a cultural change among FMCSA division and state staff, which can take time. CSA requires investigators to change from comprehensively investigating all aspects of a motor carrier’s operations to focusing only on the weaknesses that SMS identifies (i.e., the On-site Focused Investigation). During our site visits, FMCSA division and state staff often reported that they appreciated the efficiencies gained by using data to identify carriers and areas to focus on during investigations. However, they also reported that this shift has been difficult, with some investigators still preferring to conduct comprehensive investigations. FMCSA officials noted that investigators can expand a focused review if they see evidence of problems in other areas and that the efficiency gains FMCSA intends will be negated if investigators continue to take a comprehensive approach when focused reviews are warranted. We have reported that major change initiatives and cultural changes take time to fully implement and take effect. 
In our 2003 report on the Architect of the Capitol, for example, we reported that the experiences of successful major change management initiatives in large private and public sector organizations suggest that they can often take at least 5 to 7 years until they are fully implemented and the related cultures are transformed in a sustainable manner. Additionally, we reported that fundamental changes in the Architect of the Capitol’s culture would require a long-term, concerted effort. The same may be true for CSA; much about CSA is new and, given the nature of this type of cultural transformation, it may simply take time for staff to adjust to the new paradigm. To address this issue, FMCSA, among other things, is using the pilot states as mentors for the states that did not participate in the pilot test, has invited participants from pilot-test states to describe the new process to their peers in nonpilot states, and has put CSA on the agenda of annual in-service training sessions. Additionally, FMCSA plans to develop a systematic change management plan. As we have previously discussed, several steps and issues remain before FMCSA can fully implement CSA carrier oversight activities. Specifically, FMCSA has not
 completed a key technology to fully implement the interventions and provided training on interventions yet to be implemented,
 developed and issued the NPRM to take action against unfit carriers based on CSA data, or
 addressed staffing issues and completed efforts to help staff shift to a new safety enforcement paradigm.
FMCSA officials acknowledged delays in implementing CSA’s carrier oversight activities and the need to complete key tasks and address certain issues. However, they maintain that delays are to be expected when implementing a major program such as CSA and that, in their opinion, FMCSA has implemented the bulk of CSA’s oversight activities. 
They acknowledged that risks associated with FMCSA’s ability to complete these items and address budgetary issues could affect their ability to fully implement CSA, as well as CSA’s effectiveness, and noted that they track open issues and the associated risks and mitigation strategies. Although FMCSA officials indicated they have periodically briefed congressional staff on their progress in developing and implementing CSA, FMCSA has not developed any type of comprehensive document that specifically outlines its status, implementation delays, and other issues that need to be addressed, or identifies the risks associated with these problems and strategies to mitigate them. Our past work has shown that the early identification of risks and strategies to mitigate them can help avoid negative outcomes when implementing large-scale projects. For example, in our 2010 report examining the Federal Railroad Administration’s (FRA) efforts to implement a Positive Train Control (PTC) system, we reported that potential delays in developing PTC components and software, and in subsequently testing and implementing PTC systems, raise the risk that railroads will not meet the implementation deadline and that the safety benefits of PTC will be delayed. We noted that FRA officials were aware of some of these risks, but said it was too early to know whether they were significant enough to jeopardize successful implementation. However, we also noted that, as FRA moves forward with monitoring railroads’ implementation of PTC, the agency will have more information regarding the risks to completing PTC on time and would thus be in a better position to inform Congress and other stakeholders of the risks and mitigation strategies associated with implementing the system. 
Similarly, our 2004 report examining an Amtrak project to manage improvements to the Northeast Corridor noted that early identification and assessment of problems would allow for prompt intervention, increasing the likelihood that corrective action could be taken to get the project back on track. Risk identification and management are also essential in the case of CSA, which FMCSA developed with the goal of significantly improving motor carrier safety. Regularly reporting information on what steps FMCSA needs to complete in order to implement CSA—including a timetable—as well as the risks and mitigation strategies associated with not completing each step or addressing each issue, would put FMCSA in a better position to respond to problems when they occur and thus better ensure that FMCSA could complete CSA’s implementation as planned. This would also provide Congress and other stakeholders with important information on FMCSA’s status in implementing CSA and the associated risks, which would help Congress make decisions about the program. Although the implemented CSA oversight activities have provided FMCSA additional tools to provide information on drivers and assess their safety performance, FMCSA has only recently begun taking steps to develop the process to separately rate the safety fitness of all drivers under CSA. Since CSA’s initiation, FMCSA has prioritized implementation of the carrier oversight activities. FMCSA is seeking to clarify its authority to prohibit individual drivers, if determined to be unfit based on ratings, from operating in interstate commerce. FMCSA officials believe the agency arguably has this authority but acknowledge that seeking clarification from Congress would be prudent. FMCSA is seeking this authority as part of the next surface transportation reauthorization and has provided committees of Congress technical legislative drafting assistance to this effect. 
FMCSA officials also explained that they now have access to more information on drivers than they previously had, so that implementing the driver component is not as critical to CSA’s ability to improve safety as they believed when designing the program. For example, the Unsafe Driver BASIC provides additional oversight of drivers and allows FMCSA to address unsafe driver behaviors by intervening with carriers that employ unsafe drivers. Other systems also now allow FMCSA to evaluate drivers:  The Driver Safety Measurement System (DSMS) uses safety data from roadside inspections and crashes to measure drivers’ safety in a manner similar to that used under SMS and allows FMCSA and state partners to identify unsafe, or “red flag,” drivers. The red flag driver investigation process examines drivers receiving certain violations during the course of motor carrier investigations. However, since FMCSA has not implemented driver safety fitness determinations, the agency uses DSMS only internally and for law enforcement purposes.  The Pre-Employment Screening Program (PSP) allows carriers to view 5 years of individual drivers’ crash data as well as 3 years of roadside violation data from FMCSA’s MCMIS. Although PSP provides useful information, it was not intended to be a comparative tool and thus does not allow carriers to determine how safe or unsafe a driver is compared to other drivers. Also, participation in PSP is voluntary; motor carriers must pay a subscription fee for this service. Nonetheless, including a fitness determination would expand FMCSA’s oversight by measuring individual driver performance and systematically identifying unsafe commercial drivers for safety enforcement. It would allow carriers to determine an individual driver’s safety relative to other drivers and increase the usage of driver safety data among the motor carrier industry. FMCSA’s 2005 study of large truck crashes found that driver behavior is the single largest cause of crashes. 
FMCSA officials indicated that they still plan to assess driver fitness as part of CSA but have not developed a plan or set a timetable for doing so. FMCSA also has not determined how driver safety determinations will be used or assessed the safety risk of delaying their implementation. CSA has the potential to identify higher-risk carriers under more precisely defined areas of safety performance, and FMCSA has an expanded range of interventions to follow up with them. Collectively, these changes offer the potential to improve safety. However, not all carriers are inspected, and larger motor carriers are likely to have more inspections and thus are more likely to be ranked under SMS than smaller motor carriers. Moreover, the technology FMCSA has developed to select carriers for inspection did not allow the inspectors we observed to quickly determine whether a carrier’s past history warranted an inspection. Instead, they used it to identify what needed to be inspected once a carrier was already selected for inspection. As a result, some states use other technologies that incorporate FMCSA’s system, or other methods to select carriers that may not be systematic. Furthermore, until FMCSA completes new performance metrics, gauging the extent to which CSA improves safety will be problematic. To improve safety, CSA makes better use of roadside inspection data in the following ways:  SMS makes greater use of the data available from roadside inspections than SafeStat did. Under SafeStat, only out-of-service violations and selected moving violations were used for estimating carriers’ scores under the Driver and Vehicle safety evaluation areas. In SMS, any violation found is used in calculating a carrier’s BASIC score. This should help FMCSA to improve overall safety by allowing it to identify carriers with recurring types of safety violations that may have been missed under the prior SafeStat system. 
 SMS allows for more precision in the measurement of safety, since, as we discussed previously, the BASIC scores and Crash Indicator measure carrier performance in seven areas, rather than the four used under SafeStat. For example, CSA measures driver performance at the motor carrier company level in several categories, including unsafe driving, fatigued driving, driver fitness, and the use of controlled substances and alcohol, whereas SafeStat calculated an overall rating based on all these driver factors combined. This breakdown not only allows for a more precise determination of motor carrier safety performance overall but also allows FMCSA to better identify specific areas of safety shortcomings. For example, CSA can indicate if a carrier is having a problem with driver fatigue, whereas SafeStat could not provide this level of detail. Thus, interventions can be targeted to the specific area of safety concern.  SMS creates percentile ranks for carriers within each BASIC and in the Crash Indicator, rather than producing just one total summed score, as SafeStat did. Thus, SMS has the potential to improve safety by reporting scores on the separate areas of safety problems and making carriers’ performance in each area explicit. For example, CSA can indicate that, although a carrier has a relatively poor ranking in the Cargo-Related BASIC, the carrier has a good ranking in the Unsafe Driving indicator, thereby enabling FMCSA to focus its interventions on carrier practices that have the greatest impact on safety. SMS also allows FMCSA to conduct interventions with a greater number of motor carriers. SMS identifies about 45,000 motor carriers each month that exceed the thresholds in one or more BASICs or the Crash Indicator. By comparison, under SafeStat, a similar number of carriers, about 45,000 per month, were identified as exceeding the threshold on one or more safety evaluation areas to varying degrees, on a scale of A to G. 
However, under SafeStat, only those carriers with a SafeStat rating of A, B, or C were prioritized for SafeStat’s intervention—a full compliance review—resulting in a smaller percentage of motor carriers with an identified safety problem receiving the intervention. For example, during all of fiscal year 2009, FMCSA and state partners carried out 16,512 compliance reviews on motor carriers rated under SafeStat. Under CSA, any carrier exceeding a threshold in even one BASIC or in the Crash Indicator will receive an intervention of some type. FMCSA can contact carriers with a wider range of violations—including less severe violations—than it did under SafeStat because CSA provides a wider range of intervention tools, some of them requiring few resources to implement. CSA’s range of interventions—from the resource-intensive On-site Comprehensive Investigation to the relatively low-resource Warning Letter—provides FMCSA with more tools for contacting carriers, calibrating the intervention to the severity of the violation. Under CSA, all carriers newly identified as exceeding the threshold in one or more safety areas in a given month are subject to some type of safety intervention by FMCSA, most commonly a Warning Letter. During the first 6 months of fiscal year 2011, FMCSA sent 19,470 Warning Letters and, along with state partners, conducted 3,190 CSA On-site Focused Investigations in addition to completing 5,684 compliance reviews through May of 2011, for 28,344 total safety interventions. Preliminary evidence from the pilot test suggests that even Warning Letters have an effect on safety. Twelve months after receiving only a Warning Letter, 17 percent of test carriers exceeded at least one SMS threshold, as opposed to 45 percent of the control carriers, who did not receive Warning Letters. Reaching more carriers with enforcement actions should enable FMCSA to improve safety. 
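The threshold-driven, tiered selection of interventions described above can be sketched in a few lines. This is a hypothetical illustration only: the 80th-percentile cutoff, the escalation rules, and the function names are assumptions, not FMCSA's actual decision logic.

```python
# Hypothetical sketch of CSA-style tiered intervention selection.
# The threshold value and escalation rules are illustrative assumptions;
# the report describes the range of tools, not the exact algorithm.
INTERVENTION_THRESHOLD = 80.0  # percentile at or above which a BASIC is "exceeded"

def select_intervention(basic_percentiles, prior_warning=False):
    """Map a carrier's BASIC percentile ranks to one intervention type."""
    exceeded = [b for b, p in basic_percentiles.items()
                if p >= INTERVENTION_THRESHOLD]
    if not exceeded:
        return None  # no intervention warranted this month
    if len(exceeded) == 1 and not prior_warning:
        return "Warning Letter"  # the low-resource first contact
    if len(exceeded) <= 2:
        return "On-site Focused Investigation"
    return "On-site Comprehensive Investigation"

# A carrier over the threshold in a single BASIC draws only a Warning Letter.
print(select_intervention({"Unsafe Driving": 85.0, "Driver Fitness": 40.0}))
```

The design point the sketch captures is that calibration to severity, not a single pass/fail score, is what lets a wider range of carriers receive some contact.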
While the pilot test suggests that SMS has the potential to improve safety over the prior SafeStat system, SMS’s ability to calculate BASIC scores for carriers depends on sufficient roadside inspection data for each carrier, which are not always available for a significant segment of carriers. Analysis of the data from the pilot test states found that a substantial proportion of motor carriers lack sufficient data for ranking in the six BASICs and Crash Indicator. Specifically, the Fatigued Driving and Unsafe Driving BASICs each require a minimum of three relevant inspections and at least one relevant violation for a motor carrier over the past 24 months; the Vehicle Maintenance, Driver Fitness, and Cargo-Related BASICs each require a minimum of five relevant inspections and at least one relevant violation over the preceding 24 months. Table 5 shows the percentage of carriers in the pilot test states that have sufficient data for ranking in a BASIC or the Crash Indicator. While most large motor carriers have enough data to be considered and rated under SMS, the majority of smaller carriers do not. For example, about 48 percent of carriers with 51 to 500 vehicles and about 71 percent of carriers with 501 or more vehicles have sufficient data for ranking in the Unsafe Driving BASIC, but only about 1 percent of carriers with 5 or fewer vehicles do. The majority of companies in operation are small motor carriers with 5 or fewer vehicles; the lack of sufficient data for ranking on a BASIC is greatest in this segment of the carrier fleet. Carriers with 2 or fewer roadside inspections can potentially be ranked by SMS only through the Controlled Substances/Alcohol BASIC or the Crash Indicator. Those with 3 to 4 inspections are below the minimum data sufficiency requirements for the Vehicle Maintenance and Driver Fitness BASICs. 
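The minimum-data rules above amount to a simple per-BASIC sufficiency check. In the minimal sketch below, the thresholds follow the report, but the function and data layout are illustrative assumptions, not SMS's actual implementation.

```python
# Minimum relevant inspections (over 24 months) before SMS can rank a
# carrier in each BASIC, plus at least one relevant violation. Thresholds
# are as described in the report; the code itself is an illustration.
MIN_INSPECTIONS = {
    "Fatigued Driving": 3,
    "Unsafe Driving": 3,
    "Vehicle Maintenance": 5,
    "Driver Fitness": 5,
    "Cargo-Related": 5,
}

def can_be_ranked(basic, inspections, violations):
    """True if a carrier has sufficient data for ranking in this BASIC."""
    return inspections >= MIN_INSPECTIONS[basic] and violations >= 1

# A small carrier with only 2 inspections cannot be ranked in any of these.
print(any(can_be_ranked(b, 2, 1) for b in MIN_INSPECTIONS))  # -> False
```

This makes the size skew mechanical: more trucks mean more inspections, so large fleets clear the thresholds while most 5-vehicle carriers never do.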
This data limitation will continue to prevent SMS from functioning at full capability until efforts to expand roadside inspection coverage across the motor carrier fleet succeed. In the meantime, the effect of this data sufficiency limitation is that safety ranking by SMS is more concentrated among larger motor carriers than among the more numerous smaller motor carriers. Based on visits to inspection stations and interviews with inspection officials in eight states, we found that not all states use methods that systematically select trucks for roadside inspections, which can limit CSA’s ability to improve motor carrier safety. FMCSA provides all states with its Inspection Selection System (ISS-2010) software, designed to systematically identify carriers with known poor safety performance. Vehicle selection methods that factor in safety performance offer more assurance that roadside inspections will ultimately prevent crashes by focusing resources on higher-risk carriers. The ISS-2010 software also systematically identifies carriers that have not been ranked in any of the BASICs by SMS, so that inspectors can inspect those carriers’ trucks to determine their compliance. Because of the pace at which trucks move through the scales at inspection stations, however, inspectors we observed rarely had time to access ISS-2010 on FMCSA’s website before deciding which trucks to inspect. Thus, inspectors mainly used ISS-2010 to obtain information about trucks that had already been selected for inspection by other means (see below). Many states use software that allows some low-risk trucks to bypass the inspection station, thus allowing inspectors to select for inspection carriers with a history of safety problems or with unknown safety performance. 
These third-party software products incorporate the ISS-2010 algorithms to allow trucks belonging to carriers with good safety performance to bypass inspection stations. When these trucks bypass the weigh station, inspection resources can be focused on carriers with riskier or unknown safety performance. For example, in 30 states, inspectors rely on a product called PrePass. PrePass combines the ISS-2010 selection algorithms with other proprietary criteria to gauge a carrier’s safety performance, including crash risk, before its truck enters an inspection station. Carriers that participate in PrePass receive transponders for their trucks; weigh stations are fitted with equipment that receives signals from the transponders. The transponder sends a signal to the inspection station that alerts inspectors as to whether the participating truck can bypass the inspection station or must come in for an inspection. All nonparticipating trucks must enter the station when it is open. Inspectors in states that use such software products then employ a combination of other methods, some noted below, to select trucks for inspection from among those that enter the weigh station. FMCSA officials stated that states are encouraged to use federal roadside inspection grant funds to purchase technology to assist their inspectors in systematically selecting trucks for inspection. FMCSA currently does not require states to use ISS-2010 software or products like PrePass, although it encourages them to do so. While some of the selection methods we observed being used at inspection stations take some aspects of crash risk into account, none are as systematic as would be the case if inspectors were able to use the ISS-2010 algorithms for truck selection. Some also may, by chance, select for inspection the trucks of carriers previously unranked by CSA, thereby broadening the base of carriers that SMS can potentially rank. 
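A transponder-based bypass decision of the kind described above might look like the following sketch. The function, the score scale, and the cutoff are assumptions for illustration; PrePass's actual proprietary criteria are not described in this report.

```python
# Illustrative weigh-station bypass decision. Assumes an ISS-style score
# where higher values indicate a stronger recommendation to inspect; the
# cutoff of 50 is a made-up value, not the product's real criterion.
def bypass_decision(has_transponder, iss_score, station_open=True):
    """Return 'bypass' or 'pull in' for a truck approaching a weigh station."""
    if not station_open:
        return "bypass"    # a closed station inspects no one
    if not has_transponder:
        return "pull in"   # nonparticipating trucks must enter
    if iss_score < 50:
        return "bypass"    # known good safety performance
    return "pull in"       # risky or unknown performance

print(bypass_decision(True, 20))   # good history: bypass
print(bypass_decision(False, 20))  # no transponder: must pull in
```

The sketch also makes visible the two gaps the report goes on to discuss: nonparticipating trucks are handled by ad hoc methods once inside, and a closed station screens nothing at all.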
We observed the following selection methods:  Weight as an initial selection factor. All trucks entering a weigh station will be weighed. If the overall weight or individual weight on a particular axle exceeds the allowed weight, inspectors put the truck and driver out of service. This method addresses safety performance and may by chance select trucks of carriers previously unranked by the SMS because all trucks must cross the scales. The truck may not resume its journey until its weight issues are resolved, and the inspector has discretion to conduct further inspection.  Obvious problems. When inspectors notice obvious problems related to safety performance on a truck as it moves across the scale—such as a flat tire, unattached hoses, incorrect or damaged placards, etc.— they may pull the vehicle over for inspection. Many of these problems could involve safety performance issues and may result in selecting trucks of carriers previously unranked by the SMS.  Random selection. Inspectors choose trucks for inspection randomly from among those not put out of service for weight issues. This method does not gauge crash risk or other aspects of safety performance but could select trucks of carriers previously unranked by the SMS.  Local discretion. These methods may focus inspectors’ efforts on particular types of inspections, carriers, trucks, or loads for a period of time. Local discretion selection methods can be guided by the certification level of inspectors available at the station, the training needs of those inspectors, or news stories about crashes of particular types of vehicles or loads, among other things. In some cases, inspectors may focus their efforts on factors that influence safety, perhaps in response to public opinion about the safety performance of particular types of vehicles or loads. These methods could also result in selecting trucks of carriers previously unranked by the SMS. 
All of these methods are limited in identifying higher-risk trucks and carriers. For example, a truck belonging to a carrier with a history of driver fatigue issues would not be readily identifiable to an inspector unless a software product employing the ISS-2010 BASIC-supported algorithm flagged it. No inspection selection method can assist weigh station inspectors in selecting trucks if drivers avoid the weigh station entirely. Our observations at state inspection stations and discussions with inspection officials revealed that some drivers attempt to evade roadside inspection in different ways, allowing some carriers to potentially operate entirely beyond the scope of CSA. For example, drivers may avoid driving past a weigh station during its regular hours of operation. Inspection facilities in many states are open limited hours, and state officials told us there is a significant level of truck traffic when stations are closed. Because of physical or staffing constraints at some weigh stations, we observed that staff may close a station periodically during its standard hours of operation to relieve crowding or avoid backups of trucks that could present a safety hazard on the freeway. State police and other officials in a number of states also indicated that budgetary constraints may force them to reduce weigh stations’ hours of operation, decreasing the number of trucks they can inspect and increasing the travel-time flexibility of drivers seeking to avoid inspection. State police officials also told us that some drivers seek to evade inspection by pulling over to the side of the road until a station closes or by altering their routes to drive around weigh stations, either on other highways or on smaller roads, sometimes within sight of staff at the weigh station. Depending on the resources available at the station, troopers may or may not be able to leave the station to stop drivers whose trucks should be inspected. 
According to a number of state inspection officials, when inspectors do inspect the trucks of drivers seeking to avoid inspection, they often find serious safety violations, underscoring the importance of appropriately targeting inspection resources to road safety. FMCSA has begun to develop performance measures to assess CSA’s nationwide performance in improving safety but has not yet set a timetable for their completion. There are indications that CSA may improve safety. Specifically, the UMTRI evaluation of the pilot test indicates that CSA’s SMS and new, expanded set of interventions increased FMCSA’s ability to improve safety in the four pilot states. However, performance measures are needed to gauge the effectiveness of CSA in improving safety as it is implemented nationwide. We have previously reported that agencies need to set quantifiable, outcome-based performance measures for significant agency activities, such as CSA, to demonstrate how they intend to achieve their program goals and to measure the extent to which they have done so. Performance measures allow an agency to track its progress in achieving intended results, which can be particularly important in the implementation stage of a new program. Performance measures can also help inform management decision making, such as the need to redirect resources or shift priorities. In some of our prior work, we have recommended that agencies develop methods to accurately evaluate and measure the progress of implementation and develop contingency plans if the agency does not meet its milestones to complete tasks. In addition, performance measures can be used by stakeholders, such as state law enforcement partners, carrier associations, and the public who use the nation’s highways, to hold FMCSA accountable for results. With performance measures, FMCSA divisions and state partners will be able to set priorities and measure results by state or overall. 
FMCSA has been working on developing performance measures for CSA results and program implementation progress. FMCSA has proposed several performance measures for CSA, but they have not yet been approved within the agency. Two of the proposed measures would assess outcomes of CSA. The first would determine the number of carriers that received a specific CSA intervention in 1 year and then showed improvement in the next year. The second would measure the level of compliance from all inspections in a baseline year before CSA was implemented (e.g., 2007) and compare that level against compliance in subsequent years to quantify improvements in compliance across the entire industry. FMCSA is also considering output measures, such as the increase in the number of carriers reviewed once off-site and focused investigations are fully implemented. According to FMCSA officials, these proposed measures have not yet been approved by the Administrator, and implementation will depend on accumulating relevant CSA intervention data once the carrier oversight activities are fully deployed in 2012, as expected. Under this timeline, 2012 would become the baseline year, which means 2013 would become the first year in which FMCSA could begin to develop CSA performance targets such as the percentage of carriers that showed safety improvements after being subject to CSA interventions. FMCSA has also begun efforts to track its progress in implementing CSA. FMCSA has identified the specific steps it has taken to implement CSA, as well as the states in which the various CSA oversight activities have been implemented (i.e., pilot states vs. nonpilot states, and, for those oversight activities that have not been implemented, when FMCSA plans to implement them). When ultimately developed and implemented, such measures will help provide CSA managers with information on the status of CSA implementation and allow them to make adjustments, if necessary, to meet established timeframes. 
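The proposed baseline-comparison measure can be sketched in a few lines. The field names, years, and toy inspection records below are our own illustration of the logic, not FMCSA data or an FMCSA tool:

```python
# Illustrative sketch of an industry-wide compliance measure: compare the
# share of inspections with no violations in a baseline year against later
# years. All records here are invented for illustration.

def compliance_rate(inspections):
    """Fraction of inspections that found no violations."""
    clean = sum(1 for insp in inspections if insp["violations"] == 0)
    return clean / len(inspections)

# Hypothetical inspection records keyed by year (2007 as the baseline).
inspections_by_year = {
    2007: [{"violations": 0}, {"violations": 2}, {"violations": 0}, {"violations": 1}],
    2012: [{"violations": 0}, {"violations": 0}, {"violations": 1}, {"violations": 0}],
}

baseline = compliance_rate(inspections_by_year[2007])
for year in sorted(y for y in inspections_by_year if y != 2007):
    rate = compliance_rate(inspections_by_year[year])
    print(f"{year}: compliance {rate:.0%} (change vs. 2007: {rate - baseline:+.0%})")
# → 2012: compliance 75% (change vs. 2007: +25%)
```

Under the agency's timeline, the same comparison run against a 2012 baseline would support targets such as the percentage of carriers showing safety improvements after interventions.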
FMCSA’s CSA program has been partly implemented and shows the potential for improving motor carrier safety. However, key aspects of the initiative, including using safety data from the new SMS system to take unsafe carriers and drivers off the road and enforcing other safety regulations, are indefinitely delayed. In the case of drivers, the plan for when and how to determine a driver’s fitness to operate vehicles based on the new measurement system has yet to be developed, and the safety implications of delayed implementation of drivers’ fitness ratings for FMCSA’s current goals to improve safety are unclear. FMCSA has also encountered several problems during implementation, including delays in developing technology needed for new interventions and resistance from staff to shifting to a new paradigm of more focused and less time-consuming reviews of carrier operations. Further, FMCSA has not established a process for regularly reporting to Congress and the public on CSA’s status, the problems it has encountered in implementing CSA, the risks they pose to full implementation, and its strategy for mitigating these risks. This type of information is essential to assist Congress in making decisions about funding or authorizations for the program and to assure Congress and stakeholders that CSA is being successfully implemented. To this end, FMCSA has made progress in developing performance measures for determining the extent to which investigative staff are using new CSA interventions and the safety outcomes of these interventions. However, until these measures are completed and implemented, the extent of CSA’s effectiveness in improving safety will remain unclear to FMCSA management, Congress, and the public. 
We recommend that the Secretary of Transportation direct the FMCSA Administrator to take the following two actions:

- Develop a plan for implementing driver fitness ratings that prioritizes the steps that need to be completed and includes a reasonable timeframe for completing them. The plan should also address the safety implications of delayed implementation of driver fitness ratings.
- Regularly report to Congress on CSA’s status; the problems that FMCSA has encountered during the implementation of CSA and the risks they pose to full implementation of CSA; its strategy for mitigating these risks; and a timetable for fully implementing CSA and reporting the progress made in developing and implementing CSA performance measures.

We provided a draft of this report to the Department of Transportation for review and comment. The Department did not agree or disagree with our recommendations but said it would consider them. The Department provided technical comments and clarifications, which we incorporated as appropriate. At a meeting on September 23, 2011, to discuss the Department’s comments, FMCSA officials confirmed that they intend to continue implementing the driver fitness ratings. Previously, FMCSA officials indicated that they were considering implementing these ratings but had made no final decision. In response to this new information, we modified the language of our recommendation regarding driver fitness ratings. Our recommendation originally focused on having FMCSA determine the safety implications of not fully and expeditiously implementing the driver fitness ratings and, if it determined that full implementation was necessary, to then develop an implementation plan. To reflect that FMCSA has decided to proceed with implementing the driver fitness ratings, we modified our recommendation to focus instead on an implementation plan. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Transportation; the Administrator, Federal Motor Carrier Safety Administration; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-2834 or at flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. GAO was directed by a 2009 Senate Committee Report, adopted by the conference committee, to conduct a study as part of the continued monitoring of the implementation of the Compliance, Safety, and Accountability (CSA) program. Specifically, this report addresses (1) the status of the CSA rollout and what issues, if any, could affect the full and effective implementation of the program and (2) CSA’s potential to improve safety. To determine the status of the CSA rollout and challenges that could affect the full implementation of the program, we analyzed Federal Motor Carrier Safety Administration (FMCSA) documentation, including information on FMCSA’s website (www.fmcsa.dot.gov), periodic outreach e-mails from CSA program officials, and CSA training materials. Additionally, we reviewed congressional testimony provided by FMCSA’s Administrator. We also reviewed an evaluation of the pilot test conducted by the University of Michigan’s Transportation Research Institute (UMTRI). 
We interviewed FMCSA and National Transportation Safety Board headquarters’ officials as well as national representatives of carrier industry associations (see table 6). We also attended the Commercial Vehicle Safety Alliance (CVSA) annual conference in September 2010 and interviewed representatives of several State Partners to discuss CSA implementation, as indicated in table 7. We also attended two FMCSA-sponsored outreach sessions discussing different aspects of CSA and commercial motor vehicle safety, one from the carrier’s perspective. Additionally, we visited eight states (four that participated in the pilot program: Georgia, Maryland, Minnesota, and Missouri, and four that did not: California, Mississippi, Texas, and Utah) to interview FMCSA Division and State Partner officials as well as industry groups and some carriers. (See table 8 for criteria we used to select these states.) We collected and reviewed other CSA implementation and background documentation during these visits. (See table 9 for agency and industry organizations we interviewed during state visits.) To obtain information on motor carriers’ knowledge of and experiences with CSA, we selected a nongeneralizable random sample of motor carriers from the Motor Carrier Management Information System (MCMIS) carrier census file and conducted brief, structured telephone interviews. We screened the population from which we selected the sample to remove foreign carriers and those carriers that had not updated their census (MCS-150) forms with FMCSA in the prior 2 years. We divided the carriers into three size categories—small, medium, and large—based on the number of vehicles associated with the company and randomly selected a group of carriers within each size category to participate in the structured interviews. 
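The stratified selection just described can be sketched as follows. The fleet-size cutoffs, per-stratum counts, and carrier records below are assumptions chosen for illustration, not GAO's actual selection criteria:

```python
# Illustrative stratified random sampling: group carriers into small/medium/
# large strata by fleet size, then draw a fixed number at random from each
# stratum. Cutoffs (5 and 50 vehicles) are hypothetical.
import random

def size_category(num_vehicles):
    if num_vehicles <= 5:
        return "small"
    if num_vehicles <= 50:
        return "medium"
    return "large"

def stratified_sample(carriers, per_stratum, seed=0):
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = {"small": [], "medium": [], "large": []}
    for carrier in carriers:
        strata[size_category(carrier["vehicles"])].append(carrier)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Invented census records: id and fleet size.
carriers = [{"id": i, "vehicles": v} for i, v in enumerate([2, 3, 80, 12, 7, 200, 1, 45])]
picked = stratified_sample(carriers, per_stratum=2)
print([c["id"] for c in picked])
```

Drawing within each size stratum ensures small carriers, which dominate the census file, do not crowd medium and large carriers out of the interview pool.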
During the interviews, we asked about the interviewees’ knowledge and understanding of CSA; interviewees were owners, safety managers, or others who would have knowledge of a carrier’s safety practices and performance. We obtained responses from 55 motor carriers out of the 270 we attempted to contact. To determine CSA’s potential to improve safety, we analyzed FMCSA documents describing the design and function of the Safety Measurement System (SMS), how the severity of violations was weighted, and other design documentation as it was released, particularly comparing the SMS with SafeStat. We also analyzed UMTRI’s pilot test study findings. We reviewed UMTRI’s statistical methodology and its reliability assessment of the FMCSA data used for the study and determined that the results of UMTRI’s pilot evaluation study were sufficiently reliable for our purposes. We obtained a copy of the May 2011 MCMIS inspection data, upon which five publicly available Behavior Analysis and Safety Improvement Categories (BASIC) scores for carriers were based, and analyzed it to determine the extent to which motor carriers lacked a sufficient number of roadside inspections for measurement under the BASICs in SMS. We electronically tested the data for completeness and coding accuracy, and found it sufficiently reliable for the purposes of our engagement. We also analyzed the function of FMCSA’s Inspection Selection System software, which is designed to select trucks for inspection and thereby guides data collection for the SMS. We did not model the SMS in order to test its function ourselves, as it was modified several times during the course of our review. During our state visits, we also visited weigh stations or other truck inspection sites to interview inspectors about how they select trucks for inspection, how CSA has affected their work, and the data they obtain during inspections. We observed truck inspections during these visits. 
We also obtained information on crash data quality by analyzing studies UMTRI conducted on states’ MCMIS crash data reliability as well as FMCSA’s publicly available crash data evaluation tools. We conducted this performance audit from June 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Susan Fleming, (202) 512-2834. In addition to the individual named above, Ed Laughlin, Assistant Director; Lynn Filla-Clark, Analyst-in-Charge; Carl Barden; Lauren Calhoun; Alison Hoenk; Delwen Jones; Elke Kolodinski; Kirsten Lauber; Sara Ann Moessbauer; Rebecca Rygg; Amy Rosewarne; and Larry Thomas made key contributions to this report.
Over 3,600 people in this country died in 2009 as a result of crashes involving large commercial trucks and buses. Until recently the Federal Motor Carrier Safety Administration (FMCSA) and its state partners tracked the safety of motor carriers--companies that own these vehicles--by conducting resource-intensive compliance reviews of a small percentage of carriers. In 2004, FMCSA began its Compliance, Safety, and Accountability (CSA) program. CSA is intended to identify and evaluate carriers and drivers posing high safety risks. FMCSA has focused on three key CSA oversight activities to evaluate carriers: a new Safety Measurement System (SMS) using more roadside inspection and other data to identify at-risk carriers; a wider range of "interventions" to reach more at-risk carriers; and using SMS data to suspend unfit carriers. FMCSA expected to fully implement CSA by late 2010. FMCSA also plans to separately use data to rate drivers' fitness. In this report, GAO assessed: (1) the status of the CSA rollout and issues that could affect it and (2) CSA's potential to improve safety. GAO reviewed CSA plans and data, visited eight states, and interviewed FMCSA, state, and industry officials. Close to a year after the anticipated completion date, FMCSA has partially implemented two of the three planned CSA carrier oversight activities--the new SMS and an expanded set of interventions--in all states; however, it still cannot use CSA safety ratings to get unsafe carriers off the road because it has not completed a rulemaking needed to do so. Specifically, (1) FMCSA implemented SMS in 2010, as scheduled, to replace the prior system, known as SafeStat. The system allows FMCSA to evaluate, score and rank the safety of carriers and identify at-risk carriers needing intervention. However, states have had to expend resources to respond to carriers that have requested reviews of inspection violations shown in the system. 
(2) FMCSA has implemented most of the expanded array of enforcement interventions for at-risk carriers, including issuing warning letters and initiating focused reviews of carriers' safety operations that allow FMCSA to reach more at-risk carriers; however, it has delayed implementation of two interventions--Off-site Investigations and Cooperative Safety Plans--because the technology needed to implement them will not be completed until at least 2012. (3) FMCSA has not yet begun using SMS data to suspend unfit carriers, and is 2 years behind in issuing and completing the rulemaking needed to use these data instead of a time-consuming compliance review. FMCSA expects to finalize the rulemaking in 2013. In addition, FMCSA has had mixed success managing implementation of CSA oversight activities thus far. FMCSA performed well in conducting outreach to carriers and responding to stakeholder concerns, but experienced difficulties in realigning its workforce for CSA and adapting staff to CSA's new safety paradigm. FMCSA has not provided comprehensive information to Congress and the public on the risks associated with either the delayed carrier intervention activities or operational and management issues that arose during implementation and its plans to mitigate these risks; thus Congress may lack information needed to make decisions about CSA. Moreover, FMCSA has taken initial steps to separately measure drivers' fitness to operate trucks and buses by seeking new legislative authority to prohibit unsafe drivers from operating in interstate commerce. However, FMCSA has not specified time frames for developing this measurement, how it will ultimately be used, or whether delaying the implementation will affect safety. It is too early to definitively assess the extent to which CSA will improve truck and bus safety nationwide. 
Data from a pilot test suggest that SMS and the expanded range of intervention tools provide a more effective means of contacting these carriers and addressing their safety issues. However, CSA’s success depends on the availability of sufficient inspection data for carriers. For example, small carriers are less likely to receive enough roadside inspections to be scored and ranked in SMS. FMCSA has begun but not finished developing performance measures for CSA and has not yet collected the data needed to use them, so the extent to which it can show that CSA improves safety is unclear. GAO recommends that FMCSA (1) develop a plan to implement driver fitness ratings in a reasonable timeframe and (2) regularly report to Congress on problems and delays in implementing CSA and plans to mitigate risks. FMCSA provided technical comments and agreed to consider the recommendations.
Designated uses are the purposes that a state’s waters are intended to serve. Some waters, for example, serve as a drinking water source, while others are designated to serve as a source of recreation (swimming or boating) and/or to support aquatic life. The state must also develop water quality criteria, which specify pollutant limits that determine whether a water body’s designated use is achieved. These water quality criteria can be expressed, for example, as the maximum allowable concentration of a given pollutant such as iron, or as an important physical or biological characteristic that must be met, such as an allowable temperature range. To develop water quality criteria, states rely heavily on EPA-developed “criteria documents.” These documents contain the technical data that allow states to develop the necessary pollutant limits. EPA is responsible for developing and revising criteria documents in a manner that reflects the latest scientific knowledge. States may adopt these criteria as recommended by EPA, adapt them to meet state needs, or develop criteria using other scientifically defensible methods. States are also required to periodically review both their waters’ designated uses and associated criteria, and make changes as appropriate. Before those changes can take effect, the state must submit them to EPA and obtain approval for them. EPA is required to review and approve or disapprove standards changes proposed by a state within 60 to 90 days. Figure 1 illustrates how states use water quality standards to make key decisions on which waters should be targeted for cleanup. States generally determine if a water body’s designated use is achieved by comparing monitoring data with applicable state water quality criteria. 
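In outline, the attainment decision is a comparison of monitoring samples against a numeric criterion. The sketch below uses a hypothetical iron limit and invented sample values to illustrate that logic; it is not an EPA method or actual state data:

```python
# Minimal sketch of the standards comparison: each water body's monitoring
# samples are checked against a numeric criterion (here, a hypothetical
# maximum allowable iron concentration). Water bodies and values are invented.

def meets_criterion(samples_mg_per_l, max_allowed_mg_per_l):
    """A water body attains the criterion only if no sample exceeds the limit."""
    return all(s <= max_allowed_mg_per_l for s in samples_mg_per_l)

iron_limit = 1.0  # hypothetical criterion, mg/L
monitoring_data = {
    "Clear Creek": [0.3, 0.5, 0.8],
    "Mill River": [0.9, 1.4, 0.7],
}

for water_body, samples in monitoring_data.items():
    status = "attains use" if meets_criterion(samples, iron_limit) else "impaired"
    print(f"{water_body}: {status}")
# → Clear Creek: attains use
# → Mill River: impaired
```

The same structure applies whether the criterion is a pollutant concentration or a physical characteristic such as an allowable temperature range; only the limit and the comparison change.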
If the water body fails to meet the applicable standards, the state is required to list that water as “impaired”; calculate a pollution budget under EPA’s Total Maximum Daily Load program that specifies how compliance with the standard can be achieved; and then eventually implement a cleanup plan. Thus, as noted in 2001 by the National Academy of Sciences’ National Research Council, water quality standards are the foundation on which the entire TMDL program rests: if the standards are flawed, all subsequent steps in the TMDL process will be affected. We asked the states to report the total number of designated use changes they adopted from 1997 through 2001. While some states made no use changes, others made over 1,000 changes. At the same time, nearly all states told us that designated use changes are needed. Twenty-eight states reported that between 1 and 20 percent of their water bodies need use changes; 11 states reported that between 21 and 50 percent of their water bodies need use changes; and 5 states reported that over 50 percent of their water bodies need use changes. These percentages suggest that future use changes may dwarf the few thousand made between 1997 and 2001. For example, Missouri’s response noted that while the state did not make any use changes from 1997 through 2001, approximately 25 percent of the state’s water bodies need changes to their recreational designated uses and more changes might be needed for other use categories as well. Similarly, Oregon’s response noted that while the state made no use changes from 1997 through 2001, the state needs designated use changes in over 90 percent of its basins. Many states explained their current need to make designated use changes by noting, among other things, that many of the original use decisions they made during the 1970s were not based on accurate data. 
For example, Utah’s response noted that because of concerns that grant funds would be withheld if designated uses were not assigned quickly, state water quality and wildlife officials set designated uses over a 4- to 5-day period using “best professional judgment.” As states have collected more data in ensuing years, the new data have provided compelling evidence that their uses are either under- or over-protective. In addition to changing designated uses for individual waters to reflect the new data, some states are seeking to develop more subcategories of designated uses to make them more precise and reflective of their waters’ actual uses. For example, a state may wish to create designated use subcategories that distinguish between cold and warm water fisheries, as opposed to a single, more general fishery use. Developing these subcategories of uses has the potential to result in more protective uses in some cases, and less protective uses in others. According to responses to our survey, a key reason state officials have not made more of the needed designated use changes is the uncertainty many of them face over the circumstances in which use changes are acceptable to EPA and the evidence needed to support these changes. EPA regulations specify that in order to remove a designated use, states must provide a reason as to why a use change is needed and demonstrate to EPA that the current designated use is unattainable. To do this, states are required to conduct a use attainability analysis (UAA). A UAA is a structured, scientific assessment of the factors affecting the attainment of the use, which may include physical, chemical, biological, and economic factors. The results of a state’s analysis must be included in its submittal for a use change to EPA. States that want to increase the stringency of a designated use are not required to conduct a UAA. UAAs vary considerably in their scope and complexity and in the time and cost required to complete them. 
They can range from 15-minute evaluations that are recorded on a single worksheet to more complex analyses that might require years to complete. A Virginia water quality official explained, for example, that some of the state’s UAAs are simple exercises using available data, while others require more detailed analysis involving site visits, monitoring, and laboratory work. In their responses to our survey, states reported that the UAAs they conducted in the past 5 years have cost them anywhere from $100 to $300,000. In 1994, EPA published guidance regarding use changes that specifies the reasons states may remove a designated use. Nonetheless, our survey shows that many states are still uncertain about when to conduct UAAs, or about the type or amount of data they need to provide to EPA to justify their proposed use changes. Forty-three percent of states reported that they need additional clarifying UAA guidance. Among them, Oregon’s response explained that water quality officials need guidance on whether a UAA is required to add subcategories of use for particular fish species. Virginia’s response indicated that the state needs guidance on what reasons can justify recreational use changes, noting further that state water quality officials would like to see examples of UAAs conducted in other states. Louisiana’s response similarly called for specific guidance on what type of and how much data are required for UAAs in order for EPA to approve a designated use change with less protective criteria. EPA headquarters and regional officials acknowledge that states are uncertain about how to change their designated uses and believe better guidance would serve to alleviate some of the confusion. Of particular note, officials from 9 of EPA’s 10 regional offices told us that states need better guidance on when designated use changes are appropriate and the data needed to justify a use change. 
Chicago regional officials, for example, explained that the states in their region need clarification on when recreational use changes are appropriate and the data needed to support recreational use changes. In this connection, an official from the San Francisco regional office suggested that headquarters develop a national clearinghouse of approved use changes to provide examples for states and regions of what is considered sufficient justification for a use change. A 2002 EPA draft strategy also recognized that this type of clearinghouse would be useful to the states. The strategy calls on EPA’s Office of Science and Technology to conduct a feasibility study to identify ways to provide a cost-effective clearinghouse. According to EPA, the agency plans to conduct the feasibility study in 2004. EPA headquarters officials have also formed a national working group to address the need for guidance. According to the officials, the group plans to develop outreach and support materials addressing nine areas of concern for recreational uses that states have identified as problematic. In addition, the group plans to develop a Web page that includes examples of approved recreational use changes by the end of 2004. The national work group’s efforts may also help address another concern cited by many states—a lack of consistency among EPA’s regional offices on how they evaluate proposals by their states to change designated uses. Some states’ water quality officials noted in particular that the data needed to justify a use change vary among EPA regions. For example, Rhode Island’s response asserted that the state’s EPA regional office (Boston) requires a much greater burden of proof than EPA guidance suggests or than other regional offices require. The response said that EPA guidance on UAAs should be more uniformly applied by all EPA regional offices. 
Several EPA regional officials acknowledged the inconsistency and cited an absence of national guidance as the primary cause. EPA headquarters officials concurred that regional offices often require different types and amounts of data to justify a use change and noted that inconsistency among EPA regional offices’ approaches has been a long-standing concern. The officials explained that EPA is trying to reduce inconsistencies while maintaining the flexibility needed to meet region-specific conditions by holding regular work group meetings and conference calls between the regional offices and headquarters. While EPA has developed and published criteria documents for a wide range of pollutants, approximately 50 percent of water quality impairments nationwide concern pollutants for which there are no national numeric water quality criteria. Because water quality criteria are the measures by which states determine if designated uses are being attained, they play a role as important as designated uses in states’ decisions regarding the identification and cleanup of impaired waters. If nationally recommended criteria do not exist for key pollutants, or if states have difficulty using or modifying existing criteria, states may not be able to accurately identify water bodies that are not attaining designated uses. Sedimentation is a key pollutant for which numeric water quality criteria need to be developed. In addition, nutrient criteria are currently being developed, and pathogen criteria need to be revised. Together, according to our analysis of EPA data, sediments, nutrients, and pathogens are responsible for about 40 percent of impairments nationwide. (See fig. 2.) Not surprisingly, many states responding to our survey indicated that these pollutants are among those for which numeric criteria are most needed. 
Recognizing the growing importance of pathogens in accounting for the nation’s impaired waters, EPA developed numeric criteria for pathogens in 1986—although states are having difficulty using these criteria and are awaiting additional EPA guidance. EPA is also currently working with states to develop nutrient criteria and has entered into a research phase for sedimentation. EPA explained that the delay in developing and publishing key criteria has been due to various factors, such as the complexity of the criteria and the need for careful scientific analysis, and an essentially flat budget accompanied by a sharply increased workload. EPA also explained that for several decades, the agency and the states focused more on point source discharges of pollution, which can be regulated easily through permits, than on nonpoint sources, which are more difficult to regulate. Even when EPA has developed criteria recommendations, states reported that the criteria cannot always be used because water quality officials sometimes cannot perform the kind of monitoring that the criteria documents specify, particularly in terms of frequency and duration. Our survey asked states about the extent to which they have been able to establish criteria that can be compared with reasonably obtainable monitoring data. About one-third reported that they were able to do so to a “minor” extent or less, about one-third to a “moderate” extent, and about one-third to a “great” extent. Mississippi’s response noted, for example, that the state has adopted criteria specifying that samples must be collected on 4 consecutive days. The state noted, however, that its monitoring and assessment resources are simply insufficient to monitor at that frequency. 
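The mismatch Mississippi describes (a criterion demanding a 4-consecutive-day sampling run that the state's monitoring program cannot deliver) amounts to a simple feasibility check on the monitoring record. The dates and run-length requirement below are invented for illustration, not an EPA or state tool:

```python
# Illustrative check: can a criterion requiring samples on 4 consecutive
# days even be applied, given the dates on which samples were collected?
from datetime import date

def has_consecutive_run(sample_dates, run_length=4):
    """True if the record contains run_length samples on consecutive days."""
    days = sorted(set(sample_dates))
    if run_length <= 1:
        return len(days) >= 1
    streak = 1
    for prev, cur in zip(days, days[1:]):
        streak = streak + 1 if (cur - prev).days == 1 else 1
        if streak >= run_length:
            return True
    return False

sparse = [date(2003, 6, d) for d in (1, 8, 15, 22)]       # weekly sampling
intensive = [date(2003, 6, d) for d in (10, 11, 12, 13)]  # 4 consecutive days
print(has_consecutive_run(sparse), has_consecutive_run(intensive))
# → False True
```

A weekly monitoring schedule, common where resources are limited, never satisfies such a criterion, which is the discrepancy the National Research Council flagged.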
Mississippi is not alone: a 2001 report by the National Research Council found that there is often a “fundamental discrepancy between the criteria used to determine whether a water body is achieving its designated use and the frequency with which water quality data are collected.” To address this discrepancy, regional EPA officials have suggested that EPA work with the states to develop alternative methods for determining if water bodies are meeting their criteria, such as a random sampling approach to identify and set priorities for impaired waters. If a state believes that it can improve its criteria, it has the option of modifying them—with EPA’s approval. In fact, states are required to review and modify their criteria periodically. A state might modify a criterion, for example, if new information becomes available that better reflects local variations in pollutant chemistry and corresponding biological effects. In response to our survey, 43 states reported that it is “somewhat” to “very” difficult to modify criteria. Not surprisingly, a vast majority of states reported that a lack of resources (including data, funding, and expertise) complicates this task. Nevada’s response, for example, explained that, like many states, it typically relies on EPA’s recommended criteria because of limited experience in developing criteria as well as limited resources; in many instances, developing site-specific criteria would better reflect unique conditions, allowing for better protection of designated uses. Significantly, however, more than half of the states reported that EPA’s approval process serves as a barrier when they try to modify their criteria. In this connection, respondents also noted that EPA’s regional offices are inconsistent in the type and amount of data they deem sufficient to justify a criteria change. Some regional officials told us that this inconsistency is explained, in part, by staff turnover in the regional offices. 
Likewise, a 2000 EPA report found that less tenured staff in some regional offices often lack the technical experience and skill to work with the states in determining the “scientific feasibility” of state-proposed criteria modifications. Our report concluded that additional headquarters guidance and training of its regional water quality standards staff would help facilitate meritorious criteria modifications while protecting against modifications that would result in environmental harm. Because designated uses and criteria constitute states’ water quality standards, a change in either is considered a standards modification. We first asked the states whether an improvement in the process of changing designated uses would result in different water bodies being slated for cleanup within their states, and 22 states reported affirmatively. We then asked the states whether an improvement in the process of modifying criteria would result in different water bodies being slated for cleanup within their states, and 22 states reported affirmatively. As figure 3 shows, when we superimposed the states’ responses to obtain the cumulative effect of improving either designated uses or the process of criteria modification, a total of 30 states indicated that an improvement in the process of modifying standards (whether a change in their designated uses, their criteria, or both) would result in different water bodies being slated for cleanup. Importantly, the 30-state total does not reflect the impacts that would result from EPA’s publication (and states’ subsequent adoption) of new criteria for sedimentation and other pollutants, nor does it reflect states’ ongoing adoption of nutrient criteria. As these criteria are issued in coming years, states will adopt numeric criteria for these key pollutants, which, in turn, will likely affect which waters the states target for cleanup. 
To help ensure that both designated uses and water quality criteria serve as a valid basis for decisions on which of the nation’s waters should be targeted for cleanup, we recommended that the Administrator of EPA take several actions to strengthen the water quality standards program. To improve designated uses, we recommended that EPA (1) develop additional guidance on designated use changes to better clarify for the states and regional offices when a use change is appropriate, what data are needed to justify the change, and how to establish subcategories of uses and (2) follow through on its plans to assess the feasibility of establishing a clearinghouse of approved designated use changes by 2004. To improve water quality criteria, we recommended that EPA (1) set a time frame for developing and publishing nationally recommended sedimentation criteria, (2) develop alternative, scientifically defensible monitoring strategies that states can use to determine if water bodies are meeting their water quality criteria, and (3) develop guidance and a training strategy that will help EPA regional staff determine the scientific defensibility of proposed criteria modifications. According to officials with EPA’s Water Quality Standards Program, the agency agrees with our recommendations, has taken some steps to address them, and is planning additional action. They note that, thus far, EPA staff have already met with a large number of states to identify difficulties the states face when attempting to modify their designated uses. The officials also noted that, among other things, they plan to release support materials to the states regarding designated use changes; develop a Web page that provides examples of approved use changes; and develop a strategy for developing sedimentation criteria by the end of 2003. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the Subcommittee may have at this time. 
For further information, please contact John B. Stephenson at (202) 512- 3841. Individuals making key contributions to this testimony included Steve Elstein and Barbara Patterson. Other contributors included Leah DeWolf, Laura Gatz, Emmy Rhine, Katheryn Summers, and Michelle K. Treistman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Water quality standards comprise designated uses and water quality criteria. These standards are critical in making accurate, scientifically based determinations about which of the nation's waters are most in need of cleanup. GAO examined the extent to which (1) states are changing designated uses when necessary, (2) EPA is assisting states toward that end, (3) EPA is updating the "criteria documents" states use to develop the pollutant limits needed to measure whether designated uses are being attained, and (4) EPA is assisting states in establishing criteria that can be compared with reasonably obtainable monitoring data. The extent to which states are changing designated uses varies considerably. Individual states made anywhere from no use changes to over 1,000 use changes during the 5-year period, from 1997 through 2001. Regardless of the number of use changes states made, nearly all states report that some water bodies within their states currently need changes to their designated uses. To do so, many states said they need additional EPA assistance to clarify the circumstances in which use changes are acceptable to EPA and the evidence needed to support those changes. While EPA has developed and published criteria for a wide range of pollutants, the agency has not updated its criteria documents to include sedimentation and other key pollutants that are causing approximately 50 percent of water quality impairments nationwide. In addition to needing new criteria documents, states need assistance from EPA in establishing criteria so that they can be compared with reasonably obtainable monitoring data. Changing either designated uses or criteria is considered a standards modification. Twenty-two states reported that an improvement in the process for changing designated uses would result in different water bodies being slated for cleanup; 22 states also reported that an improvement in the process for modifying criteria would have that effect. 
Collectively, 30 states would have different water bodies slated for cleanup with an improvement in the process of modifying standards.
To provide, consistent with its mission, postal services to everyone in the United States, the Postal Service operates an expansive network of facilities throughout the nation. Our analysis of Postal Service data shows there are approximately 34,000 active owned or leased postal facilities, most of which provide retail services. The Postal Service 2006 annual report indicates that its facilities are valued at approximately $21 billion. In addition, according to the Postal Service, it paid approximately $860 million for capital projects and maintenance repairs in fiscal year 2006. About three-fourths of postal facilities are leased, and in fiscal year 2006, the Postal Service paid over $1 billion to lease these facilities. The Postal Service is responsible for maintaining all of its owned facilities and many of its leased facilities, but maintenance responsibilities vary with the specific lease and can change when a lease is renegotiated. Employees at postal retail facilities provide services related to First-Class Mail, Insured and Registered Mail, Parcel Post, Priority Mail, and other services, such as post office box rentals and money-order purchases. Some retail facilities also provide space for other functions, such as receiving and sorting mail for delivery. Retail facilities—whether owned or leased— fall into one of three categories: (1) main post offices, where local postmasters oversee retail operations in the geographic area; (2) postal stations located within a municipality’s corporate limits; and (3) postal branches located outside a municipality’s corporate limits. According to Postal Service data, main post offices account for almost 75 percent of all retail facilities, but in large communities there may be more stations and branches. The Postal Service also operates nonretail facilities, such as mail processing facilities, vehicle maintenance facilities, and administrative offices. 
Figure 1 provides information on the different types of postal facilities. Besides the retail facilities it owns or leases, the Postal Service reported in its 2006 annual report that it operates about 3,950 privately owned and operated facilities, known as either “contract postal units” or “community post offices,” which provide retail postal services. Contract postal units are operated by nonpostal employees in privately operated businesses, such as convenience stores, grocery stores, greeting card stores, and pharmacies. Community post offices are contract postal units that are located in small communities and function as main post offices. The Postal Service reported that there were 3,014 contract postal units and 937 community post offices throughout the nation. The Postal Service has no responsibility for maintaining these privately operated retail facilities. Responsibility for managing postal facilities is distributed across the Postal Service. Three Postal Service departments in headquarters share responsibility for managing data on postal facilities—(1) Delivery and Retail, (2) Facilities, and (3) Intelligent Mail and Address Quality. Each of these departments tracks different data on postal facilities depending on its needs. Delivery and Retail manages FDB, which was developed to consolidate information on all postal facilities. Facilities manages the Facility Management System, which is used to manage facility acquisitions and capital and expense projects. Intelligent Mail and Address Quality administers the Address Management System, which contains the addresses of about 160 million delivery locations nationwide, including postal facilities. Both the Facility Management System and the Address Management System feed information directly into FDB. The Postal Service’s Vice President for Facilities oversees the maintenance of facilities nationwide in conjunction with eight regionally based Facilities Service Offices, which manage the maintenance activities in the Postal Service’s areas. 
The Postal Service’s Vice President for Delivery and Retail—in conjunction with the nine area vice presidents, 80 district managers, and almost 24,000 local postmasters throughout the country—has nationwide responsibility for aligning the Postal Service’s retail network with customer needs (see fig. 2). The district managers are responsible for making recommendations to open or close a postal retail facility, while the Vice President for Delivery and Retail is responsible for acting on the recommendations. In 2001, we placed the Postal Service on our list of agencies and programs designated as high risk due to, among other factors, financial and operational challenges. The Postal Service responded by issuing a plan to “transform” its operations—the 2002 Transformation Plan. Specific strategies outlined in the plan called for the Postal Service to align its retail network with customer needs by promoting alternative access to retail services and closing “low-value, redundant” postal retail facilities in overserved areas. To obtain information needed to align access to retail services with customer needs, the Postal Service indicated that it would establish a national facility database and develop a criteria-based methodology to determine which facilities to close. The criteria were to include factors, such as a facility’s proximity to other postal facilities, the number of households and delivery points in a community, and indicators of retail productivity. In 2004, we reported that the Postal Service had not yet developed criteria for making changes to its retail facility network, including facility closures and consolidations. The Postal Service updated its Transformation Plan in 2005 for fiscal years 2006 through 2010, indicating that it intended to reduce its long-term facility repair costs by doing more focused, routine facility assessments and preventive maintenance. 
The postal reform legislation enacted in December 2006 provided additional opportunities to address the challenges the Postal Service faces in adapting to an increasingly competitive environment. The act provides tools and mechanisms to help control costs, including the costs of aligning its facility network with its customer needs. The act also requires the Postal Service to develop a plan by June 2008 to “rationalize” its network of facilities, remove excess capacity from the network, and identify anticipated cost savings and other benefits associated with network rationalization. In 2007, we removed the Postal Service’s high-risk designation because the Postal Service addressed several concerns we raised when we originally placed the Postal Service on the high-risk list in 2001 and because of the passage of postal reform legislation. Developed in 2003, FDB has not achieved the Postal Service’s anticipated goal or conformed to a leading federal practice that calls for a consolidated source of accurate facility data because the data entered into FDB are not reliable. As a result, the Postal Service indicated that several major Postal Service departments do not use FDB for aggregate facility information, partly because of concerns about its reliability. These concerns result from, among other things, inaccurate data entered into the systems that feed into FDB, problems with how the systems are linked to form an FDB facility entry, and mistakes in entering data directly into FDB. The Postal Service has taken steps to improve the database, but systemic problems remain. In addition, FDB does not meet leading federal practices for tracking facility management performance and trends because FDB does not include facility management performance measures or provide for tracking trends over time. 
While the Postal Service created FDB in 2003 to achieve its goal of a consolidated source of accurate facility data, it has not achieved that goal due to data reliability problems. According to the Postal Service, it established FDB because its prior use of multiple databases caused confusion, data inaccuracies, problems in decision making, and higher costs. However, based on our analysis of selected data fields, FDB data problems make it unreliable as a centralized source of aggregate facility data even after performing data-cleaning techniques on the raw data. Specifically, our assessment revealed the following nationwide problems with the data’s reliability as of October 5, 2006: 145 facility entries were exact duplicates of another facility entry (all data fields were the same); 1,931 facility entries had multiple retail facilities listed at the same address (e.g., a main post office and a station were listed with the same address); 1,288 facility entries had different amounts of square footage listed for the same facility; 892 facility entries had conflicting information on whether the facility is owned or leased; 1,216 facility entries had conflicting data on the amount of the facility’s annual lease payment; and 509 facility entries were listed as having staffed more retail windows than reportedly exist at the facility. The Lubbock (Texas) Main Post Office illustrates several of the data reliability problems we found. Specifically, in FDB data reported as of July 2007, the main post office is listed four times—each with different (1) square footage amounts, (2) ownership information, and (3) lease payment amounts. Except for information on the annual rent—which the Postal Service considers sensitive—table 1 displays these FDB fields for the Lubbock Main Post Office. We discussed several specific examples of the data reliability problems we found with postal officials who, after researching multiple data sources, provided explanations for them. 
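The consistency checks described above can be sketched in a few lines. The following is an illustrative sketch only, using a hypothetical facility table; the column names and sample rows are our assumptions, not actual FDB fields:

```python
# Illustrative data-reliability checks of the kind described above; this is
# not Postal Service code, and the schema below is invented for the example.
import pandas as pd

fdb = pd.DataFrame([
    {"facility_id": 1, "address": "411 L AVE", "sq_ft": 90000, "owned": True},
    {"facility_id": 1, "address": "411 L AVE", "sq_ft": 90000, "owned": True},   # exact duplicate
    {"facility_id": 2, "address": "411 L AVE", "sq_ft": 12000, "owned": False},  # shared address
    {"facility_id": 3, "address": "9 MAIN ST", "sq_ft": 5000,  "owned": True},
    {"facility_id": 3, "address": "9 MAIN ST", "sq_ft": 7500,  "owned": True},   # conflicting size
])

# 1) Exact duplicates: every data field identical.
exact_dups = fdb[fdb.duplicated(keep=False)]

# 2) Multiple facilities listed at the same address.
addr_counts = fdb.drop_duplicates().groupby("address")["facility_id"].nunique()
shared_addresses = addr_counts[addr_counts > 1]

# 3) Conflicting square footage listed for the same facility.
sqft_counts = fdb.groupby("facility_id")["sq_ft"].nunique()
conflicting_sqft = sqft_counts[sqft_counts > 1]
```

Checks like these could be run routinely against the full database to surface entries needing local validation.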
For example, with respect to the Lubbock Main Post Office, postal officials said that the first entry in table 1 is correct while the other three entries actually represent other postal facilities at different locations in Lubbock. In addition, using another data source, they told us that entry 2 is actually a carrier annex; entry 3 is a warehouse; and entry 4 is a postal vending machine. Using other data sources, the Postal Service officials also provided explanations for other specific problems we identified. However, without additional site visits, we cannot determine whether the explanations provided were accurate. We also identified incorrect FDB information for 24 of the 58 postal facilities we visited. For example, during our site visits, we found that information on vacant leasable square footage, which the Postal Service asks local employees to document, was inaccurate in FDB. At least six of the facilities we visited had vacant space that local employees said could be leased, but these facilities were not listed as having vacant, leasable space in FDB. These facilities and their vacant space are shown in figure 3. Postal officials acknowledged our examples and noted there are few incentives for local officials to report facilities’ vacant, leasable space in FDB. Postal officials acknowledged that the Postal Service has not analyzed the reliability of FDB data but expressed confidence that the problems we found affect a small percentage of the Postal Service’s facilities. While we cannot definitively determine the overall magnitude of FDB’s data reliability problems, the cumulative effect of the problems we found could significantly distort the reporting of aggregate facility statistics. 
For example, conflicting ownership data in FDB, as illustrated for the Lubbock facility in table 1, could cause the Postal Service’s annual rent obligation in FDB to vary by as much as $82 million, or more than 8 percent of the Postal Service’s reported rent obligation for fiscal year 2006. Inaccurate ownership data are of particular concern because the Postal Service used FDB for its aggregate ownership statistics in its 2006 annual report. Although the Postal Service developed FDB over 5 years ago to provide a consolidated source of facility data, the Postal Service continues to operate and use various facility data sources. Furthermore, while postal officials acknowledged that postal staff did not use FDB initially because the data were not reliable, our analysis demonstrates that FDB is still not sufficiently reliable for use as a consolidated data source for postal facilities. The Postal Service indicated that several major Postal Service departments do not use FDB for aggregate facility information, partly because of concerns about its reliability. Thus, as shown in figure 4, instead of exclusively relying on FDB for facility information in its 2006 annual report, the Postal Service used multiple sources of data, including its Address Management System, for quantifying its retail and delivery facilities by type. We found that FDB’s data reliability problems were caused by (1) errors in the systems that feed into FDB, (2) problems with how the systems are linked to form an FDB facility entry, and (3) errors in inputting FDB data. First, as stated earlier, some information is fed directly from both the Address Management and Facility Management systems into FDB, causing errors in either system to automatically feed into FDB. While we did not fully evaluate the Address Management System—the Postal Service’s database for all delivery points in the country—we found errors in the Address Management System data that the Postal Service used in its 2006 annual report. 
Specifically, we found instances of (1) duplicate entries for the same facility; (2) multiple facilities with the same function (e.g., main post office) listed at the same address; and (3) contractor and Postal Service-operated facilities listed at the same address. During our site visits, we also found that the address contained in the Address Management System and fed into FDB for the O’Hare Terminal 2 Finance Station in Chicago, Illinois, was incorrect. In addition to containing errors, in December 2006, the Postal OIG reported that the Address Management System was incomplete because it did not contain information on all postal facilities. During our site visits, we also found errors that were fed directly into FDB from the Facility Management System. For example, the square footage in FDB for the Colleyville Main Post Office in Texas was incorrect because it did not reflect the sale of a significant portion of land that occurred 3 years prior to the entry. Postal Service officials acknowledge that errors in the Address Management and Facility Management systems feed directly into FDB. Second, many of the FDB errors we found resulted when facility entries in the Address Management and Facility Management Systems were incorrectly linked to create an FDB facility entry. FDB facility entries are created by linking facility entries in the Address Management and Facility Management Systems. However, these systems do not use the same convention for naming facilities, and therefore, they cannot be linked automatically to create an FDB facility entry. Instead, according to Postal Service officials, postal employees must manually link facility entries in the two systems. In some instances, the manual process resulted in linkage errors which, according to postal officials, caused some of the duplicate facility entries and contradictory information in FDB that we identified. 
For example, the Cumberland Main Post Office in Maryland, which we visited, had four entries in FDB—one for the Main Post Office and the other three for other postal facilities in Cumberland that were incorrectly linked to the main post office’s address. The Postal Service corrected the FDB entry for this facility when we brought this problem to the Postal Service’s attention, but the problem remains at other locations. For example, the Lubbock Main Post Office, which was discussed previously (see table 1), was incorrectly linked to multiple facility entries within the Facility Management System. The error occurred because FDB lacks internal controls, such as edit checks, to prohibit postal staff from linking an FDB facility to multiple facility entries in the Facility Management System. According to Postal Service officials, there were 762 facilities in FDB as of September 2007 that were mistakenly linked to multiple Facility Management System facility entries. Third, we found that some of the errors in FDB were caused by local employees entering incorrect information directly into FDB. Although information on a facility’s address, size, and ownership is fed automatically into FDB from the Address Management and Facility Management Systems, other data are entered manually into FDB by local postal employees. For example, local employees enter the number of retail windows (e.g., sales areas) at each facility and the number of those windows that are typically staffed. Our analysis revealed that local officials make numerous mistakes entering this and other information into FDB. For example, 509 retail postal facilities are listed in FDB as staffing more retail windows than reportedly exist at the facility. Implementing an edit check in FDB would eliminate this type of reporting error. As shown in figure 3, FDB data on vacant, leasable space entered by local employees are also often in error. 
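Edit checks of the kind described above are straightforward to implement. The following is a hedged sketch, with function names and data structures that are our assumptions rather than the Postal Service's, showing both a one-to-one linkage check and a retail-window check:

```python
# Illustrative edit checks; not actual FDB code. Identifiers are invented.

def link_facility(links: dict, fdb_id: str, fms_id: str) -> None:
    """Reject linking one FDB facility to a second Facility Management System entry."""
    if fdb_id in links and links[fdb_id] != fms_id:
        raise ValueError(f"{fdb_id} is already linked to {links[fdb_id]}")
    links[fdb_id] = fms_id

def validate_retail_windows(total_windows: int, staffed_windows: int) -> None:
    """Reject entries reporting more staffed retail windows than exist at the facility."""
    if staffed_windows > total_windows:
        raise ValueError("staffed windows cannot exceed total windows")

links = {}
link_facility(links, "LUBBOCK-MAIN", "FMS-0001")      # first link is accepted
try:
    link_facility(links, "LUBBOCK-MAIN", "FMS-0002")  # second link is rejected
    duplicate_link_rejected = False
except ValueError:
    duplicate_link_rejected = True
```

Enforcing such rules at data entry, rather than cleaning data afterward, would prevent both the 762 multiple-link errors and the 509 retail-window errors the report describes from entering the database in the first place.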
While postal officials are aware of errors in FDB and have taken several actions to improve the quality of the data, these actions have not yet corrected all of the problems we identified. First, in response to a 2006 recommendation by the Postal OIG, the Postal Service has started requiring local employees to validate and correct FDB data periodically for their facilities. While this validation process could, in our view, help identify and correct some of the errors we found, the Postal Service’s validation completed in February 2007 was not entirely successful for a number of reasons. Specifically, mistakes that could have been corrected locally were often not corrected. For example, data for one retail postal facility, validated by a postal employee on February 7, 2007, and still available as of May 2007, indicated that the facility staffed twice as many retail windows as exist at the facility. We discussed this data discrepancy with Postal Service officials, and they corrected the facility’s FDB entry. While this specific discrepancy was corrected, our assessment of FDB data as of July 7, 2007, identified 354 instances of this problem. However, even if local employees corrected these and other inaccuracies, problems would remain because local employees cannot correct errors in FDB fed directly from other systems, such as Facility Management System data on square footage and ownership status. Errors of this type can only be corrected by personnel administering the Facility Management or Address Management Systems. Second, to avoid manual linking errors and to improve the accuracy of the linkage used to create FDB facility entries, postal officials are planning to automatically link the Address Management and Facility Management Systems’ facility entries. 
According to the Postal Service, Address Management System administrators must first apply the standard facility naming convention used by the Facility Management System and thus create a unique identifier for linking facility entries in the Address Management and Facility Management Systems. Automation could help reduce the frequency of future linking mistakes, but the Postal Service must correct existing errors before automating the process. Postal officials had expected to begin correcting existing errors in October 2006; however, as of September 2007, the effort had not yet begun. Finally, the Postal Service responded to a Postal OIG recommendation for improving the quality and completeness of data entered into FDB by completing field training sessions for FDB users and issuing a new FDB user’s guide in 2007. These actions are too recent to gauge their effectiveness. Even if the data in FDB were reliable, they would not meet leading federal practices for facility data because FDB (1) does not contain fields for the four performance measures recommended by the Federal Council—a facility’s importance, utilization rate, condition, and annual operating costs—and (2) does not allow for tracking trends. The federal leading practices are intended to, among other things, help agencies measure their progress in managing their facilities and identify properties for disposal or investment. Postal Service officials said none of the Postal Service’s facility databases, including FDB, were designed for these purposes. In addition, they noted the Postal Service is not bound by the executive order on federal real property asset management and, consequently, is not required to adopt leading federal practices, such as the implementation of performance measures. Even if the Postal Service collected data on its performance, it could not measure its performance over time because it does not retain or archive FDB data at regular intervals (e.g., annually). 
The Postal Service has initiated actions to assess the condition of its facilities, but has not yet assessed the magnitude of its maintenance backlog or strategically prioritized its maintenance projects—a leading federal practice. According to postal officials, the Postal Service has historically underfunded its maintenance needs, resulting in the deterioration of its facilities. While there is some evidence that many postal facilities are in less than acceptable condition, the magnitude of the challenge is unknown. To learn more about the condition of all its facilities, the Postal Service has started implementing self-assessments conducted by local employees for small facilities and more intensive assessments for larger facilities. Numerous Postal Service officials told us that insufficient funding has caused the Postal Service to focus solely on urgent repairs—instead of routine, preventive maintenance—which could lead to more costly repairs over time. Consistent with a leading federal practice, agencies can maximize the value of maintenance funding by using facility management performance data to identify and prioritize their greatest maintenance needs, but the Postal Service cannot adopt this practice because it does not systematically capture the necessary data. The Postal Service has not comprehensively assessed the condition of its facilities, but the amount it has recently spent on facility maintenance— $712 million from fiscal year 2003 through fiscal year 2006—has been insufficient, postal officials said, to address its facility maintenance needs. Postal officials also said several years of underfunding have caused postal facilities to deteriorate and many are in need of repairs. Evidence supporting the officials’ statements includes a 2005 assessment conducted by a contractor of 651 randomly selected owned and leased postal facilities. 
According to this assessment, two-thirds of the facilities were in less than “acceptable” condition, including 22 percent that were in “poor” condition. However, the Postal Service will not know the magnitude of the deterioration or the extent of its maintenance backlog until it fully assesses the condition of all its facilities. In 2007, to begin assessing the condition of its facilities, the Postal Service requested local employees to conduct self-assessments of their facilities. Local employees responded to the request by assessing over 29,000 facilities and identifying 73,500 maintenance needs estimated to cost $236 million. While these assessments provide additional information on the condition of postal facilities, they are not comprehensive because thousands of facilities were not assessed and the local employees are not formally trained to conduct facility assessments. Over the next 3 years, the Postal Service plans to conduct a more comprehensive, three-part program to assess the condition of all its facilities. The first part of the program involves facilities that are less than 6,500 square feet. For these facilities, the Postal Service plans to ask local employees to complete annual self-assessments similar to the ones completed in 2007. The second part of the program involves facilities with 6,500 to 100,000 square feet of interior space. For these facilities, the Postal Service plans to conduct more detailed condition assessments once every 3 years using contract building inspectors beginning in the summer of 2007. According to Postal Service officials, the Postal Service established the 6,500-square-foot threshold because the most complicated, important facilities are generally larger. The third part of the program involves the Postal Service’s largest mail processing facilities. 
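The size thresholds of the three-part assessment program can be summarized as a small decision rule. This sketch is ours; the function name and tier labels are descriptive assumptions, not official Postal Service terms:

```python
# Illustrative mapping of a facility to its assessment tier, per the size
# thresholds described above; names and labels are our assumptions.
def assessment_tier(interior_sq_ft: int, largest_processing_plant: bool = False) -> str:
    if largest_processing_plant:
        # Part 3: annual on-site staff assessment, plus an architectural and
        # engineering firm review once every 5 years.
        return "annual staff assessment; A/E firm review every 5 years"
    if interior_sq_ft < 6_500:
        # Part 1: annual self-assessment by local employees.
        return "annual local self-assessment"
    if interior_sq_ft <= 100_000:
        # Part 2: detailed assessment by contract inspectors every 3 years.
        return "contract inspector assessment every 3 years"
    # The report does not specify a tier for larger non-processing facilities.
    return "not covered by the two size-based tiers"
```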
For these facilities, the Postal Service plans to use on-site postal maintenance staff to conduct annual physical assessments of the buildings and, once every 5 years, employ architectural and engineering firms to conduct more thorough assessments. According to the plans for the program, local Postal Service employees or facility inspectors will enter information collected from each assessment into a new database called the Infrastructure Condition Assessment Model, which will allow contract facility inspectors to estimate the urgency and cost of each identified repair project. The new database will feed into the Postal Service’s existing maintenance tracking system and, according to postal officials, will be used to budget and prioritize urgent maintenance projects for repair. Figure 5 illustrates the Postal Service’s three-part facility condition assessment program. Postal Service officials with responsibility for facility maintenance at the national, area, and district levels said that the Postal Service has underfunded its maintenance for years and suspects that this underfunding has resulted in deteriorating facilities and a large maintenance backlog. Postal officials told us that this insufficient funding has caused the Postal Service to focus exclusively on reactive maintenance—that is, “emergency” and “urgent” repairs—at the expense of routine repairs. In addition, according to the Postal OIG, insufficient funding for repairs and maintenance may be hampering the Postal Service’s ability to adopt a preventive maintenance approach. A 2000 Postal OIG report described the Dallas Downtown Station in Texas as deteriorated and attributed its deterioration to deferred maintenance that “increased the risk of injury to Postal Service employees and customers, and has compromised Postal Service property and the safety and security of the mail.” The Postal OIG recommended immediate evacuation of the facility until needed repairs could be made. 
When we visited the Dallas Downtown Station in 2007, the Postal Service had repaired the facility at a cost of $12 million. The Postal Service could have avoided some of those costs if it had done more preventive maintenance. Other facilities we visited had not yet been repaired—including facilities with chronically leaking roofs and visible interior and exterior damage. Figure 6 illustrates maintenance issues we observed during our site visits. In its Strategic Transformation Plan for 2006 through 2010, the Postal Service established a goal of reducing its repair costs “through more focused routine building assessments and better planning to fix small problems as soon as possible.” In a May 2007 report, the Postal OIG concurred with this approach, recommending that the Postal Service adopt more preventive maintenance practices by, among other things, regularly assessing postal facilities to identify repair needs and better leveraging the Postal Service’s limited financial resources. This approach is consistent with a recommendation made by the National Research Council of the National Academies in 1990 that suggested federal agencies regularly assess the condition of their facilities and do preventive maintenance to avoid costly future repairs. A reactive maintenance approach is ultimately more expensive, partly because it shortens the useful life of equipment and facilities and necessitates, among other things, more costly future repairs. To that point, a 2004 National Research Council study cited an estimate that each dollar in deferred maintenance results in a long-term liability of $4 to $5 for future repair costs. The Postal Service bases maintenance priorities on urgency. For example, roof issues take priority over nonstructural interior maintenance needs. 
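Applying the National Research Council's multiplier to the $236 million in maintenance needs identified in the 2007 self-assessments gives a rough sense of the stakes. This is a back-of-envelope illustration only, not a GAO estimate:

```python
# Each $1 of deferred maintenance -> $4 to $5 of future repair cost
# (per the 2004 National Research Council estimate cited above).
deferred = 236_000_000  # needs identified in the 2007 self-assessments
low, high = 4 * deferred, 5 * deferred
print(f"Implied future liability if deferred: ${low/1e9:.2f}B to ${high/1e9:.2f}B")
```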
While urgency is important for prioritizing maintenance spending, leading federal practices consider other important measures, such as a facility’s (1) importance to an agency’s mission, (2) utilization rate, (3) condition, and (4) annual operating costs. The Postal Service’s three-part assessment program will provide data on the condition of its facilities, but the Postal Service will not be able to prioritize repairs strategically since it does not capture data on its facilities’ importance, utilization rate, and annual operating costs to inform its maintenance decisions. For example, a Postal Service official who is responsible for managing maintenance throughout a large geographic area told us that he cannot consider a facility’s importance to the Postal Service’s mission when prioritizing maintenance projects because the Postal Service does not capture this information. Furthermore, the Postal Service does not currently know the replacement value of its facilities—essential information for evaluating a facility’s overall condition. Adopting federal facility management performance measures would help the Postal Service establish a strategic approach to facility maintenance by allowing the agency to better identify its most important facilities, prioritize its maintenance needs, and allocate its maintenance funds accordingly. The consequences of not considering a facility’s importance, utilization rate, and annual operating costs were evident at the Downtown Fort Worth Station in Texas, which we visited. The Postal Service spent about $1 million to repair it in fiscal year 2006 even though the station remains in deteriorating condition, is largely vacant, and does not appear critical to the Postal Service’s mission since the remaining retail and carrier functions could be housed elsewhere in a smaller facility. 
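The leading federal practice of weighing a facility's mission importance, utilization, condition, and operating costs alongside urgency can be sketched as a simple scoring function. Everything below—the field names, the weights, and the two sample facilities—is a hypothetical illustration, not Postal Service data or an established federal formula.

```python
# Hypothetical sketch of criteria-based maintenance prioritization.
# Weights and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    urgency: float             # 0-1, from condition assessments
    mission_importance: float  # 0-1, contribution to universal service
    utilization: float         # 0-1, share of the building in active use
    condition_index: float     # 0-1, where 1.0 means good condition
    cost_to_revenue: float     # annual operating cost / annual revenue

def priority_score(f: Facility) -> float:
    """Higher score = repair sooner. Urgency dominates, but importance,
    utilization, poor condition, and high operating costs all contribute."""
    return (0.40 * f.urgency
            + 0.25 * f.mission_importance
            + 0.15 * f.utilization
            + 0.10 * (1.0 - f.condition_index)
            + 0.10 * min(f.cost_to_revenue, 1.0))

facilities = [
    Facility("Largely vacant station", urgency=0.9, mission_importance=0.2,
             utilization=0.25, condition_index=0.3, cost_to_revenue=0.8),
    Facility("Main post office", urgency=0.6, mission_importance=0.9,
             utilization=0.85, condition_index=0.6, cost_to_revenue=0.4),
]
ranked = sorted(facilities, key=priority_score, reverse=True)
# Under these weights, the mission-critical main office outranks the vacant
# station despite the vacant station's higher urgency.
```

Under these illustrative weights, a mission-critical, well-utilized facility can outrank a largely vacant one even with lower urgency—exactly the kind of trade-off that urgency-only prioritization cannot surface.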
Local postal officials said the Postal Service has considered disposing of the station for years but, instead, repaired it because no decision had been made on whether to retain it. Figure 7 illustrates the size, condition, and utilization of the Downtown Fort Worth Station at the time of our visit. To address the challenge of aligning retail access with customer needs, the Postal Service has expanded alternative access to its services in underserved areas but has done less to curtail services in overserved areas. Our analysis shows wide variation in the number of postal retail facilities among counties of similar population, land area, and degree of urbanization—demonstrating the Postal Service’s challenge of placing facilities where they are needed. To address this challenge in underserved areas, the Postal Service has expanded access to retail services through alternative access options and has set goals for the use of these options. However, the Postal Service has not actively pursued a goal in its 2002 Transformation Plan to proactively identify and close unneeded retail facilities in overserved areas. This inaction does not conform to leading federal practices, which suggest that an agency consider closing facilities that are not critical to achieving the agency’s mission, are in poor condition, are not fully utilized, or are costly to operate relative to their revenue. The number of facilities that provide postal retail services varies widely among counties of similar population, land area, and degree of urbanization, according to our statistical analysis of the distribution of postal retail facilities nationwide. Specifically, we developed a regression model to determine the average number of post offices for counties of comparable population, land area, and degree of urbanization. To determine the extent of variation among counties, we compared the actual number of post offices in each county with the averages derived from our regression analysis. 
While our analysis was not intended to consider all relevant factors, including retail sales volumes or the capacity of postal retail facilities, it does explain almost 80 percent of the variation in the number of retail postal facilities between counties. According to our analysis, some counties have far fewer or far more postal retail facilities than other counties of comparable population, land area, and degree of urbanization. For example, Hoke County, North Carolina, has 1 retail postal facility, whereas the average comparable county has 10 such facilities. Conversely, Fayette County, Pennsylvania, has 63 postal retail facilities—over 400 percent more retail facilities than the average county of comparable population, land area, and degree of urbanization. The wide variability in the number of retail postal facilities in comparable counties suggests that access to postal services among comparable counties also varies. Such variation is inconsistent with the Postal Service’s Transformation Plan goal and the leading federal practice of aligning access to facilities with customer and service needs. Figure 8 shows the distribution of owned, leased, and contracted postal retail facilities by county based on population, land area, and degree of urbanization. Aware that some communities, particularly in growing areas, have insufficient access to postal retail services, the Postal Service established a goal in 2002 for increasing access to postal services through alternative options. The Postal Service favors alternative retail options over building new postal-operated facilities because the Postal Service does not incur construction or operating costs when providing services through these alternative access options. Thus, in its 2006 annual report, the Postal Service indicated it had contracts with almost 4,000 private operators to provide access to postal retail services. 
While contract postal units are not always located in underserved areas, our analysis shows that their presence increased access to postal services in hundreds of counties nationwide. Without contract postal units, these counties would have had fewer retail facilities than the average for counties of comparable population, land area, and degree of urbanization. For example, Butler County, Ohio, had 16 retail facilities operated by the Postal Service—7 fewer than the average for comparable counties. However, the Postal Service’s agreement to operate seven contracted facilities there brought the total number of retail postal facilities in the county to 23, which was the average for comparable counties. Figure 9 shows a variety of examples of privately owned businesses that contract to provide postal services. Other alternative retail access options include the Postal Service’s stamps on consignment program, which allows businesses, such as drug stores and grocery stores, to purchase stamps from the Postal Service and retain a share of the proceeds; package pickup and stamps sold online, which are available through the Postal Service’s Web site; and automated postal centers, which provide access to most postal services from 2,500 centers located in postal-operated retail facilities; some automated postal centers are available 24 hours a day, 7 days a week, as indicated by the signage for the automated center shown in figure 10. To improve its efforts to increase retail access for communities with insufficient access, the Postal Service is developing a model that will identify underserved areas—called the Model to Optimize Retail Effectiveness. According to Postal Service officials, the Postal Service developed the model in response to our 2004 recommendation that it develop criteria for making changes to its retail network and expects to finalize the model by the end of 2007. 
Postal Service officials also said the model will take several factors into consideration—including the relative location of competitors, costs, the extent of customer satisfaction, and population growth—in order to target areas that would benefit most from increased access to postal services. According to postal officials, FDB is an important source of information for this effort, which suggests FDB data reliability problems could adversely affect the model’s output. Postal officials said the local Postal Service employees would verify the model’s findings to ensure their accuracy. Once it identifies and verifies the underserved areas, the Postal Service could decide to, among other actions, increase facility hours, expand advertising of existing alternative access options, add automated postal centers, or contract with a private business to open a contract postal facility. While the Postal Service could also acquire new postal retail facilities, Postal Service officials said this is the least preferable option given the high costs of traditional postal facilities. To emphasize the importance of expanding alternative options for postal services, the Postal Service set a goal to accomplish at least 40 percent of its retail transactions through alternative access options by 2010. From 2002 through the end of 2006, the Postal Service closed 795 facilities after placing them on emergency suspension. The Postal Service places a facility on emergency suspension when, among other reasons, severe maintenance problems create health or safety risks to employees and customers that require the Postal Service to vacate the facility. Other reasons for an emergency suspension include the retirement or resignation of a community’s sole postal employee; a building owner’s decision not to renew the Postal Service’s lease; or a forced evacuation due to fire, flood, or other natural disaster. 
When a postal retail facility is placed on emergency suspension, the district has 90 days to decide whether to reopen, close, or consolidate the facility. However, the actual closure can take years. Specifically, 44 of the 159 facilities under emergency suspension that were slated for closure as of May 2007 had had their operations suspended for more than 5 years. In addition, it is not clear what criteria local managers apply when deciding whether to reopen or close a facility that is on emergency suspension. We visited one of these stations, the McKeesport Central Station in Pennsylvania, which had been on emergency suspension for over a year following a partial roof collapse in December 2005 (see fig. 11). The local postmaster told us the Postal Service planned to reopen the station once the landlord repaired the roof, even though the sole employee assigned to the station had been reassigned, customers had not complained about the lack of service, the station had low revenue, and the station is located approximately 1 mile from the McKeesport Main Post Office. In locations where demand for postal services is low (as measured by low revenue), the Postal Service sometimes chooses to reduce operating costs by cutting staffing to just one postal employee—an approach that could place postal employees at risk unless the Postal Service installs needed security upgrades. We visited some single-employee facilities that had not received upgrades the Postal Service identified as necessary to protect the safety of its employees and customers. For example, when staffing at the two postal stations we visited in Indiana was reduced to one employee, the Postal Service inspected the facilities and identified numerous security deficiencies at both. More than 6 months after the inspections, however, none of the identified security upgrades had been completed and several of the upgrades were listed as deferred.
The sole employee at another postal station we visited in Indiana complained in a 2001 letter to her supervisor about the lack of postal-identified security measures, such as security cameras, at her facility. Six years later, no actions have been taken to install additional security measures, such as a video-monitoring system or a pull-down gate to help secure the front register when the employee goes to other areas of the station. According to the manager of the station, who is located at a nearby postal retail facility, installing security systems at the station would increase costs—a result that would be incompatible with the goal of decreasing the station’s operating costs. In 2002, the Postal Service established a goal of reducing the number of “redundant, low-value” retail facilities in order to lower its operating costs. Establishing such a goal suggests that these facilities are less important to the Postal Service’s mission of providing universal access to postal services than other facilities. To implement this goal, in 2002, the Postal Service lifted a moratorium on closing retail postal facilities but has not (1) provided a definition for “redundant, low-value” retail facilities; (2) established a goal for their reduction; or (3) identified unneeded facilities for possible closure, including those with low revenue. According to Postal Service officials, pursuing retail facility reductions is difficult because of legal restrictions on and political pressures against closing retail facilities. For example, legal restrictions preclude the Postal Service from closing a small post office solely because it is operating at a deficit. To close a post office, the Postal Service is required to, among other things, formally announce its intention to close the facility, analyze the impact of the closure on the community, and solicit comments from the community. 
While the Postal Service closes some retail facilities placed on emergency suspension, its reliance on other factors, such as the loss of a lease or severe maintenance problems, to drive decisions about closing retail facilities is inconsistent with leading federal practices, which call for a targeted, criteria-based approach to closure decisions. More specifically, leading federal practices require applicable agencies to “rightsize” their facility holdings by, among other things, closing facilities that are (1) not critical to their mission, (2) in poor condition, (3) not fully utilized, or (4) costly to operate relative to their revenue. While considering these criteria is essential for rightsizing a facility network, the Postal Service cannot consider them because it does not capture the data needed to do so. During our review, we visited a number of postal facilities that appeared to merit consideration for closure based on one or more of these criteria. Furthermore, none of these facilities housed carriers or mail processing functions, and each had low sales and was located near other retail facilities. Our site visits included the following types of facilities in urban locations: Facilities that contribute little to the Postal Service’s mission of providing universal access to postal services. For example, Station C in downtown Dallas, Texas, provides access to postal services only for the people who work in the secured federal building, and the station is located just one-half mile from another retail facility in downtown Dallas. Consequently, Station C’s annual retail sales of about $282,000 in fiscal year 2007 ranked among the lowest in the Dallas District. Facilities in poor condition. We observed maintenance issues at a number of postal stations that do not appear critical to the Postal Service’s mission based on their low revenue and proximity to other retail facilities.
For example, the Postal Service recently renewed the lease for the Wilkinsburg Station in Pittsburgh, Pennsylvania, even though the facility’s façade is starting to pull loose from the building, the roof has numerous leaks, and sewage backs up throughout the facility’s plumbing system. Postal Service officials said the Postal Service did not consider closing the station before renewing the lease, even though it does not appear critical—as it earns below-average revenue and is located about 2 miles from another retail facility in Wilkinsburg. Facilities not fully utilized. One example is the Downtown Station in Fort Worth, Texas—a large, four-story building where the Postal Service conducts retail and carrier operations on the main floor but does not use the other floors (see fig. 7). Facilities that are costly to operate relative to their value. We visited several postal stations with annual sales below $200,000, which is approximately what the Postal Service requires from each of the 2,500 automated postal centers located within postal retail facilities. For example, the Gary Downtown Finance Station, Indiana, which is one of seven postal retail units operating in the city, had annual revenue of $163,000 in fiscal year 2006. Exclusively retail, this station does not support any mail delivery functions beyond some post office boxes. Facilities with high operating costs relative to their revenue. For example, the “Store of the Future” in the Pittsburgh International Airport, Pennsylvania (see fig. 12), costs more per square foot ($95) for the Postal Service to lease than any other facility in the Pittsburgh District, yet it has below-average sales revenue. Postal Service officials said the facility’s revenue is low because it is located behind airport security checkpoints. The Postal Accountability and Enhancement Act has significant near- and long-term financial implications for postal operations.
Consequently, it is imperative that the Postal Service manage its facilities as efficiently and cost-effectively as possible. However, the Postal Service cannot begin to overcome its facility management challenges until it improves the quality of its facility data. To date, FDB has failed to serve as a reliable system for inventorying postal facilities or for measuring their performance, and its weaknesses are so great that it cannot be reliably used for basic facility management purposes, such as the Postal Service’s annual reporting. While efforts are under way to improve FDB, it remains unreliable and unusable for measuring performance. Instead of solving the Postal Service’s problem of having multiple, overlapping databases, FDB appears to have compounded the problem by adding an additional data source to those already available. Postal officials with responsibility for facility maintenance at the national, area, district, and local levels said that the Postal Service has underfunded its maintenance for years and suspects that this underfunding has resulted in deteriorating facilities and a large maintenance backlog. However, the Postal Service does not know the extent of the problem because it has not comprehensively tracked and analyzed data on the condition of its facilities. This situation may be changing because the Postal Service has recognized the importance of compiling data for facility maintenance and has initiated a comprehensive facility assessment. Conducting a comprehensive facility assessment is a necessary first step toward improving the condition of postal facilities, but it will initially add tens of thousands of new maintenance projects to the Postal Service’s maintenance backlog. Funding constraints will require the Postal Service to take an incremental approach in order to reduce the backlog. The Postal Service will have to make difficult choices about what repairs to make and what repairs to defer. 
These choices would be easier if the performance data on the Postal Service’s facilities, such as a facility’s importance, utilization rate, or costs, were reliable. One way to minimize maintenance costs is to reduce the number of facilities that must be maintained. In locations where new services are needed, the Postal Service is developing alternative access options to avoid new facility costs, but it has not identified or closed unnecessary postal retail facilities. Moreover, although the Postal Service has set a goal of shifting 40 percent of its retail transactions to alternative access options, it has not set any similar targets for reducing its vast network of post offices, stations, and branches. Instead, it relies on emergency suspensions and staffing reductions to curtail operations at some facilities. However, this approach does not conform to leading federal practices because the closures are not linked to the facilities’ performance. In addition, the staffing reductions potentially place the remaining postal employees at risk. The expected increase in the use of alternative access options, combined with financial necessity, suggests the need to consider additional closures of brick-and-mortar postal facilities. To properly consider the closure opportunities, the Postal Service will need to know which retail facilities to retain and which facilities are no longer important to its retail mission. With improved facility data, the Postal Service could assess a facility’s importance, along with other relevant factors such as its utilization rate and condition, to identify closure possibilities and justify any closure decisions.
To improve the Postal Service’s management of its facilities, we are making the following six recommendations: To strengthen the reliability and usefulness of the Postal Service’s facility data, the Postmaster General should (1) direct the Vice President of Delivery and Retail to determine, in consultation with the Vice Presidents of Facilities and Intelligent Mail and Address Quality, whether it is more cost-effective to make FDB a reliable source for consolidated data on its facilities or to replace it with a new, more reliable database. If the Postal Service decides to retain FDB, it should (2) take steps to improve the database’s reliability and usefulness by establishing internal controls, such as edit checks, to preclude obvious mistakes. To conform to leading federal practices, the Postmaster General should direct the Vice President for Delivery and Retail to (3) measure facility management performance using measures consistent with the spirit of those developed by the Federal Council; and (4) begin tracking facility management performance trends. To improve facility management and reduce long-term facility costs, the Postmaster General should, consistent with leading federal practices, (5) direct the Vice President for Facilities to prioritize maintenance projects based on a facility’s overall performance, including measures consistent with the spirit of those developed by the Federal Council; and (6) direct the Vice President for Delivery and Retail to institute a proactive, criteria-based approach to assist in identifying unneeded retail facilities for possible closure as part of the June 2008 facility plan required by the Postal Accountability and Enhancement Act. The Postal Service provided its written comments on a draft of this report by letter dated November 19, 2007. These comments are summarized below and are included, in their entirety, as appendix III to this report.
The Postal Service agreed with our two recommendations to improve FDB’s reliability; agreed, in principle, with our recommendation to prioritize maintenance projects based on a facility’s overall performance; but disagreed with our three remaining recommendations. In separate correspondence, the Postal Service also provided minor technical comments, which we incorporated as appropriate. In agreeing with our two recommendations regarding FDB, the Postal Service indicated that it had already determined that it is in its best interest and more cost-effective to retain FDB rather than to replace it with a new, more reliable database. In addition, while the Postal Service believes FDB is, as a whole, very reliable, it agreed to establish additional controls to improve the database’s reliability and usefulness. Furthermore, although the Postal Service stated that it prioritizes maintenance projects adequately, it agreed, in principle, with our recommendation to prioritize maintenance projects based on a facility’s overall performance. The Postal Service also reiterated actions it has taken, including the initiation of its Facility Condition Assessment program, to better identify and prioritize the agency’s maintenance workload and budget. The Postal Service disagreed with our three remaining recommendations, including our two recommendations to conform to leading federal practices by measuring facility management performance and tracking performance trends. Regarding these two recommendations, the Postal Service noted that it considers leading federal practices developed by the Federal Council but reiterated that it is not required to adopt them. The Postal Service also noted that its mandate to provide universal mail service on an almost daily basis and its provision of services and products that are in direct competition with the private sector pose unique challenges for the Postal Service. Thus, the agency indicated that it had concerns about adopting leading federal practices.
We continue to believe that measuring and tracking performance in such areas as a facility’s importance, utilization, condition, and operating costs are critical to effective facility management. We also recognize the uniqueness of the Postal Service’s business and mission. Consequently, we revised these recommendations to emphasize that the Postal Service should develop and track performance measures that may better meet its needs—as long as the measures are consistent with the spirit of those developed by the Federal Council. Finally, the Postal Service disagreed with our recommendation to institute a criteria-based approach to identify and close unneeded retail facilities as part of the June 2008 facility plan required by the Postal Accountability and Enhancement Act. The Postal Service stated that it had purposely removed mention of retail facility closures from its 2005 update of its Transformation Plan due to its difficulty in establishing criteria that could be applied to all retail locations. Instead, the Postal Service indicated that it intends to continue to assess its retail facilities on a case-by-case basis. We agree that developing criteria for identifying unneeded retail facilities is difficult and that such criteria cannot be uniformly applied to the universe of all postal retail facilities. As a result, we clarified our recommendation to indicate that the results of a criteria-based approach would “assist” the Postal Service in identifying candidate retail facilities for possible closure. The Postal Service further indicated that the focus of its forthcoming facility plan is on its mail processing network, not its retail facilities. 
While the Postal Service does not currently envision developing such a criteria-based approach as part of its congressionally required facility plan, we continue to believe that doing so would provide an excellent opportunity for the agency to begin pursuing its 2002 goal of identifying and closing redundant, low-value retail facilities. We are sending this report to the congressional requesters and their staffs. We are also sending copies to the Postmaster General and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at siggerudk@gao.gov or (202) 512-2834. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In 2004, the President signed Executive Order 13327: Federal Real Property Asset Management in response to our designation of federal real property as a governmentwide high-risk area. The executive order and the initiative to implement the order were intended to ensure that agencies maintain facility inventories appropriate to their needs, cost, and physical condition to support each agency’s mission and objectives. This order applies to 24 executive departments and agencies, but not to the Postal Service. The executive order established the Federal Real Property Council (Federal Council), which includes representatives from the Office of Management and Budget and applicable agencies. The Federal Council develops guidance, serves as a clearinghouse for leading federal practices, and facilitates agency efforts to implement the executive order. Although not bound by the executive order, the Postal Service has voluntarily adopted some of the Federal Council’s leading practices.
Specifically, the Postal Service submits some data to the General Services Administration for its annual real property inventory and has designated an individual to represent the Postal Service on the Federal Council. The Federal Council has developed guidance that, among other actions, requires applicable agencies to collect 24 data elements for transmission to the General Services Administration to use in compiling an annual inventory of federal facilities. This data collection is intended to, among other things, provide agencies with data needed to assess their performance based on four key performance measures and determine whether their facilities are properly aligned with customer and service needs. The following four performance measures are intended to help agencies determine if the continued use of each of their facilities is justified based on their

1. importance to achieving the agency’s mission;
2. utilization rate (extent to which the facility is used);
3. physical condition, as measured by a “condition index”; and
4. annual operating costs, including the recurring maintenance and repair costs for each facility in an agency’s inventory.

Our overall objectives were to identify the Postal Service’s efforts to address three key facility challenges related to (1) capturing and maintaining accurate facility data, (2) adequately maintaining postal facilities, and (3) aligning access to retail services with customer needs. Specifically, this report identifies the Postal Service’s goals and actions for managing each of these challenges and assesses its progress in overcoming the challenges and, as applicable, in implementing leading federal practices. To address these objectives, we visited 58 postal facilities and 4 contract postal facilities in nine districts covering three of the Postal Service’s nine areas. The observations derived from our site visits cannot be generalized to the population of postal facilities nationwide.
We chose these locations to achieve geographic balance and to include areas with growing and declining populations. We chose the specific facilities within these areas to achieve a range of facility types, revenues, conditions (indicated by the number of open maintenance requests in the Postal Service’s maintenance tracking system), and sizes. This selection process led us to visit each of the following types of postal facilities: main post office, postal station, postal branch, contract postal facility, carrier annex, processing and distribution center, air mail center, bulk mail processing center, vehicle maintenance facility, and administrative facility. We toured each facility, interviewed local employees, and collected data on facility operations. Table 1 identifies the specific areas, districts, and cities we visited. To assess the Postal Service’s progress in overcoming the challenge of capturing and maintaining accurate facility data, we analyzed all facility entries (over 36,000) contained in the Facility Database (FDB) as of October 5, 2006, and assessed the reliability of the data it contains. To assess the reliability of FDB data, we (1) interviewed Postal Service officials to obtain an understanding of the data, the database structure, the sources of the data, known issues or limitations in the database, and relevant Postal Service documentation; (2) reviewed related reports by the Postal Service Office of Inspector General (Postal OIG); and (3) performed electronic testing of the database for completeness, obvious errors, and inconsistencies. We also attempted to verify the data in FDB during our 58 facility visits. Prior to our visits, we identified potential data reliability problems and, during our visits, documented information related to the existence and causes of those problems.
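Electronic testing of the kind described above—screening records for completeness, obvious errors, and internal inconsistencies—can be sketched as a set of record-level edit checks. The field names and validation rules below are assumptions made for illustration; they are not the actual FDB schema or GAO's test suite.

```python
# Illustrative edit checks for facility records. Field names and thresholds
# are hypothetical, not the actual FDB schema.
def check_record(rec: dict) -> list[str]:
    problems = []
    # Completeness: required fields present and non-empty
    for field in ("facility_id", "address", "state", "interior_sqft"):
        if not rec.get(field):
            problems.append(f"missing {field}")
    # Obvious errors: implausible values
    sqft = rec.get("interior_sqft")
    if isinstance(sqft, (int, float)) and not (100 <= sqft <= 5_000_000):
        problems.append("implausible interior_sqft")
    # Inconsistency: an owned facility should not carry a lease expiration
    if rec.get("ownership") == "owned" and rec.get("lease_expiration"):
        problems.append("owned facility has lease_expiration")
    return problems

sample = {"facility_id": "TX-0001", "address": "", "state": "TX",
          "interior_sqft": 12, "ownership": "owned",
          "lease_expiration": "2009-06-30"}
print(check_record(sample))
# → ['missing address', 'implausible interior_sqft',
#    'owned facility has lease_expiration']
```

Checks like these are the "internal controls, such as edit checks" idea in miniature: run at data entry, they would preclude obvious mistakes before they propagate to downstream systems.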
In addition, we corroborated our observations on FDB reliability issues through interviews with and documentation obtained from postal officials in the Delivery and Retail department. To determine the reasons for the data reliability issues in FDB, we interviewed Postal Service personnel in the Delivery and Retail department responsible for managing the facilities data, local postal officials, and postal officials from the Postal OIG. We also reviewed key Postal Service documents related to the facilities data, including the FDB user’s guide, the Postal Service’s 2002 and 2006 Transformation Plans, and reports issued by the Postal OIG and GAO. Our analysis was based principally on the FDB data as of October 5, 2006, but we also analyzed FDB data obtained on July 7, 2007, in part, to assess the impact of the Postal Service’s efforts to improve FDB since October 6, 2006. To identify examples of leading federal practices related to our objectives, we reviewed Executive Order 13327: Federal Real Property Asset Management, related documentation from the Federal Council, and previous GAO reports. We also interviewed a Federal Council official and attended a Federal Real Property Association conference in 2006 that focused on the implementation of the executive order. To assess the Postal Service’s progress in overcoming the challenge of maintaining its facilities in adequate condition, we analyzed data drawn from the Postal Service’s maintenance tracking system (the Facility Single Source Provider) as of March 7, 2007. We assessed the reliability of the data by (1) interviewing knowledgeable Postal Service officials and (2) validating the accuracy of the data at the facilities we visited. We also observed conditions at the 58 facilities we visited and interviewed local postal officials. 
Additionally, we analyzed key Postal Service documents related to facility maintenance, including the Postal Service’s Strategic Transformation Plan for 2006 through 2010, presentations related to its facility assessment effort, and an audit report issued by the Postal OIG. To assess the Postal Service’s progress in overcoming the challenge of aligning access to retail services with customer needs, we assessed the variance in access to retail postal facilities among similar counties using an ordinary-least squares regression model. We designed the model to predict the number of postal retail facilities (main post offices, stations, branches, and contract units) in each U.S. county based on each county’s population (in 2000), land area (in square miles), and degree of urbanization (as defined by the U.S. Census Bureau). First, we determined the number of retail facilities in each county by conducting a unique address analysis, which counted only one retail facility at each address within FDB to avoid counting the same retail facilities more than once. The regression coefficients from the regression model are presented in table 2. The R-squared statistic, an indicator of model fit or explanatory power, was 0.79. This means the model explains 79 percent of the variation in the number of post offices between counties. We performed various model specification and diagnostic tests to determine the appropriateness of our model specifications (e.g., interaction effects and degree of urbanization as dummy variables), as well as to determine the existence and influence of outliers. We identified two potential outliers, but did not eliminate them from our analysis because they did not materially affect our results. To determine the variation in the number of postal retail facilities between counties, we examined the residuals from the ordinary-least squares regression model. 
The residuals consist of the difference between the actual number of postal retail facilities and the number predicted by the model. The number predicted by the model can be considered the average number of facilities offering postal retail services in counties of similar population, land area, and degree of urbanization because the model, in deriving the estimates, considers these factors across all 3,213 counties in the United States and Puerto Rico. We then rank ordered the counties on the difference between the actual and the predicted number of postal retail facilities from low to high (i.e., from fewer than predicted to more than predicted). For the map in figure 7, we considered the top 10 percent of counties in this rank order as having more post offices than comparably sized and populated counties and the bottom 10 percent of counties as having fewer post offices than comparable counties. We limited this portion of our analysis to counties with an urban center of at least 10,000 people—defined as metropolitan and micropolitan areas—because the Postal Service may need to maintain more postal retail facilities in these areas to fulfill its mission of providing universal access to postal services. We interviewed postal officials at the facility, district, area, and headquarters levels about how retail facilities are aligned with customer needs and obtained documentation relating to alternative service options and emergency suspensions that resulted in facility closures. We also reviewed key Postal Service documents, including the policies and procedures for closing postal facilities, federal regulations governing postal facility closures, the Postal Service’s 2002 and 2005 Transformation Plans, audit reports issued by the Postal OIG, and previous GAO reports. We conducted our work from July 2006 through December 2007 in accordance with generally accepted government auditing standards. 
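The regression and residual-ranking analysis described above can be sketched as follows. This is a minimal illustration using hypothetical county data (county names, populations, and facility counts are invented); the actual analysis covered all 3,213 counties and used Census urbanization categories as dummy variables.

```python
# A minimal sketch (hypothetical data, assumed for illustration) of the
# county-level analysis: fit an ordinary-least-squares model predicting the
# number of retail facilities from population, land area, and an urbanization
# indicator, then rank counties by residual (actual minus predicted).
import numpy as np

# Hypothetical counties: (population, land area in sq. miles,
# urban dummy: 1 = metropolitan, 0 = not, observed retail facilities).
counties = {
    "A": (50_000, 400, 1, 12),
    "B": (50_000, 400, 1, 5),   # same predictors as A, but fewer facilities
    "C": (10_000, 900, 0, 8),
    "D": (12_000, 850, 0, 7),
    "E": (80_000, 300, 1, 15),
    "F": (5_000, 1_200, 0, 4),
    "G": (30_000, 600, 1, 10),
    "H": (20_000, 700, 0, 9),
}

names = list(counties)
# Design matrix with an intercept column, plus the outcome vector.
X = np.array([[1.0, pop, area, urban] for pop, area, urban, _ in counties.values()])
y = np.array([n for *_, n in counties.values()], dtype=float)

# Fit by least squares; the prediction is the model's estimate for counties
# with those characteristics, and residual = actual - predicted.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Rank from fewer-than-predicted to more-than-predicted facilities; the
# bottom and top deciles of such a ranking flag counties with fewer or more
# facilities than comparable counties.
ranked = sorted(zip(names, residuals), key=lambda item: item[1])
```

In this sketch, counties A and B have identical predictors, so they receive the same predicted value; B’s lower observed count gives it the more negative residual, placing it toward the fewer-than-predicted end of the ranking.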
The following are GAO’s comments on the Postal Service’s letter dated November 19, 2007.
1. The Postal Service stated that FDB is a repository of facility information and that it was never intended to be a single, consolidated source for information on facilities. The agency’s September 2006 FDB briefing to us appears to dispute this point. At that time, postal officials indicated that the agency had intended to create “one source of information for tracking all aspects of our facilities” to address, among other matters, the high cost of maintaining inaccurate and redundant databases. Notwithstanding our prior understanding, we clarified our report, indicating that FDB was intended to be a “consolidated” source for accurate facility data—not a “single,” consolidated data source.
2. The Postal Service questioned a statement in our report which indicated that several internal (postal) organizations “do not use FDB for aggregate facility information, partly because of concerns about its reliability.” More specifically, the agency inquired whether we had drawn this conclusion on the basis of the various data sources used to compile the Postal Service’s 2006 annual report. While the variety of data sources used in the annual report—none of which were exclusively FDB—is illustrative of our point, our statement was principally based on interviews with numerous officials throughout the Postal Service’s major departments.
3. The Postal Service correctly noted that, after researching multiple data sources, its staff was able to provide explanations for specific examples of data reliability issues—including reasons for identified mistakes—that we brought to their attention. Although these explanations appeared plausible, as discussed in our report, without additional site visits, we cannot determine whether their explanations were accurate.
Most importantly, however, regardless of whether the identified mistakes could be explained using other data sources, the fact remains that FDB contains errors that need to be corrected if it is to be retained and used as a reliable source of agency data.
4. The Postal Service indicated that it would benefit tremendously if we shared information on all of the specific examples of data reliability issues we used in this report. We appreciate the Postal Service’s desire to improve the reliability and usefulness of FDB and, therefore, suggest that the Postal Service replicate our statistical analyses on current data by, among other standard analytical methods, sorting its current FDB data according to each facility’s name, address, and type in order to identify duplicate facility entries. If needed, we could also meet to further discuss our analytical methods for analyzing the Postal Service’s data.
5. As discussed in the body of this report, we continue to believe that measuring and tracking performance in such areas as a facility’s importance, utilization, condition, and operating costs are critical to effective facility management. Nevertheless, because we also recognize that the Postal Service’s business and mission are unique, we revised these recommendations to emphasize that the Postal Service should develop performance measures that may better meet its needs—as long as the measures are consistent with the spirit of those developed by the Federal Council.
6. The Postal Service disagreed with our statement that it focuses “solely on urgent repairs at the expense of routine, preventive maintenance.” While we removed “solely” from the applicable sentence in this report, this statement was based on evidence obtained during interviews with numerous postal employees involved with facility maintenance at the national, area, and local levels.
All of these individuals stated that resource constraints have forced the Postal Service to focus on urgent repairs and to defer routine or preventive maintenance projects. Furthermore, the interviewees’ comments were consistent with findings in a prior Postal OIG review, which found that insufficient budgeting for repairs and maintenance may be hampering the Postal Service’s ability to proactively manage its maintenance.
7. As discussed in the body of this report, we agree that developing criteria for identifying unneeded retail facilities is difficult and that such criteria cannot be uniformly applied to the universe of all postal retail facilities. As a result, we clarified our recommendation to indicate that the results of a criteria-based approach would “assist” in identifying candidate retail facilities for closure. Furthermore, although the Postal Service does not currently envision developing such a criteria-based approach as part of its congressionally required facility plan, we continue to believe that doing so would provide an excellent opportunity for the agency to begin pursuing its 2002 goal of identifying and closing redundant, low-value retail facilities.
8. We did not intend to imply that, in order to be viable, a facility must meet the minimum revenue threshold established for the Postal Service’s Automated Postal Centers. As discussed in our report, this threshold was $198,000 in fiscal year 2007. We included information on this threshold simply as context for the amount of revenue generated by some of the postal retail facilities we visited—all of which were in urban areas. While we agree that the threshold the Postal Service applied to the deployment of automated postal centers is not necessarily applicable to its retail facilities, the Postal Service has not established revenue-based or any other criteria for analyzing the performance of its retail facilities.
The absence of such action is among the reasons we are recommending that the Postal Service institute a proactive, criteria-based approach to help identify and close unneeded retail facilities. In addition to the individual named above, Kathleen Turner, Assistant Director; Michael Armes; Richard Bakewell; Keith Cunningham; Bess Eisenstadt; Kathy Gilhooly; Brandon Haller; Anne Izod; Dorothy Yee Leggett; Josh Ormond; and Minette Richardson made key contributions to this report.
Continued financial challenges and increased competition call for the U.S. Postal Service to manage its 34,000 facilities as efficiently and cost-effectively as possible. GAO and others have identified key facility management challenges, including the need to (1) capture and maintain accurate facility data, (2) adequately maintain facilities, and (3) align retail access with customer needs. This report assesses Postal Service efforts to overcome these challenges and implement leading federal practices. To conduct this study, GAO analyzed postal data and documents, visited 58 facilities, and interviewed postal officials. To address the challenge of capturing and maintaining accurate facility management data, the Postal Service developed the Facility Database, but the database does not conform to the Postal Service's goals or to leading federal practices; specifically, it does not include data needed to measure performance on managing facilities or have the capacity to track such data over time. Further, a database analysis by GAO revealed data reliability problems, including duplicative and contradictory data. In addition, major Postal Service departments do not use the database as a consolidated data source for managing postal facilities. The Postal Service has attempted to improve the database, but many problems remain. To address the challenge of maintaining its facilities, the Postal Service has begun assessing the condition of the facilities but has neither determined the extent of its maintenance projects nor strategically prioritized the projects. A Postal Service inspection of 651 randomly selected postal facilities revealed that two-thirds were in less than "acceptable" condition, but the Postal Service had not documented the full extent of its maintenance projects backlog. After the inspection, the Postal Service initiated a program to assess the condition of all of its facilities--a necessary first step to improving their condition. 
In addition, the Postal Service lacks the data needed to implement leading federal practices, such as considering a facility's importance and value when prioritizing its maintenance projects. Due to funding constraints, the Postal Service currently focuses exclusively on emergency and urgent repairs--at the expense of a less costly preventive maintenance approach. To address the challenge of aligning access to postal retail services with customer needs, the Postal Service has expanded access in underserved areas but has done less to address overserved areas. Leading federal practices identify criteria for "rightsizing" facility networks--such as considering facilities' importance and utilization--but the Postal Service does not consider these criteria. GAO's analysis shows wide variation in the number of postal retail facilities among comparable counties, and a number of facilities GAO visited appeared to merit consideration for closure based on one or more of the federal criteria. If the Postal Service begins collecting data that reflects criteria based on leading federal practices, it may be able to close facilities and adjust access to retail services according to customer needs.
Since 2005, there have been several efforts to inventory federal STEM education programs, and several reports have called for better coordination and evaluation of STEM education programs. In 2005, for example, GAO identified a multitude of agencies that administer such programs. The primary missions of these agencies vary, but most often they are to promote and enhance an area that is related to a STEM field or to enhance general education. In addition, the National Science and Technology Council (NSTC), established in 1993, is the principal means for the administration to coordinate science and technology with the federal government’s larger research and development effort. The America COMPETES Reauthorization Act of 2010 sought to address coordination and oversight issues, including those associated with the coordination and potential duplication of federal STEM education efforts. Specifically, the law required the Director of the Office of Science and Technology Policy (OSTP) to establish a committee under the NSTC to inventory, review, and coordinate federal STEM education programs. The law also directed this NSTC committee to develop a 5-year governmentwide STEM education strategic plan, which must specify and prioritize annual and long-term objectives for STEM education. Moreover, the Director of OSTP is required to send a report to Congress annually on this strategic plan, which must include, among other things, an evaluation of the levels of duplication and fragmentation of STEM programs and activities.
In our January 2012 report on STEM education, we defined a federally funded STEM education program as a program funded in fiscal year 2010 by congressional appropriation or allocation that included one or more of the following as a primary objective: attract or prepare students to pursue classes or coursework in STEM areas through formal or informal education activities; attract students to pursue degrees (2-year, 4-year, graduate, or doctoral degrees) in STEM fields through formal or informal education activities; provide training opportunities for undergraduate or graduate students; attract graduates to pursue careers in STEM fields; improve teacher (preservice or in-service) education in STEM areas; improve or expand the capacity of K-12 schools or postsecondary institutions to promote or foster education in STEM fields; or conduct research to enhance the quality of STEM education programs provided to students. In addition, a program was defined as an organized set of activities supported by a congressional appropriation or allocation. Further, we defined a program as a single program even when its funds were allocated to other programs as well. We asked agency officials to provide a list of programs that received funds in fiscal year 2010. In our January 2012 report, we examined the extent to which federal STEM education programs were fragmented, overlapping, and duplicative (GAO-12-108). Using the framework established in our previous fragmentation, overlap, and duplication work, key terms were defined as follows: Fragmentation occurs when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need. Overlap occurs when multiple programs offer similar services to similar target groups in similar STEM fields to achieve similar objectives. Duplication occurs when multiple programs offer the same services to the same target beneficiaries in the same STEM fields.
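The distinction between overlap and duplication defined above can be sketched as a pairwise set comparison. The following is a minimal illustration with hypothetical program records (the names and attributes are invented), not the survey instrument GAO actually used: programs overlap when they share at least one service, one target group, and one STEM field; they duplicate each other only when all three sets are identical.

```python
# Hypothetical program records illustrating the definitions: programs overlap
# when they share at least one service, target group, and STEM field; they
# are duplicative only when all three sets are identical.
programs = {
    "Program A": {"services": {"scholarships"},
                  "groups":   {"postsecondary students"},
                  "fields":   {"physics"}},
    "Program B": {"services": {"scholarships", "mentorships"},
                  "groups":   {"postsecondary students"},
                  "fields":   {"physics", "engineering"}},
    "Program C": {"services": {"teacher professional development"},
                  "groups":   {"K-12 teachers"},
                  "fields":   {"biology"}},
}

KEYS = ("services", "groups", "fields")

def overlaps(p, q):
    # At least one shared element in every dimension.
    return all(p[k] & q[k] for k in KEYS)

def duplicative(p, q):
    # Same services to the same beneficiaries in the same fields.
    return all(p[k] == q[k] for k in KEYS)

names = sorted(programs)
overlapping_pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
                     if overlaps(programs[a], programs[b])]
```

Here Programs A and B overlap (both offer scholarships to postsecondary students in physics) but are not duplicative, because B also offers mentorships and covers engineering, which mirrors why overlapping programs in the report are not necessarily duplicative.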
As we reported in 2012, 13 agencies administered 209 STEM education programs in fiscal year 2010. Agencies reported that they developed the majority (130) of these programs through their general statutory authority and that Congress specifically directed agencies to create 59 of these programs. The number of programs each agency administered ranged from 3 to 46, with three agencies—the Department of Health and Human Services, the Department of Energy, and the National Science Foundation (NSF)—administering more than half of all programs (112 of 209). (See fig. 1.) Agencies obligated over $3 billion to STEM education programs in fiscal year 2010, ranging from $15,000 to hundreds of millions of dollars per program. NSF and the Department of Education programs accounted for over half of this funding. Almost a third of the programs had obligations of $1 million or less, with five programs having obligations of more than $100 million each. Beyond the 209 programs identified in our review, federal agencies carried out other activities that contribute to the overall federal STEM education effort. Having multiple agencies, with varying expertise, involved in delivering STEM education has both advantages and disadvantages. On the one hand, it could allow agencies to tailor programs to suit their specific missions and needs and to attract new employees to their workforces. On the other hand, it could make it challenging to develop a coherent federal approach to educating STEM students and creating a workforce with STEM skills. Further, it could make it difficult to identify gaps and allocate resources across the federal government. As we reported in 2012, and as figure 2 illustrates, in fiscal year 2010, 83 percent of STEM education programs overlapped to some degree with another program in that they offered at least one similar service to at least one similar target group in at least one similar STEM field to achieve at least one similar objective.
These programs ranged from being narrowly focused on a specific group or field of study to offering a range of services to students and teachers across STEM fields. This complicated patchwork of overlapping programs has largely resulted from federal efforts to both create and expand programs across many agencies in an effort to improve STEM education and increase the number of students going into STEM fields. Program officials reported that approximately one-third of STEM education programs funded in fiscal year 2010 were first funded between 2005 and 2010. We believe the creation of new programs during that time frame may have contributed to overlap and, ultimately, to inefficiencies in how STEM programs across the federal government are focused and delivered. Overlap among STEM education programs is not new. In 2007, the Academic Competitiveness Council (ACC) identified extensive overlap among STEM education programs, and, in 2009, we identified overlap among teacher quality programs, which include several programs focused on STEM education. Overlapping programs can lead to individuals and institutions being eligible for similar services in similar STEM fields offered through multiple programs and, without information sharing, could lead to the same service being provided to the same individual or institution. Our analysis found that many programs provided services to similar target groups, such as K-12 students, postsecondary students, K-12 teachers, and college faculty and staff. The vast majority of programs (170) served postsecondary students. Ninety-five programs served college faculty and staff, 75 programs served K-12 students, and 70 programs served K-12 teachers. In addition, many programs served multiple target groups. In fact, 177 programs primarily served two or more target groups. We also found many STEM programs providing similar services. 
To support students, 167 different programs provided research opportunities, internships, mentorships, or career guidance. In addition, 144 programs provided short-term experiential learning opportunities and 127 provided long-term experiential learning opportunities. Short-term experiential learning activities included field trips, guest speakers, workshops, and summer camps; long-term experiential learning activities lasted a semester or longer. Furthermore, 137 programs provided outreach and recognition to generate student interest, 124 provided classroom instruction, and 75 provided student scholarships or fellowships. To support teachers, 115 programs provided curriculum development; 83 programs provided teacher in-service, professional development, or retention activities; and 52 programs provided preservice or recruitment activities. To support STEM research, 68 programs reported conducting research to enhance the quality of STEM education. To support institutions, 65 programs provided support for management and administrative activities, and 46 programs provided support for expanding the facilities, classrooms, and other physical infrastructure of institutions. In addition to serving multiple target groups, our analysis found that most programs also provided services in multiple STEM fields. Twenty-three programs targeted one specific STEM field, while 121 programs targeted four or more specific STEM fields. In addition, 26 programs indicated not focusing on any specific STEM field and instead provided services eligible for use in any STEM field. Five different STEM fields had over 100 programs that provided services. Biological sciences and technology were the STEM fields that programs most often focused on. Agricultural sciences, the least commonly selected field, still had 27 programs that provided services specifically to that STEM field.
While our 2011 survey data also show that many programs overlapped, it is important to compare programs’ target groups and STEM fields of focus to get a better picture of the potential target beneficiaries that could be served within a given STEM discipline. For example, both the National Oceanic and Atmospheric Administration’s National Environmental Satellite, Data, and Information Service (NESDIS) Education program and the Department of Energy’s Graduate Automotive Technology Education program provided scholarships or fellowships to postsecondary students, but NESDIS focused on students in earth sciences programs and the Energy program on engineering; therefore, the target beneficiaries served by these two similar programs are quite different. Nevertheless, we found that 76 programs served postsecondary students in physics. As table 1 illustrates, many programs offered services to similar target groups in similar STEM fields of focus. We also found that many STEM education programs had similar objectives. In response to our 2011 survey, the vast majority (87 percent) of STEM education program officials indicated that attracting and preparing students throughout their academic careers in STEM areas was a primary objective. In addition to attracting and preparing students throughout their academic careers in STEM areas, officials also indicated the following primary program objectives: improving teacher education in STEM areas (teacher development)—26 percent; improving or expanding the capacity of K-12 schools or postsecondary institutions to promote or foster education in STEM fields (institution capacity building)—24 percent; and conducting research to enhance the quality of STEM education provided to students (STEM education research)—18 percent. Many programs also reported having multiple primary objectives.
While 107 programs focused solely on student education, 82 others indicated having multiple primary objectives, and 9 programs reported having 4 or more primary objectives. Few programs reported focusing solely on teacher development, institution capacity building, or STEM education research. Most of these objectives were part of a larger program that also focused on attracting and preparing students in STEM education. However, even when programs overlapped, we found that the services they provided and the populations they served may differ in meaningful ways and would therefore not necessarily be duplicative. There may be important differences between the specific field(s) of focus and a program’s stated goals. For example, both Commerce’s National Estuarine Research Reserve System Education Program and the Nuclear Regulatory Commission’s Integrated University Program provided scholarships or fellowships to doctoral students in the field of physics; however, the Commerce program focuses on increasing environmental literacy related to estuaries and coastal watersheds, while the Nuclear Regulatory Commission program focuses on supporting education in nuclear science, engineering, and related fields with the goal of developing a workforce capable of designing, constructing, operating, and regulating nuclear facilities and capable of handling nuclear materials safely. In addition, programs may be primarily intended to serve different specific populations within a given target group. For example, of the 34 programs that we surveyed in 2011 that provided services to K-12 students in the field of technology, 10 were primarily intended to serve specific underrepresented, minority, or disadvantaged groups, and 2 were limited geographically to individual cities or universities. 
Furthermore, individuals may receive assistance from different programs at different points throughout their academic careers that provide services that complement or build upon each other, simultaneously supporting a common goal rather than serving cross purposes. In 2012, we reported that in addition to the fragmented and overlapping nature of federal STEM education programs, agencies’ limited use of performance measures and evaluations may hamper their ability to assess the effectiveness of their individual programs as well as the overall STEM education effort. Understanding program performance and effectiveness is key in determining where to strategically invest limited federal funds to achieve the greatest impact in developing a pipeline of future workers in STEM fields. Program officials varied in their ability to provide reliable output measures—for example, the number of students, teachers, or institutions directly served by their program. In some cases, the program’s agency did not maintain databases or contracts that would track the number of students served by the program. In other cases, programs may not have been able to provide information on the numbers of institutions they served because they provided grants to secondary recipients. In 2012, we reported that the inconsistent collection of output measures across programs makes it challenging to aggregate the number of students, teachers, and institutions served and to assess the effectiveness of the overall federal effort. In addition, most agencies did not use outcome measures in a way that is clearly reflected in their performance plans and reports—publicly available documents they use for performance planning. These documents typically lay out agency performance goals that establish the level of performance to be achieved by program activities during a given fiscal year, the measures developed to track progress, and what progress has been made toward meeting those performance goals. 
The lack of performance outcome measures may hinder decision makers’ ability to assess how agencies’ STEM education efforts contribute to agencywide performance goals and the overall federal STEM effort. For our 2012 report, we reviewed fiscal year 2010 annual performance plans and reports of the 13 agencies with STEM programs and found that most agencies did not connect STEM education activities to agency goals or measure and report on the progress of those activities. We define “evaluation” as an individual systematic study conducted periodically or on an ad hoc basis to assess how well a program is working, typically relative to its program objectives. In our January 2012 report, we made four recommendations to the Director of OSTP to direct the NSTC to:
1. Work with agencies, through its strategic-planning process, to identify programs that might be candidates for consolidation or elimination, which could be identified through an analysis that includes information on program overlap and program effectiveness. As part of this effort, OSTP should work with agency officials to identify and report any changes in statutory authority necessary to execute each specific program consolidation identified by NSTC’s strategic plan.
2. Develop guidance to help agencies determine the types of evaluations that may be feasible and appropriate for different types of STEM education programs and develop a mechanism for sharing this information across agencies. This step could include guidance and sharing of information that outlines practices for evaluating similar types of programs.
3. Develop guidance for how agencies can better incorporate each agency’s STEM education efforts and the goals from NSTC’s 5-year STEM education strategic plan into each agency’s own performance plans and reports.
4. Develop a framework for how agencies will be monitored to ensure that they are collecting and reporting on NSTC strategic plan goals.
This framework should include alternatives for a sustained focus on monitoring coordination of STEM programs if the NSTC Committee on STEM terminates in 2015 as called for in its charter. OSTP agreed with our conclusions and, as figure 4 shows, NSTC has made some progress in addressing recommendations from our January 2012 report. Subsequently, OSTP has stated that NSTC’s 5-Year Federal STEM Education Strategic Plan, originally scheduled to be released in spring 2012, would address our recommendations; however, the release of NSTC’s Strategic Plan has been delayed. In February 2012, NSTC published Coordinating Federal Science, Technology, Engineering, and Mathematics (STEM) Education Investments: Progress Report, which identified a number of programs that could be eliminated in fiscal year 2013. By identifying programs for consolidation, elimination, and other actions, the administration could increase the efficient use of scarce government resources to achieve the greatest impact in developing a pipeline of future workers in STEM fields. Although NSTC said it planned to create a small working group to develop guidance on the appropriateness of different types of evaluations for different types of STEM education programs, OSTP has not released the findings of this working group. Agency and program officials would benefit from guidance and information sharing within and across agencies about what is working and how to best evaluate programs. This could help improve individual program performance and also inform agency and governmentwide decisions about which programs should continue to be funded. We continue to believe that without an understanding of what is working in some programs, it will be difficult to develop a clear strategy for how to spend limited federal funds. 
In addition, STEM education was named as an interim crosscutting priority goal in the President’s 2013 budget submission; however, it will be important for NSTC to finalize its strategic plan, which should include guidance for how agencies can better align their performance plans and reports to new governmentwide goals. Although OSTP agreed to develop milestones and metrics to track the implementation of NSTC strategic goals by each agency, it has not taken action to develop a framework for how agencies will be monitored to ensure that they are collecting and reporting on NSTC strategic plan goals. A framework for monitoring agency progress towards NSTC’s strategic plan is necessary to improve transparency and strengthen accountability of NSTC’s strategic planning and coordination efforts. In conclusion, if NSTC’s 5-year strategic plan is not developed in a way that aligns agencies’ efforts to achieve governmentwide goals, enhances the federal government’s ability to assess what works, and concentrates resources on those programs that advance the strategy, the federal government may spend limited funds in an inefficient and ineffective manner that does not best help to improve the nation’s global competitiveness. Chairman Rokita, Ranking Member McCarthy, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact George A. Scott at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include: Bill Keller, Assistant Director; Susan Baxter; James Bennett; Karen Brown; David Chrisinger; Melinda Cordero; Elizabeth Curda; Karen Febey; Jill Lacey; Ben Licht; Dan Meyer; Amy Radovich; James Rebbe; Nyree Ryder Tee; Martin Scire; Ryan Siegel; and Walter Vance. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
STEM education programs help to enhance the nation's global competitiveness. Many federal agencies have been involved in administering these programs. Concerns have been raised about the overall effectiveness and efficiency of STEM education programs. This testimony discusses (1) the number of federal agencies and programs that provided funding for STEM education programs in fiscal year 2010; (2) the extent to which STEM education programs overlap; and (3) the extent to which STEM education programs measured effectiveness and were aligned to a governmentwide strategy. This testimony is based on several previously published GAO reports and includes updates on actions taken in response to these reports. In fiscal year 2010, 13 federal agencies invested over $3 billion in 209 programs designed to increase knowledge of science, technology, engineering, and mathematics (STEM) fields and attainment of STEM degrees. The number of programs within agencies ranged from 3 to 46, with the Department of Health and Human Services, Department of Energy, and the National Science Foundation administering more than half of the 209 programs. Almost a third of all programs had obligations of $1 million or less, while some had obligations of over $100 million. Beyond programs specifically focused on STEM education, agencies funded other broad efforts that contributed to enhancing STEM education. Eighty-three percent of the programs GAO identified overlapped to some degree with at least 1 other program in that they offered similar services to similar target groups in similar STEM fields to achieve similar objectives. Many programs have a broad scope--serving multiple target groups with multiple services. However, even when programs overlap, the services they provide and the populations they serve may differ in meaningful ways and would therefore not necessarily be duplicative. 
Nonetheless, the programs are similar enough that they need to be well coordinated and guided by a robust strategic plan. Agencies' limited use of performance measures and evaluations may hamper their ability to assess the effectiveness of their individual programs as well as the overall STEM education effort. Specifically, program officials varied in their ability to provide reliable output measures--for example, the number of students, teachers, or institutions directly served by their program. Further, most agencies did not use outcome measures in a way that is clearly reflected in their performance planning documents. In addition, a majority of programs did not conduct comprehensive evaluations to assess effectiveness between our prior review in 2005 and our survey in 2011, and the evaluations GAO reviewed did not always align with program objectives. Finally, GAO found that completed STEM education evaluation results had not always been disseminated in a fashion that facilitated knowledge sharing among practitioners and researchers. In naming STEM education as a crosscutting goal, the administration is taking the first step toward better coordinated governmentwide planning; however, it will be important to finalize a governmentwide strategic plan so agencies can better align their performance plans and reports to new governmentwide goals. GAO previously recommended that the Office of Science and Technology Policy (OSTP) should direct the National Science and Technology Council (NSTC) to work with agencies to better align their activities with a governmentwide strategy, develop a plan for sustained monitoring of coordination, identify programs for consolidation or potential elimination, and assist agencies in determining how to better evaluate their programs. 
Since GAO's report, OSTP released a progress report that identified some programs for elimination, and the Office of Management and Budget (OMB) named STEM education one of its interim cross-cutting priority goals.
Geospatial information describes entities or phenomena that can be referenced to specific locations relative to the Earth’s surface. For example, entities such as houses, rivers, road intersections, power plants, and national parks can all be identified by their locations. In addition, phenomena such as wildfires, the spread of the West Nile virus, and the thinning of trees due to acid rain, can also be identified by their geographic locations. A geographic information system (GIS) is a system of computer software, hardware, and data used to capture, store, manipulate, analyze, and graphically present a potentially wide array of geospatial information. A GIS combines the disciplines of geography, cartography, computer science, and mathematics to permit users to query and analyze the attributes of any entity or phenomenon that has been identified by its geographic location, providing a powerful ability to integrate different kinds of location-based information. A fully functional GIS includes hardware and software to support data input, output, storage, retrieval, display, and analysis. A variety of platforms support GIS processing, ranging from large mainframe computers and minicomputers to scientific workstations and personal computers. In many cases, hardware used to support other applications (e.g., payroll, accounting, and digital image processing) can also be used. A variety of technologies, including remote sensing systems and the Global Positioning System (GPS), are used to collect the geospatial data in a GIS. Remote sensing systems collect data that are either emitted or reflected by the Earth and the atmosphere from a distance—such as from a satellite, airplane, or balloon. The GPS is a constellation of orbiting satellites that provides navigational data to military and civilian users around the world. With the proper equipment, users can receive signals from these satellites to calculate time, location, and velocity. 
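Because every entity or phenomenon in a GIS is referenced to a location on the Earth's surface, simple computations over those references become possible. As an illustration (not drawn from any system described in this report), the following sketch computes the great-circle distance between two latitude/longitude points using the haversine formula on a spherical-Earth approximation; the coordinates are approximate values for Washington, DC and New York City.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points given in
    decimal degrees, using a spherical approximation of the Earth."""
    r = 6371.0  # mean Earth radius in kilometers
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates for Washington, DC and New York City.
d = haversine_km(38.9072, -77.0369, 40.7128, -74.0060)  # roughly 330 km
```

Real geospatial systems use more precise ellipsoidal models and standardized coordinate reference systems, but the underlying idea, deriving measurements from stored geographic references, is the same.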
GPS equipment is now being used on aircraft, ships, and land-based vehicles, and mobile hand-held units provide individuals with these capabilities as well. The primary function of a GIS is to link multiple sets of geospatial data and display the combined information as maps with many different layers of information. Assuming that all of the information is at the same scale and has been formatted according to the same standards, users can potentially overlay spatial information about any number of specific topics to examine how the layers interrelate. Each layer of a GIS map represents a particular “theme” or feature, and one layer could be derived from a data source completely different from the others. For example, one theme could represent all of the streets in a specific area. Another theme could correspond to all of the buildings in the same area, and others could show vegetation or water resources. As long as standard processes and formats have been used to facilitate integration, each of these themes could be based on data originally collected and maintained by a separate organization. Analyzing this layered information as an integrated whole can significantly aid decision makers in considering complex choices, such as where to locate a new department of motor vehicles building to best serve the greatest number of citizens. Typical geospatial data layers (or themes) include cadastral—describing location, ownership, and other information about real property; digital orthoimagery—containing images of the Earth’s surface that have the geometric characteristics of a map and image qualities of a photograph; and hydrography—describing water features such as lakes, ponds, streams and rivers, canals, oceans, and coastlines. Figure 1 portrays the concept of data themes in a GIS. State and local government agencies rely on geographic information systems to provide vital services to their customers. 
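The layered-theme model described above can be sketched in code. The grid cells, layer names, and attribute values below are invented for illustration; a real GIS stores georeferenced features in standardized formats at a common scale, but the idea of stacking themes over the same location is the same.

```python
# Each theme maps a grid cell (x, y) to that layer's feature at the cell.
# The cells stand in for real map coordinates at a shared scale.
streets = {(0, 0): "Main St", (1, 0): "Main St", (2, 0): "Oak Ave"}
buildings = {(1, 0): "school", (2, 1): "warehouse"}
hydrography = {(0, 1): "pond"}

def overlay(cell, *themes):
    """Combine what every theme records for one cell, mimicking how a GIS
    stacks independently maintained layers over the same location."""
    return {name: theme.get(cell) for name, theme in themes}

info = overlay((1, 0),
               ("streets", streets),
               ("buildings", buildings),
               ("hydrography", hydrography))
# info -> {'streets': 'Main St', 'buildings': 'school', 'hydrography': None}
```

Each dictionary here could come from a different organization, just as each GIS theme may be collected and maintained by a separate agency; the overlay step is what lets a decision maker examine how the layers interrelate at a given place.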
For example, local fire departments can use geographic information systems to determine the quickest and most efficient route from a firehouse to a specific location, taking into account changing traffic patterns that occur at various times of day. Highway departments use geographic information systems to identify intersections that have had a significant number of personal injury accidents to determine needs for improved traffic signaling or signage. The usefulness of a GIS in disaster response situations was also demonstrated in connection with the Space Shuttle Columbia recovery effort. After the loss of Columbia on February 1, 2003, debris was spread over at least 41 counties in Texas and Louisiana (see fig. 2). Analysis of GIS data was critical to the efficient recovery and documentation of that debris. The Texas state GIS program provided authorities with precise maps and search grids to guide field reconnaissance and collection crews. Officials in charge of the effort used maps of debris fields, combined with GIS data about the physical terrain, to carefully track every piece of debris found. A GIS can also be an invaluable tool in helping to ensure homeland security by facilitating preparedness, prevention, detection, and recovery and response to terrorist attacks. For example, according to a March 2002 Gartner report, New York City’s GIS system was pivotal in the rescue, response, and recovery efforts after the September 11, 2001, terrorist attacks. The city’s GIS provided real-time data on the area around the World Trade Center, so that the mayor, governor, federal officials, and emergency response agencies could implement critical rescue, response, and recovery efforts. Specifically, daily flyovers were performed to monitor changes in the elevation of the site to detect weaknesses in the underground structure. 
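The fire-department routing use described above is, at bottom, a shortest-path computation over a road network whose edge weights are travel times. A minimal sketch using Dijkstra's algorithm follows; the intersection names and travel times are assumptions for illustration, not data from any actual system.

```python
import heapq

def quickest_route(graph, start, goal):
    """Dijkstra's algorithm over a road graph whose edge weights are
    travel times in minutes; returns (total_minutes, path) or None."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return None

# Hypothetical road network: travel times in minutes between intersections.
roads = {
    "firehouse": {"A": 4, "B": 2},
    "A": {"incident": 5},
    "B": {"A": 1, "incident": 8},
}
minutes, path = quickest_route(roads, "firehouse", "incident")
# minutes == 8, path == ["firehouse", "B", "A", "incident"]
```

A production routing system would additionally vary the edge weights by time of day to reflect the changing traffic patterns the report mentions, but the core computation is unchanged.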
In addition, thermal imagery was compared with underground infrastructure maps to determine the locations where fires were still smoldering and to help the New York City Fire Department and emergency crews in detecting potential new explosion sites from nearby flammable substances. Further, maps generated by geospatial information systems were used to transmit critical information to the public and emergency personnel and provided the Army and Police Department with critical data on other potential terrorist targets such as bridges, tunnels, and reservoirs. Another use for GIS is tracking and responding to natural disasters such as hurricanes. For example, the Federal Emergency Management Agency (FEMA) used its GIS capabilities and those of the National Oceanic and Atmospheric Administration (NOAA) to generate maps to track Hurricane Isabel in September 2003. FEMA officials generated maps that estimated Isabel’s track, and used a hurricane wind model to produce maps of projected damage-prone areas in affected states. These officials also produced wind damage estimates for structures and infrastructures, such as sewage treatment plants, nursing homes, schools, and hospitals. Further, the officials performed various demographic analyses that estimated the population and number of housing units in affected counties or other areas. Figure 3 shows an example of a hurricane-tracking map. Similarly, many other federal departments and agencies use GIS technology to help carry out their primary missions. Examples include the following: The Department of Housing and Urban Development worked with the Environmental Protection Agency (EPA) to develop an enterprise geographic information system, which combines information on community development and housing programs with other types of data, including environmental and transportation data. 
The program provides homeowners and prospective home buyers with ready access to detailed local information about environmental hazards and other information that otherwise would likely be difficult to obtain. The Department of Health and Human Services (HHS) uses GIS technology for a variety of public health functions, such as reporting the results of national health surveys. In addition, there are a variety of GIS-based atlases of national mortality from causes such as injury, cardiovascular disease, cancer, and reproductive health problems. Other GIS activities focus on disease surveillance and prevention of infectious diseases that are caused by environmental exposure. A variety of mapping tools are published on the Web to facilitate citizen access to public health resources and other information. The Census Bureau maintains the Topologically Integrated Geographic Encoding and Referencing (TIGER) database to support its mission to conduct the decennial census and other censuses and surveys by spatially locating all habitations within the United States and reporting the resulting census estimates and counts. Census provides the spatial information (not individual addresses) in this publicly accessible database through its Web site at http://www.census.gov/geo/www/tiger/index.html. NOAA provides access to maps and other geospatial information on subjects such as the weather and climate, oceans and fisheries, and satellite imagery used for global weather monitoring at http://www.noaa.gov. EPA maintains a variety of databases with information about the quality of air, water, and land in the United States. EPA’s Envirofacts system (http://www.epa.gov/enviro/index.html) provides public access to selected EPA environmental data. Appendix II provides additional examples of federal geospatial activities. The federal government has for many years taken steps to coordinate geospatial activities both within and outside the federal government. 
In 1953, the Bureau of the Budget first issued its Circular A-16, encouraging expeditious surveying and mapping activities across all levels of government and avoidance of duplicative efforts. In 1990, OMB revised Circular A-16 to, among other things, establish FGDC within the Department of the Interior, to promote the coordinated use, sharing, and dissemination of geospatial data nationwide. Building on that guidance, the President in 1994 issued Executive Order 12906, assigning to FGDC the responsibility to coordinate the development of the National Spatial Data Infrastructure (NSDI) to address redundancy and incompatibility of geospatial information. The infrastructure is defined by FGDC as the technologies, policies, and people necessary to promote sharing of geospatial data throughout all levels of government, the private and nonprofit sectors, and the academic community. The NSDI’s goals are to reduce duplication of effort among agencies; to improve quality and reduce costs related to geographic information; to make the benefits of geographic data more accessible to the public; and to establish key partnerships with states, counties, cities, tribal nations, academia, and the private sector to increase data availability. Further, in August 2002, OMB again revised Circular A-16 to reflect changes in geographic information management and technology and to more clearly define agency and FGDC roles and responsibilities. In addition to the responsibilities identified for FGDC, Circular A-16 outlines responsibilities and reporting requirements for individual federal agencies to help ensure that geospatial resources are used efficiently and contribute to building the NSDI. Among other things, the circular requires that agencies prepare geographic information strategies, use FGDC data standards, and coordinate and work in partnership with federal, state, and local governments and the private sector. 
These responsibilities are assigned to all agencies that collect, use, or disseminate geographic information or carry out spatial data activities. More recently, in December 2002, the E-Government Act of 2002 was signed into law, requiring OMB to coordinate with state, local, and tribal governments as well as public-private partnerships and other interested persons on the development of standard protocols for sharing geographic information to reduce redundant data collection and promote collaboration and the use of standards. In addition to its responsibilities for geospatial information under the E-Government Act, OMB has specific oversight responsibilities regarding federal information technology (IT) systems and acquisition activities—including GIS—to help ensure their efficient and effective use. For example, the Clinger-Cohen Act of 1996 requires the Director of OMB to promote and be responsible for improving the acquisition, use, and disposal of information technology by the federal government to improve the productivity, efficiency, and effectiveness of federal programs. These requirements help to advance OMB’s federal IT management responsibilities under the Paperwork Reduction Act of 1995, which has a similar but more general requirement that the Director of OMB oversee the use of information resources to improve the efficiency and effectiveness of government operations to serve agency missions. Appendix III provides brief descriptions of key federal legislation, policies, and guidance that apply to IT and geospatial information and systems investments. To help carry out its investment oversight role, OMB established requirements for the acquisition and management of IT resources in its Circular A-11. The circular establishes policies for planning, budgeting, acquisition, and management of federal capital assets. Specifically, it requires agencies to submit business cases to OMB for planned or ongoing major IT investments. 
These business cases require agencies to answer questions to help OMB determine if the investment should be funded. Agency business case submissions must also include (1) the type of data used by the IT investment, including geospatial data; (2) whether the data needed for the investment already exist at the federal, state, or local level, and plans to gain access to that data; (3) potential legal reasons why existing data cannot be transferred; and (4) compliance with FGDC standards. According to Circular A-11, agency responses to these questions are reviewed as part of OMB’s evaluation of the overall business case. In addition to activities associated with Circulars A-11 and A-16, in a June 2003 congressional hearing, OMB’s Administrator, Office of Electronic Government and Information Technology, stated that the strategic management of geospatial assets would be accomplished, in part, through development of a robust and mature federal enterprise architecture. In 2001, the lack of a Federal Enterprise Architecture was cited by OMB’s E-Government Task Force as a barrier to the success of the administration’s e-government initiatives. In response, OMB began developing the FEA, and over the last two years it has released various versions of all but one of the five FEA reference models. According to OMB, the purpose of the FEA, among other things, is to provide a common frame of reference or taxonomy for agencies’ individual enterprise architecture efforts and their planned and ongoing investment activities. State and local governments and the private sector independently provide information and services apart from those provided by the federal government, including maintaining land records for nonfederal lands, property taxation, local planning, subdivision control and zoning, and direct delivery of many other public services. These entities use geographic information and GIS to facilitate and support delivery of these services. 
In fact, local governments often possess more recent and higher resolution geospatial data than the federal government, and in many cases private-sector companies collect these data under contract to local government agencies. For example, the state of New York hosts a Web site to provide citizens with a gateway to state government services at http://www.nysegov.com/map-NY.cfm. Using this Web site, citizens can access information about state agencies and their services, and locate county boundaries, services, and major state highways. New York also developed a clearinghouse (http://www.nysgis.state.ny.us/) to disseminate information about statewide GIS programs and provide information and services including state maps, aerial photographs, and a help desk to provide support for both general questions and specific questions regarding the use of GIS software. Many other states, such as Oregon (http://www.gis.state.or.us/), Virginia (http://www.vgin.virginia.gov/index.html), and Alaska (http://www.asgdc.state.ak.us/), provide similar Web sites and services. For local governments, GIS applications have become integral resources for public works, finance, public safety, and economic development. A 2003 survey sponsored by Interior showed that GIS technology is recognized as an essential tool by many local governments. For example, Fairfax County in Virginia developed GIS applications to provide online products and services to the public that include a digital map viewer to view and download property, zoning, and topography maps; an aerial orthoimagery photo viewer to access aerial photographs of specific parcels, areas of interest, or addresses; a department of tax administration parcel finder to locate detailed information about a specific property and to view that parcel with the parcel viewer; and a map gallery that contains many common maps produced by the Fairfax County GIS and Mapping Department. 
The maps are letter size and available in many formats for downloading and printing. The private sector also plays an important role in support of government GIS activities because it captures and maintains a wealth of geospatial data and develops GIS software. Private companies provide services such as aerial photography, digital topographic mapping, digital orthophotography, and digital elevation modeling to produce geospatial data sets that are designed to meet the needs of government organizations. Figure 4 provides a conceptual summary of the many entities—including federal, state, and local governments and the private sector—that may be involved in geospatial data collection and processing relative to a single geographic location or event. Figure 5 shows the multiple data sets that have been collected by different agencies at federal, state, and local levels to capture the location of a segment of roadway in Texas. Costs associated with collecting and maintaining geographically referenced data and systems for the federal government are significant. Specific examples of the costs of collecting and maintaining federal geospatial data and information systems include FEMA’s Multi-Hazard Flood Map Modernization Program—estimated to cost $1 billion over the next 5 years; Census’s TIGER database—modernization is estimated to have cost over $170 million between 2001 and 2004; Agriculture’s Geospatial Database—acquisition and development reportedly cost over $130 million; Interior’s National Map—development is estimated to cost about $88 million; the Department of the Navy’s Primary Oceanographic Prediction and Oceanographic Information systems—development, modernization, and operation were estimated to cost about $32 million in fiscal year 2003; and NOAA’s Coastal Survey—expenditures for geospatial data are estimated to cost about $30 million annually. 
In addition to the costs for individual agency GIS systems and data, the aggregated annual cost of collecting and maintaining geospatial data for all NSDI-related data themes and systems is estimated to be substantial. According to a recent estimate by the National States Geographic Information Council (NSGIC), the cost to collect detailed data for five key data layers of the NSDI—parcel, critical infrastructure, orthoimagery, elevation, and roads—is about $6.6 billion. The estimate assumes that the data collection will be coordinated among federal, state, and local government agencies, and the council cautions that without effective coordination, the costs could be far higher. OMB, individual federal agencies, and cross-government committees and initiatives such as the Federal Geographic Data Committee (FGDC) and the Geospatial One-Stop project have each taken actions to coordinate the government’s geospatial investments. FGDC and other cross-government entities have established Internet-based information-sharing portals to support development of the NSDI, led geospatial standards-setting activities, and conducted various outreach activities. In addition, individual federal agencies have taken steps to coordinate specific geospatial investments in certain cases—Agriculture and Interior have collaborated on a land management system. Finally, OMB has attempted to oversee and coordinate geospatial investments by collecting and analyzing relevant agency information. However, these efforts have not been fully successful in reducing redundancies in geospatial investments for several reasons. First, a complete and up-to-date strategic plan has not been in place. The government’s existing strategic plan for the NSDI is out-of-date and does not include specific measures for identifying and reducing redundancies. Second, federal agencies have not always fully complied with OMB direction to coordinate their investments. 
Many agency geospatial data holdings are not compliant with FGDC standards or are not published through the National Geospatial Data Clearinghouse. Third, OMB’s oversight methods have not identified or eliminated specific instances of duplication. The processes used by OMB to identify potentially redundant geospatial investments have not been effective, because the agency has not been able to collect key investment information from all agencies in a consistent way so that it could be used to identify redundancies. As a result of shortcomings in all three of these domains, federal agencies are independently acquiring and maintaining potentially duplicative and costly data sets and systems. Without better coordination, such duplication is likely to continue. Both Executive Order 12906 and OMB Circular A-16 charge FGDC with responsibilities that support coordination of federal GIS investments. Specifically, the committee is designated the lead federal executive body responsible for (1) developing, implementing, and maintaining spatial data standards; (2) promoting and guiding coordination among federal, state, tribal, and local government agencies, academia, and the private sector in the collection, production, sharing, and use of spatial information and the implementation of the NSDI; (3) communicating information about the status of infrastructure-related activities via the Internet; and (4) preparing and maintaining a strategic plan for developing and implementing the NSDI. According to OMB Circular A-16, FGDC is to develop standards, with input from a broad range of data users and providers. Geospatial standards are intended to facilitate data sharing and increase interoperability among automated geospatial information systems. 
In addition, according to Circular A-16, the committee is to adopt national and international standards in lieu of federal standards, whenever possible, and restrict its standards-development activities to areas not covered by other voluntary standards-consensus bodies. To address these responsibilities, FGDC has created a standards working group that includes federal agencies, states, academia, and the private sector. The working group has developed, and the committee has endorsed, a number of different geospatial standards, including metadata standards, and it is working to develop additional standards. The committee’s working group also coordinates with national and international standards bodies to ensure that potential users support their work. Regarding coordination with federal and other entities and development of the NSDI, FGDC has taken a variety of actions. It established a committee structure with participation from federal agencies and key nonfederal organizations such as NSGIC and the National Association of Counties, and established several programs to help ensure greater participation from federal agencies as well as other government entities. The committee structure is composed of (1) a steering committee that sets the high-level strategic direction for FGDC and (2) agency-led subcommittees and working groups. The subcommittees and working groups provide the basic structure for institutions and individuals to interact and coordinate with each other during the implementation of the NSDI. FGDC membership includes 19 federal agencies, with the Secretary of the Interior and the Deputy Director for Management, OMB, serving as Chair and Vice-Chair, respectively. Key actions taken by FGDC to develop the NSDI include implementing a National Geospatial Data Clearinghouse and establishing a framework of data themes. 
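The clearinghouse concept, in which participating servers publish searchable descriptions (metadata) of available data sets rather than the data themselves, can be sketched as a keyword search over metadata records. The record fields and entries below are simplified illustrations, not the actual FGDC metadata schema.

```python
# Simplified metadata records; a real clearinghouse node would publish
# records conforming to FGDC metadata standards, which carry many more
# elements (spatial extent, currency, access constraints, and so on).
records = [
    {"title": "County road centerlines", "theme": "transportation",
     "publisher": "State DOT"},
    {"title": "Streams and rivers", "theme": "hydrography",
     "publisher": "USGS"},
    {"title": "Parcel boundaries", "theme": "cadastral",
     "publisher": "County assessor"},
]

def search(records, **criteria):
    """Return every record whose fields match all of the given criteria,
    mimicking a query across published clearinghouse metadata."""
    return [r for r in records if all(r.get(k) == v
                                      for k, v in criteria.items())]

hits = search(records, theme="hydrography")
# hits -> one record: "Streams and rivers", published by USGS
```

Because each record names its publisher, a search like this lets an agency discover that another organization already holds the data it needs, which is precisely how a clearinghouse is meant to reduce duplicative collection.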
The clearinghouse is a decentralized system of Internet-based servers that contain descriptions of available geospatial data—over 300,000 metadata records, and information on over 2 million digital images are currently available through the clearinghouse. It allows individual agencies, consortia, or others to promote their available geospatial data. The framework of data themes is a collaborative effort in which commonly used data “layers” are developed, maintained, and integrated by public and private organizations within a geographic area. Local, regional, state, and federal organizations and private companies can use the framework as a way to share resources, improve communications, and increase efficiency. Appendix IV provides detailed descriptions of the framework data themes and other geospatial data layers. OMB Circular A-16 also calls for FGDC to communicate information, via the Internet, about its activities related to NSDI development; committee memberships; and the status of agencies’ work on committees, subcommittees, and working groups. FGDC is also to provide a collection of technical publications, articles, and reports related to the NSDI. To address these responsibilities, FGDC has established a Web site at www.fgdc.gov that provides information on its organizational structure and agencies’ activities on its committees and subcommittees—including minutes of meetings for each. The Web site also provides, among other information, technical articles, fact sheets, newsletters, and news releases. In addition to FGDC’s programs to support developing and implementing the NSDI, two other efforts are under way that aim to coordinate and consolidate geospatial information and resources across the federal government—the Geospatial One-Stop initiative and the National Map project. Geospatial One-Stop. 
Geospatial One-Stop is intended to accelerate the development and implementation of the NSDI to provide federal and state agencies with a single point of access to map-related data, which in turn will enable consolidation of redundant geospatial data. OMB selected Geospatial One-Stop as one of its e-government initiatives, in part to support the development of an inventory of national geospatial assets and to reduce redundancies in federal geospatial assets. The Department of the Interior was designated as the managing partner to lead the project, with development support from various other federal agencies. As of April 2004, over 9,000 metadata records were accessible through the Geospatial One-Stop portal, located at www.geodata.gov. According to the initiative’s executive director, the portal will continue to add metadata records by implementing a metadata “harvesting” program to actively gather metadata from many sources, beginning with the clearinghouse. In addition, the portal includes a “marketplace” that provides information on planned and ongoing geospatial acquisitions, so that agencies considering new data acquisitions can coordinate with existing and planned efforts. The National Map. The U.S. Geological Survey (USGS) is developing and implementing The National Map as a database to provide core geospatial data about the United States and its territories, similar to the data traditionally provided on USGS paper topographic maps. Through this project, USGS maintains an archive for the historic preservation of data and science applications; provides products and services that include paper maps, digital images, data download capabilities, and scientific reports; and promotes geographic integration and analyses. 
USGS relies heavily on partnerships with other federal agencies as well as states, localities, and the private sector to maintain the accuracy and currency of the national core geospatial data set as represented in The National Map. According to Interior’s Assistant Secretary—Policy, Management, and Budget, FGDC, Geospatial One-Stop, and The National Map are coordinating their efforts in several areas, including developing standards and framework data layers for the NSDI, increasing the effectiveness of the clearinghouse, and making information about existing and planned data acquisitions available through the Geospatial One-Stop Web site. Table 1 summarizes the NSDI, Geospatial One-Stop, and National Map programs. In addition to its other responsibilities, OMB Circular A-16 charges FGDC with leading the preparation of a strategic plan for the implementation of the NSDI. Such a plan could ensure coherence among the many geospatial coordination activities that are under way and provide ways to measure success in reducing redundancies. In 1994, FGDC issued a strategic plan that described actions federal agencies and others could take to develop the NSDI, such as establishing data themes and standards, training programs, and partnerships to promote coordination and data sharing. In April 1997, FGDC published an updated plan—with input from many organizations and individuals having a stake in developing the NSDI—that defined strategic goals and objectives to support the vision of the NSDI as defined in the 1994 plan. No further updates have been made. As the current national geospatial strategy document, FGDC’s 1997 plan is out of date. First, it does not reflect the recent broadened use of geospatial data and systems by many government agencies. In conjunction with EPA, the Department of Housing and Urban Development (HUD), for example, now makes geospatial information about housing available to potential home buyers over the Internet. 
This is one of several agency geospatial projects that did not exist in 1997. Second, significant governmentwide geospatial efforts—including the Geospatial One-Stop and the National Map projects—did not exist in 1997 and are therefore not reflected in the strategic plan. Finally, the 1997 plan does not take into account the increased importance that has been placed on homeland security in the wake of the September 11, 2001, attacks. Geospatial data and systems have a key role to play in supporting decision makers and emergency responders in protecting critical infrastructure and responding to threats. In addition to being out of date, the 1997 document lacks important elements that should be included in an effective strategic plan. According to the Government Performance and Results Act of 1993, such plans should include a set of outcome-related strategic goals, a description of how those goals are to be achieved, and an identification of risk factors that could significantly affect their achievement. The plans should also include performance goals and measures, with resources needed to achieve them, as well as a description of the processes to be used to measure progress. While the 1997 NSDI plan contains a vision statement and goals and objectives, it does not include other essential elements. For example, FGDC’s plan does not include a set of outcome-related goals, with actions to achieve those goals, that would bring together the various actions being taken to coordinate geospatial assets and achieve the vision of the NSDI. Specifically, the plan does not include a description of how the development and implementation of geospatial standards could foster coordination of national geospatial investments, and what actions FGDC is taking to help ensure that standards are implemented to effectively support such coordination. 
The plan also does not identify how the programs that FGDC uses to promote coordination among federal agencies and other entities fit together in a cohesive approach to support and facilitate collaboration. Beyond failing to integrate FGDC’s various activities so that each demonstrably contributes to its vision, the strategy does not identify key risk factors that could significantly affect the achievement of the goals and objectives. Identifying such risk factors would be the first step in mitigating them, helping to ensure that the plan’s goals and objectives are achievable. Finally, the current plan does not include performance goals and measures to help ensure that the steps being taken are resulting in the development of the National Spatial Data Infrastructure. Performance goals and measures, with processes in place to measure progress, are important to ensuring the plan’s overall effectiveness and to determining whether its objectives are being met. FGDC officials, in consultation with the executive director of Geospatial One-Stop, USGS, and participating FGDC member agencies, have initiated a “future directions” effort to begin the process of updating the plan. However, this activity is just beginning, and there is no time frame as to when a new strategy will be in place. Until a complete and up-to-date national strategic plan, with measurable goals and objectives for developing the NSDI, is in place, coordination will continue to be limited, resulting in unnecessary duplication of geospatial assets and activities. OMB Circular A-16 directs federal agencies to coordinate their investments to facilitate building the NSDI. 
The circular lists 11 specific responsibilities for federal agencies, including preparing, maintaining, publishing, and implementing a strategy for advancing geographic information and related spatial data activities appropriate to their mission, in support of the NSDI; using FGDC standards, including metadata and other appropriate standards; documenting spatial data with relevant metadata; and making metadata available online through a registered NSDI-compatible clearinghouse site. In certain cases, federal agencies have taken steps to coordinate their specific geospatial activities. For example, Agriculture’s U.S. Forest Service and Interior’s Bureau of Land Management (BLM) collaborated to develop the National Integrated Land System (NILS), which is intended to provide land managers with software tools for the collection, management, and sharing of survey data, cadastral data, and land records information. BLM and the Forest Service signed a formal interagency agreement at the outset of the project, coordinated project planning and management, and shared project funding. At an estimated cost of about $34 million, a single GIS—NILS—was developed that can accommodate the shared geospatial needs of both agencies, eliminating the need for each agency to develop a separate system. In another example, HUD and the Environmental Protection Agency (EPA) worked together to develop an enterprise GIS that combines information on HUD’s community development and housing programs with EPA’s environmental data, as well as other agencies’ data, to provide homeowners and prospective home buyers with ready access to detailed local information about environmental hazards and other pertinent information, including data about roadways, population, and local landmarks. However, despite such examples of coordination, agencies have not always complied with OMB’s broader geospatial coordination requirements. 
For example, only 10 of the 17 agencies that provided reports to FGDC reported having published geospatial strategies as required by Circular A-16. In addition, agencies’ spatial data holdings are generally not compliant with FGDC standards. Specifically, the annual report shows that, of the 17 agencies, only 4 reported that their spatial data holdings were compliant with FGDC standards. Ten agencies reported being partially compliant, and 3 agencies provided answers that were unclear as to whether they were compliant. Finally, regarding the requirement for agencies to post their data to the clearinghouse, only 6 of the 17 agencies indicated that their data or metadata were published through the clearinghouse, 10 indicated that their data were not published, and 1 indicated that some data were available through the clearinghouse. According to comments provided by agencies to FGDC in the annual report submissions, there are several reasons why agencies have not complied with their responsibilities under Circular A-16, including the lack of performance measures that link funding to coordination efforts. According to the Natural Resources Conservation Service, few incentives exist for cross-agency cooperation because budget allocations are linked to individual agency performance rather than to cooperative efforts. In addition, according to the USGS, agencies’ activities and funding are driven primarily by individual agency missions and do not address interagency geospatial coordination. In addition to the information provided in the annual report, Department of Agriculture officials said there are no clear performance measures that link funding to interagency coordination. OMB has recognized that potentially redundant geospatial assets need to be identified and that federal geospatial systems and information efforts need to be coordinated. 
To help identify potential redundancies, OMB’s Administrator of E-Government and Information Technology testified in June 2003 that the agency uses three key sources of information: business cases for planned or ongoing IT investments, submitted by agencies as part of the annual budget process; comparisons of agency lines of business with the Federal Enterprise Architecture (FEA); and annual reports compiled by FGDC and submitted to OMB. In addition, OMB has asked for detailed information from federal agencies on specific types of geospatial information and systems assets as an additional means of identifying and minimizing redundant IT investments. None of OMB’s major oversight processes—the annual review process associated with development of the federal budget, the FEA effort, and the FGDC-administered Circular A-16 reporting process—have been effective tools to help OMB identify major redundancies in federal GIS investments. According to OMB officials responsible for oversight of geospatial activities, the agency’s methods have not yet led to the identification of redundant investments that could be targeted for consolidation or elimination. The OMB officials said they believe that, with further refinement, these tools will be effective in the future in helping them identify redundancies. However, until more effective oversight measures are in place, duplicative and potentially costly geospatial data and projects are likely to continue, resulting in inefficient use of limited resources. In their IT business cases submitted annually as part of the budget process, agencies must report the types of data that will be used, including geospatial data. According to OMB’s branch chief for information policy and technology, OMB reviews these business cases to determine whether any redundant geospatial investments are being funded. 
Specifically, the process for reviewing a business case includes comparing proposed investments, IT management and strategic plans, and other business cases, in an attempt to determine whether a proposed investment duplicates another agency’s existing or already-approved investment. However, business cases submitted to OMB under Circular A-11 do not always include enough information to effectively identify potential geospatial data and systems redundancies because OMB does not require such information in agency business cases. For example, OMB does not require that agencies clearly link information about their proposed or existing geospatial investments to the spatial data categories (themes) established by Circular A-16. Geospatial systems and data are ubiquitous throughout federal agencies and are frequently integrated into agencies’ mission-related systems and business processes. Business cases that focus on mission-related aspects of agency systems and data may not provide the information necessary to compare specific geospatial investments with other, potentially similar investments unless the data identified in the business cases are categorized to allow OMB to more readily compare data sets and identify potential redundancies. For example, FEMA’s fiscal year 2004 business case for its Multi-Hazard Flood Map Modernization project indicates that topographic and base data are used to perform engineering analyses for estimating flood discharge, develop floodplain mapping, and locate areas of interest related to hazard areas. However, FEMA does not categorize these data according to standardized spatial data themes specified in Circular A-16, such as elevation (bathymetric or terrestrial), transportation, and hydrography. As a result, it is difficult to determine whether the data overlap with other federal data sets. 
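If agencies tagged each data set in their business cases with the standardized Circular A-16 themes, overlaps of the kind described above could be surfaced mechanically. The following is a minimal sketch of that idea; the agency names, project titles, and theme tags are illustrative assumptions, not drawn from actual budget submissions.

```python
# Hypothetical sketch: index business-case data sets by standardized
# Circular A-16 theme so that investments touching the same theme can
# be flagged for redundancy review. All entries below are illustrative.

from collections import defaultdict

# Each investment lists the A-16 themes its data fall under.
investments = {
    ("FEMA", "Multi-Hazard Flood Map Modernization"):
        {"elevation", "hydrography", "transportation"},
    ("Census", "MAF/TIGER Enhancement"):
        {"transportation", "governmental units"},
    ("USGS", "National Elevation Dataset"):
        {"elevation"},
}

# Group investments by theme to flag potential redundancies.
by_theme = defaultdict(list)
for (agency, project), themes in investments.items():
    for theme in themes:
        by_theme[theme].append(f"{agency}: {project}")

for theme, projects in sorted(by_theme.items()):
    if len(projects) > 1:
        print(f"Theme '{theme}' appears in multiple investments: {projects}")
```

In this sketch, the elevation and transportation themes would each surface two investments for an examiner to compare; without the theme tags, the same data sets would be described only in mission terms and the overlap would remain invisible.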
Similarly, Census’s fiscal year 2005 business case for its MAF/TIGER Enhancement project indicates that state, local, tribal, and private-sector spatial data are used for the realignment of the street centerlines and other features. However, like the Flood Map Modernization business case, the MAF/TIGER Enhancement business case does not categorize these data according to the Circular A-16 data themes, which would allow OMB to compare them with other agencies’ holdings. Without categorizing the data using the standard data themes as an important step toward coordinating that data, information about agencies’ planned or ongoing use of geospatial data in their business cases cannot be effectively assessed to determine whether it could be integrated with other existing or planned federal geospatial assets. An FEA is being constructed that, once it is further developed, may help identify potentially redundant geospatial investments. It will comprise a collection of five interrelated “reference models” designed to facilitate cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration within and across federal agencies. According to recent GAO testimony on the status of the FEA, although OMB has made progress on the FEA, it remains a work in progress and is still maturing. The five FEA reference models are summarized in table 2. OMB has identified multiple purposes for the FEA. One purpose cited is to inform agencies’ individual enterprise architectures and to facilitate their development by providing a common classification structure and vocabulary. Another stated purpose is to provide a governmentwide framework that can increase agencies’ awareness of IT capabilities that other agencies have or plan to acquire, so that they can explore opportunities for reuse. 
Still another stated purpose is to help OMB decision makers identify opportunities for collaboration among agencies through the implementation of common, reusable, and interoperable solutions. GAO supports the FEA as a framework for achieving these ends. According to OMB’s branch chief for information policy and technology, OMB reviews all new investment proposals against the federal government’s lines of business in its Business Reference Model to identify those investments that appear to have some commonality. Many of the model’s lines of business include areas in which geospatial information is of critical importance, including disaster management (the cleanup and restoration activities that take place after a disaster); environmental management (functions required to monitor the environment and weather, determine proper environmental standards, and address environmental hazards and contamination); and transportation (federally supported activities related to the safe passage, conveyance, or transportation of goods and people). The Service Component Reference Model includes specific references to geospatial data and systems. It is intended to identify and classify IT service components (i.e., applications) that support federal agencies and promote the reuse of components across agencies. The model includes 29 types of services—including customer relationship management and visualization service, which defines capabilities that support the conversion of data into graphical or picture form. One component of visualization service is associated with mapping, geospatial, elevation, and GPS services. Identification of redundant investments under the visualization service could provide OMB with information that would be useful in identifying redundant geospatial systems investments. Finally, the Data and Information Reference Model would likely be the most critical FEA element in identifying potentially redundant geospatial investments. 
According to OMB, it will categorize the government’s information along general content areas and describe data components that are common to many business processes or activities. Although the FEA includes elements that could be used to help identify redundant investments, it is not yet sufficiently developed to be useful in identifying redundant geospatial investments. While the Business and Service Component reference models have aspects related to geospatial investments, the Data and Information Reference Model may be the critical element for identifying agency use of geospatial data because it is planned to provide standard categories of data that could support comparing data sets among federal agencies. However, this model has not yet been completed and thus is not in use. Until the FEA is completed and OMB develops effective analytical processes to use it, it will not be able to contribute to identifying potentially redundant geospatial investments. OMB Circular A-16 requires agencies to report annually to OMB on their achievements in advancing geographic information and related spatial data activities appropriate to their missions and in support of the NSDI. To support this requirement, FGDC has developed a structure for agencies to use to report such information in a consistent format and for aggregating individual agencies’ information. Using the agency reports, the committee prepares an annual report to OMB intended to identify the scope and depth of spatial data activities across agencies. For the fiscal year 2003 report, agencies were asked to respond to a number of specific questions about their geospatial activities, including (1) whether a detailed strategy had been developed for integrating geographic information and spatial data into their business processes, (2) how they ensure that data are not already available prior to collecting new geospatial data, and (3) whether geospatial data are a component of the agency’s enterprise architecture. 
However, additional information that is critical to identifying redundancies was not required. For example, agencies were not requested to provide information on their specific GIS investments or the geospatial data sets they collected and maintained. According to the FGDC staff director, the annual reports are not meant to provide an inventory of federal geospatial assets. As a result, they cannot provide OMB with sufficient information to identify redundancies in federal geospatial investments. Further, because not all agencies provide reports to FGDC, the information that OMB has available to identify redundancies is incomplete. Eight of the FGDC partner agencies, including the Departments of Energy, Justice, and Homeland Security, and the National Science Foundation, did not provide reports for fiscal year 2003. In addition, nonpartner agencies, including the Departments of Education, Labor, and Veterans Affairs, and the Treasury, did not provide reports, although all agencies that collect, use, or disseminate geospatial information, regardless of whether they are FGDC partners, are required to do so. According to OMB’s program examiner for the Department of the Interior, OMB does not know in detail how well agencies are complying with the reporting requirements in Circular A-16. Until the information reported by agencies is consistent and complete, OMB may not be able to effectively use what information it does have to identify potential geospatial redundancies. In addition to the three tools OMB uses to identify potentially redundant geospatial investments, it has also issued special requests to agencies to report on their geospatial investments to help support its oversight function for geospatial information, as required by OMB Circular A-16. For example, as part of the 2004 budget cycle, OMB initiated a pilot project to collect detailed cost information on one geospatial data theme—elevation data. 
The pilot encountered problems despite the existence of criteria for identifying elevation data: FGDC developed criteria for the pilot process, but OMB did not follow them. Budget examiners at OMB modified the criteria to take into account the agencies’ widely varying missions, and broadened the criteria for individual agencies to make it easier for them to identify elevation data in the same way they tracked the data internally. As a result, elevation data were not reported consistently and could not be compared across agencies. A data collection effort associated with the fiscal year 2005 budget process raised the same questions about its effectiveness in supporting OMB’s oversight responsibilities as the 2004 effort. As part of the fiscal year 2005 budget cycle, OMB again requested supplemental information from federal agencies to identify which agencies are collecting geospatial data, for what purposes, and covering which geographic areas; federal expenditures related to data collection and the extent of leveraging of those expenditures; the extent of sharing of and public access to federal geospatial data; and the use of standards. Specifically, OMB asked agencies that spend $500,000 or more on any geospatial data to report information on all types of geospatial data, with a focus on the seven types of framework data identified by FGDC. However, because the earlier problems have not been addressed, the 2005 supplemental data request is also unlikely to provide useful information for OMB to identify redundant federal geospatial investments. Without a complete and up-to-date strategy for coordination or effective investment oversight by OMB, federal agencies continue to acquire and maintain duplicative data and systems. According to the initial business case for the Geospatial One-Stop initiative, about 50 percent of the federal government’s geospatial data investment is duplicative. Such duplication is widely recognized. 
Officials from federal and state agencies and OMB have all stated that unnecessarily redundant geospatial data and systems exist throughout the federal government. The Staff Director of FGDC agreed that redundancies continue to exist throughout the federal government and that more work needs to be done to specifically identify them. DHS’s Geospatial Information Officer also acknowledged redundancies in geospatial data acquisitions at his agency, and said that DHS is working to create an enterprisewide approach to managing geospatial data in order to reduce redundancies. Similarly, state representatives to the National States Geographic Information Council have identified cases in which they have observed multiple federal agencies funding the acquisition of similar data to meet individual agency needs. We found that USGS, FEMA, and the Department of Defense (DOD) each maintain separate elevation data sets: USGS’s National Elevation Dataset, FEMA’s flood hazard mapping elevation data program, and DOD’s elevation data regarding Defense installations. FEMA officials indicated that they obtained much of their data from state and local partners or purchased them from the private sector because data from those sources better fit their accuracy and resolution requirements than elevation data available from USGS. Similarly, according to one Army official, available USGS elevation data sets generally do not include military installations, and even when such data are available for specific installations, they are typically not accurate enough for DOD’s purposes. As a result, DOD collects its own elevation data for its installations. In this example, if USGS elevation data-collection projects were coordinated with FEMA and DOD to help ensure that the needs of as many federal agencies as possible were met through the project, potentially costly and redundant data-collection activities could be avoided. 
According to the USGS Associate Director for Geography, USGS is currently working to develop relationships with FEMA and DOD, along with other federal agencies, to determine where these agencies’ data-collection activities overlap. In another example, officials at the Department of Agriculture and the National Geospatial-Intelligence Agency (NGA) both said they have purchased data sets containing street-centerline data from commercial sources, even though the Census Bureau maintains such data in its TIGER database. According to these officials, they purchased the data commercially because they had concerns about the accuracy of the TIGER data. The Census Bureau is currently working to enhance its TIGER data in preparation for the 2010 census, and a major objective of the project is to improve the accuracy of its street location data. However, despite Agriculture and NGA’s use of street location data, Census did not include either agency in the TIGER enhancement project plan’s list of agencies that will be affected by the initiative. Without better coordination, agencies such as Agriculture and NGA are likely to continue to need to purchase redundant commercial data sets in the future. Further, in a recent report on coastal mapping and charting, the National Research Council cited numerous examples of redundant activity in coastal mapping, including aerial imaging, shoreline mapping, and habitat mapping. The council noted that redundancy in data collection is of most concern, as it is by far the most expensive of geospatial activities, and concluded that agencies do not have an efficient means of determining whether an area of interest has been previously mapped. Without better-coordinated activities, federal agencies are likely to continue to duplicate data collection. The longstanding problem of effectively coordinating federal geospatial investments to reduce unnecessary redundancies and their concomitant costs has not yet been resolved. 
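The National Research Council’s observation that agencies lack an efficient means of determining whether an area of interest has been previously mapped suggests a simple remedy: a shared catalog of prior acquisitions, keyed by data theme and geographic extent, that agencies could query before collecting new data. The following is a minimal sketch of such a check under that assumption; the catalog entries and coordinates are hypothetical placeholders.

```python
# Hypothetical sketch of an "already mapped?" check: a catalog of prior
# acquisitions keyed by theme and bounding box. All entries and
# coordinates below are illustrative placeholders.

def overlaps(a, b):
    """True if two (min_lon, min_lat, max_lon, max_lat) boxes intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Prior acquisitions: (agency, theme, bounding box).
catalog = [
    ("USGS", "elevation", (-90.0, 29.0, -89.0, 30.5)),
    ("FEMA", "elevation", (-90.5, 29.5, -89.5, 30.0)),
]

def prior_coverage(theme, bbox):
    """Return agencies whose holdings of this theme overlap the area."""
    return [agency for agency, t, box in catalog
            if t == theme and overlaps(box, bbox)]

# An agency planning new elevation collection over this area would find
# two existing holdings to coordinate with before collecting anew.
print(prior_coverage("elevation", (-89.8, 29.6, -89.6, 29.9)))  # → ['USGS', 'FEMA']
```

A bounding-box test is only a coarse first filter; a real catalog would also record accuracy and resolution, since, as the elevation example above shows, data covering the right area may still fail an agency’s fitness-for-use requirements.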
A number of activities have been initiated with the aim of better coordinating geospatial investments, including the OMB-required activities of FGDC, as well as the Geospatial One-Stop initiative and other projects such as The National Map. In addition, individual agencies have collaborated on specific geospatial projects, and OMB has adopted several processes for identifying redundant geospatial investments. However, these efforts have not been very successful in reducing redundancies in geospatial investments. A complete and up-to-date strategic plan to coordinate the government’s various geospatial activities is lacking, and federal agencies have not fully complied with OMB’s Circular A-16 guidance. Similarly, OMB’s processes for identifying duplicative federal geospatial investments have not proven effective. Until a comprehensive national strategy is in place, the current state of ineffective coordination is likely to remain, and the vision of the NSDI will likely not be fully realized. In addition, without effective oversight by OMB, agencies might not have adequate incentives to fully coordinate their geospatial activities, and OMB will not be able to identify potentially duplicative geospatial investments. Until these shortcomings are addressed, cost savings from eliminating duplicative geospatial investments will not materialize. 
In order to encourage more coordination of geospatial assets, reduce needless redundancies, and decrease costs, we recommend that the Director of OMB and the Secretary of the Interior, in coordination with the FGDC, establish milestones for the development of an updated national geospatial data strategic plan, ensuring that the plan includes outcome-related strategic goals and objectives; a plan for how the goals and objectives are to be achieved; identification of key risk factors that could significantly affect the achievement of the general goals and objectives and a mitigation plan for those risk factors; and performance goals and measures that will be used to ensure that the goals and objectives of the NSDI are being met. To encourage better agency compliance with Circular A-16, we also recommend that the Director of OMB develop criteria for assessing the extent of interagency coordination on proposals for potential geospatial investments. Based on these criteria, funding for potential geospatial investments should be delayed or denied when coordination is not adequately addressed in agencies’ proposals. Finally, we recommend that the Director of OMB strengthen the agency’s oversight actions to more effectively coordinate federal geospatial data and systems acquisitions and thereby reduce potentially redundant investments. Specifically, OMB should require that information about planned geospatial data acquisitions provided in agencies’ business cases include specific categorizations of all geospatial data according to the standardized data themes defined by FGDC and described in OMB Circular A-16; and require that all federal agencies submit annual reports to FGDC on their GIS investments, including geospatial systems and data sets already in place. 
We received oral comments on a draft of this report from representatives of OMB’s Offices of Information and Regulatory Affairs and Resource Management and from the Assistant Secretary of the Interior—Policy, Management, and Budget. The officials from both agencies generally agreed with the content of our draft report and our recommendations and provided technical comments, which have been incorporated where appropriate. In addition, the Departments of Defense and Health and Human Services and the Bureau of the Census also provided oral technical comments, which have been incorporated where appropriate. Concerning our recommendation that OMB strengthen its oversight to more effectively coordinate federal geospatial data and systems acquisitions, the OMB representatives stated that they are planning to institute a new process to collect more complete information on agencies’ geospatial investments by requiring agencies to report all such investments through the Geospatial One-Stop Web portal. OMB representatives told us that reporting requirements for agencies would be detailed in a new directive that OMB expects to issue by the end of summer 2004. The Department of the Interior’s Assistant Secretary of the Interior—Policy, Management, and Budget noted that our report emphasizes geospatial investments rather than the broader and more comprehensive geospatial strategies outlined in OMB Circular A-16, and pointed out that encouraging the growth of a national spatial data infrastructure—versus tracking geospatial investments and minimizing duplication—required different approaches. In the department’s view, activities by FGDC and the Geospatial One-Stop initiative to develop an infrastructure for information sharing have established business practices that can result in sound investments. We agree with the department that these are valuable activities that can promote sound investments. 
Moreover, a detailed strategic plan, coupled with improved oversight and agency compliance with coordination guidance, remains critical to achieving the objective of reducing duplication in federal geospatial investments. We are sending copies of this report to the Chairman and Ranking Minority Member, House Committee on Government Reform, and the Ranking Minority Member, Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census. In addition, we are providing copies to the Director of OMB and the Secretary of the Interior, and the report is available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions about this report, please contact me at (202) 512-6240 or John de Ferrari, Assistant Director, at (202) 512-6335. We can also be reached by e-mail at koontzl@gao.gov and deferrarij@gao.gov, respectively. Other key contributors to this report were Michael Holland, Steven Law, and Elizabeth Roach. Our objective was to determine the extent to which the federal government is coordinating the sharing of geospatial assets, including through oversight measures in place at the Office of Management and Budget (OMB), in order to identify and reduce redundancies in federal geospatial data and systems. To address this objective, we reviewed relevant federal guidance and legislation, including The E-Government Act of 2002; The Clinger-Cohen Act of 1996; The Paperwork Reduction Act of 1995; Executive Order 12906: Coordinating Geographic Data Acquisition and Access; OMB Circular A-11: Preparation, Submission, and Execution of the Budget; OMB Circular A-16: Coordination of Geographic Information and Related Spatial Data Activities; and OMB Circular A-130: Management of Federal Information Resources. Appendix III provides additional information about each. We also reviewed agency IT business cases, known as Exhibit 300s, submitted as part of the annual budget process. 
In addition, we evaluated the Federal Enterprise Architecture reference models and various FGDC documents and interviewed officials from the following federal agencies in the Washington, D.C. metropolitan area: Department of Commerce, including the Census Bureau and the National Oceanic and Atmospheric Administration; Department of Defense, including the National Geospatial-Intelligence Agency; Department of Health and Human Services; Department of Homeland Security, including the Federal Emergency Management Agency; Department of the Interior, including the Bureau of Land Management and the U.S. Geological Survey; Environmental Protection Agency; and Office of Management and Budget. We interviewed program officials representing key federal geospatial projects, including the Federal Geographic Data Committee, Geospatial One-Stop, The National Map, and the TIGER Modernization project. For these projects, we reviewed key documents such as capital asset plans, project plans, and other project documentation. To better understand federal efforts to coordinate with state and local governments and the private sector, we interviewed state and local government and private sector officials at several conferences, including the ESRI Federal User Conference and the National Association of Counties Legislative Conference. In addition, we conducted focus groups at three national conferences in March 2004: (1) The National League of Cities Congressional City Conference; (2) the Management Association for Private Photogrammetric Surveyors Federal Programs Conference; and (3) the National States Geographic Information Council Midyear Conference. At these focus groups we asked state and local government and private sector officials for their views on what the federal government was doing to coordinate its geospatial activities with them and what could be done to improve the coordination of federal geospatial activities. 
A total of 34 state and local government and private sector officials attended these focus groups. In addition, to determine the extent of state and local participation in the National Geospatial Data Clearinghouse and the Geospatial One-Stop portal, we obtained information from FGDC officials about the metadata records contained in the clearinghouse and conducted analyses of the data referenced in the Geospatial One-Stop portal. We conducted our work from October 2003 through May 2004 in accordance with generally accepted government auditing standards. Many federal agencies have established geospatial activities to help them achieve their specific goals and objectives. Table 3 highlights selected federal geospatial activities at certain agencies. The table is not intended to be a comprehensive list of agency geospatial activities. The E-Government Act of 2002, Section 216: Common Protocols for Geographic Information Systems. The purposes of this section are to (1) reduce redundant data collection and information and (2) promote collaboration and use of standards for government geographic information. It requires the Director of OMB to oversee (1) an interagency initiative to develop common geospatial protocols; (2) the coordination with state, local, and tribal governments, public-private partnerships, and other interested persons of effective and efficient ways to align geographic information and develop common protocols; and (3) the adoption of common standards. The Clinger-Cohen Act of 1996. The Clinger-Cohen Act directs the OMB Director to promote and improve the acquisition, use, and disposal of information technology by the federal government to improve the productivity, efficiency, and effectiveness of federal programs, including through dissemination of public information and the reduction of information collection burdens on the public. The Paperwork Reduction Act of 1995. 
This legislation directs the OMB Director to oversee the use of information resources to improve the efficiency and effectiveness of government operations to serve agency missions, including burden reduction and service delivery to the public. This includes developing, coordinating, and overseeing the implementation of federal information resources management policies, principles, standards, and guidelines. Executive Order 12906: Coordinating Geographic Data Acquisition and Access: The National Spatial Data Infrastructure. This order, originally issued in 1994 and revised in 2003, establishes FGDC as the interagency coordinating body for the development of the NSDI and directs FGDC to involve state, local, and tribal governments in the development and implementation of the NSDI. The executive order also establishes a National Geospatial Data Clearinghouse, directs FGDC to develop standards for implementing the NSDI, and requires federal agencies collecting or producing geospatial data to ensure that the data are collected in a manner that meets all relevant standards adopted through the FGDC process. In addition, the executive order requires the Interior Secretary to develop strategies for maximizing cooperative participatory efforts with state, local, and tribal governments, the private sector, and other nonfederal organizations to share costs and improve efficiencies of acquiring geospatial data. OMB Circular A-11: Preparation, Submission, and Execution of the Budget, Part 7: Planning, Budgeting, Acquisition, and Management of Capital Assets. This circular establishes policy for planning, budgeting, acquisition, and management of federal capital assets and instructs agencies on budget justification and reporting requirements for major IT investments. It requires agencies to submit business cases to OMB for planned or ongoing major IT investments and to answer questions to help OMB determine if the investment should be funded. 
OMB Circular A-16: Coordination of Geographic Information and Related Spatial Data Activities. This circular calls for a coordinated approach to developing the NSDI, establishes FGDC and identifies its roles and responsibilities, and assigns agency roles and responsibilities for development of the NSDI. The document states that “implementation of this Circular is essential to help federal agencies eliminate duplication, avoid redundant expenditures, reduce resources spent on unfunded mandates, accelerate the development of electronic government to meet the needs and expectations of citizens and agency programmatic mandates, and improve the efficiency and effectiveness of public management.” 
From homeland security to tracking outbreaks of disease, investigating the space shuttle disaster, and responding to natural disasters, the collection, maintenance, and use of location-based (geospatial) information have become critical to many federal agencies' abilities to achieve their goals. Local governments and the private sector also rely on such data to support essential functions. GAO was asked to determine the extent to which the federal government is coordinating the sharing of geospatial assets, including through oversight measures in place at the Office of Management and Budget (OMB), in order to identify and reduce redundancies in geospatial data and systems. OMB, individual federal agencies, and cross-government committees and initiatives such as the Federal Geographic Data Committee and the Geospatial One-Stop project have taken actions to coordinate the government's geospatial investments across agencies and with state and local governments. However, these efforts have not been fully successful in reducing redundancies in geospatial investments for several reasons. First, a complete and up-to-date strategic plan for doing so has not been in place. Second, agencies have not consistently complied with OMB guidance that seeks to identify and reduce duplication. Finally, OMB's oversight of federal geospatial activities has not been effective because its methods--the annual budget review process, the federal enterprise architecture effort, and the Federal Geographic Data Committee's reporting process--are insufficiently developed and have not produced consistent and complete information. As a result of these shortcomings, federal agencies are still independently acquiring and maintaining potentially duplicative and costly data sets and systems. Until these problems are resolved, duplicative geospatial investments are likely to persist.
The conference, whose theme was “Realizing the Promise of Technology: Modernizing Information Systems for Human Services,” was co-sponsored by GAO, the Rockefeller Institute, the National Health Policy Forum, and The Finance Project (Welfare Information Network). To promote an informed dialogue at the conference, invitations were sent to selected individuals from four key sectors involved in developing information systems for human services—the Congress, federal agencies, state and local governments, and information technology contractors—along with research organizations and foundations. Appendix II lists the names and affiliations of conference participants. State representatives included those with responsibility for program management as well as those with expertise in information technology. Participants from 14 organizations were asked to prepare papers for presentation at one of three panels—The Need for Systems Modernization, Possible Approaches for the Future, and State and Local Experiences. Appendix I contains the conference objectives, agenda, and Web addresses for each of the papers and briefing charts presented at the conference. Following the panel presentations, participants were separated into small groups on the first day to discuss the history, roles, and challenges of various sectors in systems modernization, and on the second day to propose actions that would best facilitate systems modernization. Assignments to each discussion group were made to achieve a mix of participants from diverse backgrounds. Presenters at the conference maintained that state information systems need to be modernized to better meet new information needs that have arisen from shifts in the objectives and operations of states’ welfare programs. Research on states’ systems has identified major gaps in their capabilities to support the implementation and oversight of welfare reform. 
In addition, many states are using large, mainframe systems that are old, which compounds the difficulty of meeting new information needs because these systems are limited in their ability to take advantage of recent innovations in technology. Innovations, such as Internet technology, offer significant opportunities for improving the delivery of human services. With the advent of welfare reform, states’ programs for needy families with children have experienced dramatic shifts in their objectives and operations, which have created new demands on information systems, according to GAO assistant director Andrew Sherrill and Rockefeller Institute director Richard Nathan and senior fellow Mark Ragan. PRWORA placed a greater emphasis on the importance of work and established various signals to reinforce this emphasis, such as stronger work requirements and a 5-year time limit on federal TANF assistance to families. The shift from an income maintenance focus under the prior AFDC program to a service-oriented, self-sufficiency focus under TANF has significant implications for information systems. The technology challenge of welfare reform is to provide the information needed to integrate services to clients and track their progress towards self-sufficiency. To help needy families prepare for and obtain work, case managers need detailed information about factors such as family circumstances, job openings, and support services, which is very different from the information needed to issue timely and accurate cash assistance payments. In many cases, states and localities have enhanced their efforts to partner with other organizations to serve needy families, which creates demands for sharing data across organizations. As welfare agencies focus on moving needy families toward self-sufficiency, workers are drawing on other federal and state programs, often administered by separate agencies, to provide a wide array of services. 
While local welfare agencies typically determine eligibility for TANF, food stamps, and Medicaid, other programs that provide key services to TANF clients may be administered by separate entities, such as housing authorities or education agencies. Most notably, because TANF has focused welfare agencies on employment, a focus that has long been the province of state and local workforce development systems, welfare agencies need to work more closely than before with workforce development systems. Finally, in many cases state and local welfare reforms involve a greater effort to partner with community organizations, including faith-based organizations, to meet the needs of low-income families. Devolution is another factor that has contributed to the expansion of information needs for human services. Under PRWORA, states have greater flexibility in designing and operating their TANF programs and some states in turn have devolved substantial authority to localities for their TANF programs. As a result, state information systems will be called upon to support a potentially more diverse range of local program goals and operations. Moreover, providing automated support for localities is typically an evolving process, since local information needs can change as caseload composition changes, service strategies evolve, or new policy issues emerge. Andrew Sherrill provided an overview of the research done by GAO, in collaboration with the Rockefeller Institute, on the capabilities of states’ information systems. This research, he said, highlights the need for systems modernization. In 1999, GAO surveyed state and local program administrators in 15 states on the overall extent to which their current information systems met different types of information needs for administering and overseeing welfare reform. GAO focused on three broad types of information needs: those for case management, service planning, and program oversight. 
Agency workers need information for case management to perform the full range of tasks involved in coordinating the various services provided to an individual client, such as making referrals to training and monitoring a client’s progress towards employment. Service planning, which is performed by local and state program administrators, requires aggregate information on the characteristics and service needs of the caseload to determine the appropriate services that should be made available for the caseload. Program oversight, which is performed by program administrators and oversight officials, requires aggregate information on relevant measures of program performance, such as job entries and job retention. The majority of the local officials that GAO surveyed reported that their current systems provided half or less of the information needed for each of the three types of information needs. Overall, state officials provided a somewhat higher assessment of system capabilities but still acknowledged major gaps in some cases. Andrew Sherrill explained that GAO’s in-depth fieldwork at the state and local level in six states provided more detail about information system shortcomings. A major shortcoming, cited to varying degrees by officials in these states, is that some of the systems used by the agencies providing services to TANF recipients do not share data on these recipients, thus hampering a case manager’s ability to arrange and monitor the delivery of services in a timely manner. For example, local officials in New Jersey told GAO that data are not transferred electronically between the labor department, which tracks attendance of TANF recipients at work activities, and the welfare department, which imposes sanctions on TANF recipients who fail to meet work requirements. 
Consequently, in some cases, TANF recipients have received sanctions in error because the welfare department’s system could not obtain the needed data in a timely manner from the labor department’s system to verify a recipient’s participation in work activities. Another consequence of the lack of data sharing in the states GAO studied is that agency workers have had to input data for some items more than once because the data were not automatically transferred and updated from one system to another. Entering the same data multiple times not only reduces the time available for work directly with clients but also increases the risk of introducing errors into the data contained in information systems. The extent to which states have established links among information systems for human services varies substantially. In the 15 states that GAO surveyed, the systems that support TANF eligibility determination are, in almost all cases, linked with the information systems for food stamps, child support enforcement, TANF work activities, Medicaid eligibility determination, and transportation subsidies. These links reflect federal mandates and enhanced federal funding for systems in these programs. In contrast, GAO found that information systems for other services that TANF recipients may need to facilitate their movement toward employment, such as job training, welfare-to-work grant services, vocational rehabilitation, job listings, and subsidized housing, were generally not linked to systems for determining TANF eligibility. Some state officials and others attending the conference commented that changed rules governing interactions between welfare and Medicaid have also presented new demands for the modification of information systems. Under these rules, TANF recipients, unlike AFDC recipients, are not automatically eligible for Medicaid. 
Not only has more work been required to demonstrate the eligibility of TANF families for these programs, but more work has also been required to modify systems so that closures of TANF cases do not generate automatic closures of Medicaid cases, as has happened in some situations. A second shortcoming of some information systems, which was voiced especially at the local level, was the limited ability to obtain data needed by program managers to meet their particular management challenges. For example, local officials at one site told GAO that data on the characteristics of TANF recipients in the state’s information system are often not available in a format that can be easily manipulated, so obtaining data depends on the technical expertise of the user. Overall, local officials cited a need for user friendly tools that provide the capability to generate a locally designed management report. In his comments on the presentation by Andrew Sherrill, Thomas Gais, director of the federalism research group at the Rockefeller Institute, said that the gaps in systems capabilities identified by GAO represent persistent problems that were also identified in earlier fieldwork by Rockefeller Institute researchers and in their follow-up fieldwork in 2000. The results of a survey by the U.S. Department of Health and Human Services (HHS) cited in GAO’s presentation indicate that many states have been using old information systems. Of the states responding, 26 percent said that the systems they were using when TANF was enacted in 1996 had first become operational in the 1970s and 40 percent said that their systems had become operational in the 1980s. Many of these older systems are housed in large mainframe computers. The HHS report goes on to point out that generally accepted information technology standards assume that the average useful life of a large-scale computer system ranges from 5 to 7 years. 
Moreover, the report maintains that the age of states’ systems has limited their ability to take advantage of technological improvements; because of basic incompatibilities, the underlying equipment and software platforms of these systems do not lend themselves easily, if at all, to such advances. A conference participant commented that New York’s large mainframe system has not been modernized because it would be costly and time-consuming. Instead, the state operates a dual system, relying primarily on its mainframe, but with a separate system developed to meet new data reporting requirements. Conference presenters from New Jersey, North Carolina, Oregon, Utah, and Wisconsin noted that their states continue to use older mainframe systems to varying degrees, using upgrades and interfaces where possible, although they are developing new systems to enhance their capabilities. The continued presence of these older mainframe computers reflects the historical role of the federal government in funding the development of such systems in the 1970s and 1980s, according to some conference participants. The major objectives of these systems were to increase the accuracy of eligibility determinations and cash payments, reduce error rates, and detect and deter fraud and abuse in major entitlement programs. While costs for systems development and operation were shared by the federal government and states, the federal government provided enhanced funding (i.e., more than 50 percent) in many cases. For example, states could receive federal matching funds for 90 percent of their development costs for approved welfare, Medicaid, child support, and certain child care systems. States could also receive federal matching funds of 75 percent for developing statewide food stamps systems, and in the early 1990s, for developing child welfare systems. 
In the mid-1990s, the federal government eliminated enhanced federal matching payments for all systems except child support and Medicaid management information systems for claims processing. Information system contractors from the Human Services Information Technology Advisory Group (HSITAG) described various innovations in technology that they said offer significant opportunities for improving the delivery of human services. Today’s personal computers can process more data at lower costs, making it possible to automate even small service providers in the local community. Systems can be secured from outsiders using firewall technologies, and confidential information that is transferred among agencies can be encrypted, further increasing security. Telecommunications networks are more widely available, providing greater opportunities for data sharing among different programs that serve the same populations. The Internet and World Wide Web provide opportunities to link program applicants, recipients, case managers, and administrators to each other and to a wealth of information needed to achieve various objectives. Graphical user interfaces allow icons or pictures to be used as well as words, so it is easier to access and navigate systems from the computer screen, and the data accessible can be expanded to include photographs, sound clips, and movies that can facilitate program orientation, assessment, and training. Coding by location and mapping represent new capabilities available to program planners to target services to families and neighborhoods. Other technological advances make it possible to store and retrieve large volumes of data with greater efficiency at less cost than was possible a decade or more earlier to facilitate meeting reporting requirements and providing information for program oversight. 
Presenters from North Carolina, Oregon, New Jersey, Utah, and Wisconsin described initiatives that their states had undertaken to modernize information systems for human services. The initiatives—designed to meet the unique needs of each state—are in varying stages of implementation and generally share some common goals, such as enhancing service integration. The states faced a broad range of issues in developing and implementing their initiatives, which reflect the complexity and scale of information systems projects. While the states’ initiatives have a multitude of stated objectives, their central goals generally include providing enhanced automated support for service integration and program management. Gary Weeks, director of human services reform at the Annie E. Casey Foundation, discussed his experiences in promoting service integration as the former director of the Oregon Department of Human Resources. He said that many program recipients fail because they are among the least prepared to deal with the maze of human services bureaucracy and case management plans—in some cases multiple plans for a single recipient. His strategy in Oregon was to create a system in which each recipient had a single case management plan, based on an initial, comprehensive assessment and coordinated by a lead case manager who was supported by information systems that were linked. Creating such a system, he added, did not require cutting edge technology but rather getting agreement from all the right people on the recipient data that was most important, securing access to critical databases, and authorizing case managers to work with individualized recipient data. Richard Nathan and Mark Ragan of the Rockefeller Institute echoed this point in their presentation, arguing that service integration has been a longstanding aim of program officials, but that the real politics of human services—characterized by bureaucracies with their own cultures and politics—have made this difficult. 
They went on to say that information technology can allow human service providers to overcome the politics of program proliferation not necessarily through “one-stops”—co-locating staff from different programs at one-stop centers—but through “one-screen,” that is, making data from different programs available to a caseworker on a single computer screen. With respect to the objective of improving automated support for program management, three of the states have developed or plan to develop large data warehouses or smaller data marts, that is, specialized databases that store information from multiple sources in a consistent format, usually for a specific subject area, and are separate from the databases used for daily business operations. Using data warehouses or marts, program administrators can generate customized management reports on request without slowing routine business transactions, including reports that track recipients’ use of government services over time and respond to varied requests for information from state legislatures, federal agencies, and research organizations. While the information systems initiatives of the five states share similar broad goals, they vary in terms of stages of development, with North Carolina in the planning phase, Oregon in the pilot phase, and New Jersey, Utah, and Wisconsin fully operational. What follows is an overview of some of the distinctive aspects of each state’s initiative. Bill Cox, director of information resource management at the state’s Department of Health and Human Services, described North Carolina’s comprehensive planning effort, the Business Process Re-Engineering Project. Recognizing that its current mainframe information systems are at the end of their life cycle, the state developed a model of a reengineered business process for human services to prepare for the development of a single, comprehensive statewide information system. 
This system would support a wide array of programs, including TANF, Medicaid, children’s health insurance program, food stamps, child care, child support, child welfare services, and adult services for families. The reengineered business process is intended to resolve a host of deficiencies with the current process, such as excessive paper-based processes, little access to “real-time” data, and minimal communications among agencies and partners. As part of the reengineering initiative, a contractor working with a team of state and county officials for 3 months examined current business processes and concluded that a minimal amount of time is actually spent assisting applicants and recipients while the majority of time is spent on administrative tasks. On the basis of the team’s recommendations, the state began implementing its initiative in June 2001, including the development of a data warehouse. Gary Weeks of the Annie E. Casey Foundation outlined Oregon’s pilot initiative that uses information technology to support integrated service provision at Family Resource Centers in 4 of the state’s 36 counties. Workers from various agencies have been co-located at these centers, where families and individuals receive an initial comprehensive needs assessment, a single case management plan is developed with a lead case manager, and data on the family are available to agencies located at the center. To provide this shared data, the centers use a software tool called MetaFrame, which provides access on a caseworker’s computer screen to the separate databases for TANF, child welfare, and mental health and substance abuse systems. Caseworkers can obtain information from these databases on eligibility, services received, and case narrative notes in some cases, and thereby build their own comprehensive file on a client. 
Gary Weeks noted that the software tool’s capabilities are fairly rudimentary because it does not provide a single integrated database, but the tool gives caseworkers access to information in a fairly low-tech and relatively inexpensive manner. To overcome data confidentiality issues, applicants are asked to sign a release form at the time of their assessment that authorizes the sharing of their case file data for program purposes, and about 96 percent of applicants sign this form.

William Kowalski, director of the One Ease-E Link project at the New Jersey Department of Human Services, explained that a key aim of the initiative was to employ information technology to support the building of new cooperative relationships among the diverse providers of human services in New Jersey and thereby enhance service integration. The initiative seeks to accomplish this by providing hardware and software to counties so they can create county-level networks composed of a multitude of public and private organizations, including nonprofits such as United Way organizations. Each county network is part of the larger One Ease-E Link network that includes a website with an eligibility screening tool, case management software, secure e-mail, discussion forums, document libraries, and resource directories. This network is also linked to a single database shared with three state agencies: the Departments of Human Services, Labor, and Health and Senior Services. The sharing of information is secured behind a firewall and protected by Public Key Infrastructure (PKI) technology that uses digital signatures and encrypts data. Counties that join One Ease-E Link maintain their networks through fees they collect from member service providers. One Ease-E Link has been implemented by 17 of New Jersey’s 21 counties, and more than 900 local service providers have become part of the network.
Russell Smith, deputy director of information technology at the Utah Department of Workforce Services, described Utah’s development of the UWORKS One-Stop Operating System. In 1996, the state created the Department of Workforce Services, which combined 25 programs from 5 different departments with the goal of merging job training, job development, and welfare-related services such as TANF, food stamps, and child care into a single efficient system. The new department inherited various computer systems that had supported each of the programs and recognized that it needed an integrated case management system that supported all of its programs. The One-Stop Operating System was developed to fill this need at nearly 50 one-stop employment centers throughout the state. The system uses Internet technology and has linkages with databases for program eligibility, job listings, job training, labor market information, and unemployment insurance. Job seekers can access services on their own by using a web browser or obtain help from state staff at the one-stop centers that offer multiple services under a single roof. To expand information for program management, the state has developed a data warehouse that can generate reports in response to online queries.

Paul Saeman, acting director of the workforce information bureau in Wisconsin’s Department of Workforce Development, explained how his state’s extensive information system has evolved in response to changes in program objectives and organization. The system serves two state departments that have split responsibility for human services programs. His department is consolidating TANF and child care with other employment programs, while the Department of Health and Family Services is expanding benefit entitlement programs like Medicaid and food stamps.
To support integrated case management and eligibility determination across these departments and programs, the state has built 22 subsystems that comprise the Client Assistance for Re-employment and Economic Support System (CARES). Teams of workers at one-stop job centers use the Case Manager’s Desktop Reference system to access CARES data and monitor participant eligibility and services received in 6 or more programs. A plan for sharing the CARES system and developing it in the future was established by the two departments after many months of negotiation. While CARES supports day-to-day program operations, it also feeds information into a series of small data marts and a larger data warehouse, called the Wisconsin Data for Operational Management (WISDOM), that are used for planning and reporting purposes. With the help of WISDOM, knowledgeable state and local users expect to be able to create hundreds of different reports in almost endless combinations for programs such as TANF, child care, and food stamps. In addition, CARES data compiled over time on families served by TANF and other programs are being inventoried, documented, and stored as part of the Wisconsin Program and Administrative Data and used for research and evaluation by state staff and the Institute for Research on Poverty at the University of Wisconsin.

The information systems initiatives of these states are complex and large-scale undertakings, and states faced a broad range of issues in developing and implementing their initiatives. Table 1 summarizes some of the issues most commonly reported by the state presenters and provides examples of responses taken to these issues. These issues include obtaining support for the initiative, training system users, maximizing the useful life of the system, and managing the project effectively. They are not unique to the human services but are the general types of issues that arise in large-scale information systems projects.
Conference participants identified and discussed at length three key challenges for systems modernization: enhancing strategic collaboration among different levels of government, simplifying the cumbersome approval process for obtaining federal funding for information systems, and obtaining staff expertise in project management and information technology. These challenges were identified in the small group sessions and elaborated in greater depth in several of the conference papers. A key challenge to modernization and integration identified by conference participants is that of achieving greater strategic collaboration across programs and agencies and among levels of government. This challenge was articulated in the presentation by Sandra Vargas, Administrator of Hennepin County, Minnesota, and Costis Toregas, president of Public Technology Incorporated, who provided a local perspective on information technology issues. Vargas and Toregas reminded other participants of the importance of including localities when states and federal agencies develop plans for human service programs and information systems. In their view, the guiding vision in this area should be that of “local, state, and federal governments investing and executing together around a citizen-oriented service delivery model that produces measurable results” and they see technology as the tool to execute the vision. However, they maintained that what is still missing is a framework for achieving this vision that is truly collaborative. They added that greater collaboration could promote such outcomes as information technology investments that build on one another and work being performed by the level of government best able to accomplish the task. Richard Nathan and Mark Ragan of the Rockefeller Institute echoed the need for more intergovernmental collaboration in their presentation. 
They maintained that many of the recommendations that have been made in the last decade to facilitate systems improvements have expressed a common theme—that federal agencies should improve and integrate their policies and procedures. However, in their view, it is not reasonable to expect all solutions to come from the federal government or that federal changes will necessarily and quickly result in better state and local information systems. They maintained that federal, state, and local governments, as well as technology contractors, all have a role to play in systems modernization for human services and that improvements are needed in the interactions of these partners. Nathan and Ragan proposed that an institute for the management of human services information systems be created that would, among other objectives, convene federal, state, and local officials across program areas to discuss ways to remove barriers to system development. Some conference participants commented that the federal government could play a greater collaborative role in facilitating systems modernization. They explained that in the 1970s and 1980s, the Congress and federal agencies had taken the lead in encouraging states to invest in technology to improve services to needy families. But, they added that they currently see little coordinated federal effort to help states and localities invest wisely in technology, learn from the best practices as well as the mistakes of others, and tailor information systems to meet local needs. Instead, they are left with the impression that federal agencies primarily regulate rather than facilitate systems development for human services, and do so in a narrow context, prescribing details rather than providing broader strategic guidance. Another area cited in which the federal government could play an improved collaborative role pertains to the enactment of legislation that has implications for state systems. 
Some conference participants commented that in certain instances, federal legislation is enacted that does not adequately anticipate the time and cost required to develop or modify state information systems. For example, several conference participants noted that legislative deadlines for systems implementation often follow a “one size fits all” approach that places all states in competition for a limited number of private contractors and fails to accommodate differences in state capabilities. Another participant said that states do not receive sufficient federal funding for the costs of providing benefits to needy families through electronic benefit transfers. Several participants also cited the extensive efforts required of diverse state agencies to re-examine the privacy and security of their automated data as a result of the passage of the Health Insurance Portability and Accountability Act of 1996 (P.L. 104-191).

Obtaining approval for federal funding of state information systems development and operations can be a slow and burdensome process that delays project implementation, according to various participants at the conference. Participants cited problems with both the overall approval process for obtaining funding—the advance planning document (APD) process—and the cost allocation component of this process. As shown in table 2, states must submit required documents under the APD process and receive approval from the relevant federal agency to obtain federal funding for systems development for Medicaid, food stamps, child welfare, and child support enforcement. An APD is not required if only TANF funds are used for a project, because TANF is a block grant. As part of the APD process, states submit specific documents, including planning, contracting, and purchasing documents, which cover needs, objectives, requirements analysis, alternatives analysis, the project management plan, cost-benefit analysis, the proposed budget, and any proposed cost allocation.
If federal agencies do not respond within 60 days, approval is automatic. If federal agencies request further state documentation or clarification, the 60-day clock starts over when the state’s additional documentation is received, so the actual approval process may take longer. An updated APD is required annually or more frequently if significant changes are involved. The current APD process fails to address the fundamental shift that has occurred in information systems practices over the past 20 years, according to Jerry Friedman, former executive deputy commissioner at the Texas Department of Human Services, and John Cuddy, chief information officer at Oregon’s Department of Human Resources. In their view, the APD process, designed to mitigate financial risks and avoid incompatibilities among systems, was appropriate when states typically worked for 3 to 5 years to develop mainframe systems that were implemented with a “big bang.” Since then, states have generally shifted from investments in mainframes to smaller systems that are developed and implemented incrementally through a series of small, quick projects. Friedman and Cuddy explained that in the time it takes to obtain federal funding approval under the APD, states’ plans may be obsolete, given the current fast pace of technological advances. They also noted that the APD process was intended for systems in which the design and development stage was distinct from the implementation and operations stage. They maintained that these distinctions no longer fit state practices, which are iterative, with one stage overlapping or running concurrently with another and lessons learned from one project’s implementation altering the planning of another. Friedman and Cuddy concluded that the APD process is not working to the satisfaction of anyone and that it is time to reengineer the process. 
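The 60-day approval clock described above reduces to a simple rule: absent an agency response, approval becomes automatic 60 days after the most recent state submission, and each request for further documentation restarts the clock from the date the state's response arrives. The sketch below models that rule; the dates and the single round of requested clarification are hypothetical, chosen only to illustrate how a clarification request extends the timeline.

```python
# Simplified model of the APD 60-day approval clock (dates hypothetical).
# Approval is automatic 60 days after the last submission received;
# each agency request for clarification restarts the clock from the
# date the state's additional documentation arrives.
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=60)

def automatic_approval_date(submissions):
    """Given the dates of the initial APD submission and any follow-up
    documentation, return the date on which approval becomes automatic
    absent an agency response."""
    return max(submissions) + REVIEW_WINDOW

initial = date(2001, 1, 15)
# Agency requests clarification; the state responds on 2001-03-10,
# restarting the 60-day clock from that date.
followup = date(2001, 3, 10)

print(automatic_approval_date([initial]))            # 2001-03-16
print(automatic_approval_date([initial, followup]))  # 2001-05-09
```

As the sketch shows, one round of clarification pushes the earliest automatic-approval date from mid-March to early May, which is why participants noted that the actual process may take well over 60 days.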
William Kowalski echoed their views, commenting that New Jersey experienced lengthy delays and altered its plans for the development of a data warehouse because of difficulties obtaining approval for federal funding under the APD process. Rick Friedman of the Centers for Medicare and Medicaid Services (CMS, formerly the Health Care Financing Administration) agreed that the APD documentation appears daunting, but noted that similar documentation is often required for approval within states. To the extent that the federal requirements are already addressed in the states’ own internal approval processes, Rick Friedman said that the federal agencies would be willing to review the documentation previously developed to satisfy the state procurement offices. If there are additional federal requirements, however, these would still have to be addressed. In an effort to expedite the APD approval process, his agency developed a streamlined APD format for use by states interested in receiving federal financial support for Medicaid-related activities under the Health Insurance Portability and Accountability Act. The new format repackaged existing requirements in a way that simplified the entire process. He added that North Carolina used this format in making its request and found it to be considerably easier and more efficient.

Within the APD process, conference participants identified cost allocation as a component that may delay federal funding approval and impede service integration. State information systems that support more than one federal program must have a cost allocation plan approved by the federal agencies that provide funding. To receive federal approval, the cost allocation plan must be complete and provide sufficient detail to demonstrate that the costs are allowable and fairly allocated among the various federal and state programs that benefit from the project, including TANF (if applicable).
Within the plan, different methodologies are used to justify the costs for specific objectives, such as eligibility determination. The allocation of costs that must accompany the APD for systems development is usually based on different methodologies than the allocation of costs for systems operations. Federal agencies have not issued guidance on specific methodologies. The cost allocation plans for systems development must be approved by each federal agency expected to provide funding, while the plans for systems operations must be approved by HHS, the lead federal agency. Cost allocation has received more attention from state human services officials under welfare reform because TANF is now subject to rules governing cost allocation that did not apply to AFDC. AFDC was exempted from Office of Management and Budget cost allocation rules based on HHS’ interpretation of the legislative history. Under the exemption, AFDC could be considered the primary program for common costs, such as entering data on applicants’ income and assets, and could cover costs that otherwise would have been allocated to various programs like Medicaid or food stamps. The same is not true under TANF. TANF funds may be used to pay for shared systems only to the extent that the TANF program benefits from the systems, so they cannot cover common costs, but only a proportion of these costs in shared systems. As part of the transition from AFDC to TANF, HHS requested that states submit new public assistance cost allocation plans that would take effect July 1999 for most states. Some conference participants cited a need for more guidance or flexibility on acceptable cost allocation methodologies. In his presentation on the development of Utah’s UWORKS project, Russell Smith said that obtaining approval for the cost allocation plan took considerably more time and effort than originally estimated. 
Utah state officials spent 6 months negotiating an acceptable cost allocation plan with federal officials for the project, which used funds from Labor’s One-Stop grants, TANF funds, and food stamp employment and training funds. Bill Cox identified inflexible cost allocation methodologies as a problem in his presentation on North Carolina’s Business Process Reengineering Project. He said that while project costs are commonly allocated based on the size of program caseloads, the state did not think it was appropriate to use this basis for its reengineering project. He explained that while the state’s TANF caseload has decreased in recent years, the size of the caseload does not accurately represent the amount of time that caseworkers actually spend on TANF cases. The state proposed using a cost allocation methodology based on the amount of time caseworkers spent on different programs and projects. However, while CMS and the Food and Nutrition Service had no comments on this change in methods, the Administration for Children and Families had reservations and indicated that the preferred method is caseloads, according to Cox. Cox also maintained that more guidance is needed with respect to appropriate cost allocation methodologies in complex projects with multiple phases.

In their presentation, Software Productivity Consortium president Werner Schaer and State Information Technology Consortium president Bob Glasser highlighted project management as a key challenge for systems modernization. They explained that in their extensive consulting work on a wide range of state information systems projects, the major problems they observed have involved issues other than technology. The primary causes of these problems are a lack of wide-ranging management experience with information technology, a lack of management experience with large and complex systems, and insufficient user participation in project processes.
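The difference between the two allocation methods, and why it mattered to North Carolina, can be illustrated with a small sketch. All figures below are invented for illustration; the point is that a program whose caseload share understates its share of caseworker time pays a larger portion of common costs under a time-based method.

```python
# Hypothetical illustration of two cost allocation methodologies.
# All figures are invented; they do not reflect any state's actual costs.

def allocate(total_cost, weights):
    """Split a common cost across programs in proportion to weights."""
    total = sum(weights.values())
    return {prog: total_cost * w / total for prog, w in weights.items()}

common_cost = 1_000_000  # shared systems cost to be allocated

# Caseload-based method: shares follow each program's caseload size.
caseloads = {"TANF": 20_000, "Medicaid": 50_000, "Food Stamps": 30_000}

# Time-based method: shares follow caseworker hours per program; TANF's
# share rises because time per case exceeds its caseload share.
staff_hours = {"TANF": 45_000, "Medicaid": 35_000, "Food Stamps": 20_000}

by_caseload = allocate(common_cost, caseloads)
by_time = allocate(common_cost, staff_hours)

print(by_caseload["TANF"])  # 200000.0 -- TANF pays 20% under caseloads
print(by_time["TANF"])      # 450000.0 -- but 45% under staff time
```

Under either method the program shares sum to the full common cost; the dispute described above is only over which weights fairly represent each program's benefit from the shared system.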
They added that most firms that are dependent on software development for their core business have learned significant lessons about how to manage the development and deployment of large, complex software systems. Yet in their view, the states, as a general rule, are very early on this learning curve and could benefit from the lessons that the industry has learned. Information technology contractor representatives from HSITAG echoed these themes in their presentation. For example, they explained that HSITAG members have encountered situations in which states have chosen proven program managers but failed to provide training to help them become successful managers of information technology projects. HSITAG presenters emphasized that as systems projects grow to span multiple programs and increase in complexity, it is important to use proven methods for promoting regular communication among project stakeholders, predicting system impacts, and defining and achieving results. Georgia chief information officer Larry Singer commented that the project management challenges faced by states are similar to those described in GAO testimony on the information system challenges facing the federal government. Some states have found it difficult to attract and retain staff with the necessary expertise in information technology because these specialists command high salaries and technology is changing so rapidly. For example, due to government salary limits, it is hard to compete for database analysts who can earn $150 to $200 an hour in the private sector, according to Russell Smith. Private contractors also may face staffing problems, lacking the expertise required for specific work they have agreed to undertake or reassigning experienced staff to other work before projects are completed. Conference participants identified numerous strategies to improve state information systems and facilitate service integration. 
By identifying broad roles that each of the following sectors could play—the Congress, federal agencies, states and localities, and information technology contractors— they affirmed that diverse groups can contribute to making progress in this area. In addition, participants developed more detailed proposals of actions that could be taken to address challenges for systems modernization and facilitate service integration. The majority of these proposals pertain to the challenges of enhancing collaboration among different levels of government and simplifying approval processes for obtaining federal funding. Table 3 summarizes conference participants’ suggestions about the roles that different sectors could play in facilitating systems modernization and some of the challenges associated with fulfilling these roles. For example, in addition to authorizing funding for systems demonstration projects, the Congress could play a broad supportive role in helping remove barriers and promoting systems modernization as it obtains additional knowledge of information systems trends and needs. A key challenge in fulfilling these roles is how organizations should target their efforts to better inform the Congress of needs and trends in this area. Beyond their roles as regulators, federal agencies could help states work together to develop information systems and share their models with other states. State and local governments, which are on the front lines of system design and operation, could facilitate progress by developing model information systems and testing innovative system linkages. Information technology contractors could use their unique perspectives and expertise to play a range of educational roles, such as helping states and localities improve their management of information systems projects. 
Conference participants, working in small discussion groups, proposed numerous actions to address systems modernization and facilitate improvements in state information systems for human services. These proposals are summarized in table 4. The proposals vary in their scope and specificity and in whether they would require legislative or regulatory changes to be implemented. Some of the proposals are described more fully in papers presented at the conference. However, the list of proposals does not represent a consensus of participants. Participants brought diverse perspectives to the issues examined at the conference and did not have time to discuss each proposal in detail or systematically assess the merits or relative priorities of the various proposals. Nonetheless, this list of proposals represents a rich source of potentially useful ideas for improving the development of information systems for human services and thus merits further analysis and discussion.

Many of the proposals pertain to enhancing strategic collaboration among different levels of government, and these proposals present various approaches to this objective. For example, several proposals focus on informing federal or state political leaders about, and involving them in, issues related to systems modernization, such as by holding a congressional hearing on integrated information technology for human services. Other proposals would create a forum for intergovernmental collaboration by creating an institute for the management of human services information systems or establishing federally funded systems demonstration projects to integrate state and local services. Other proposals are intended to minimize the occurrence of perceived adverse effects on state information systems resulting from federal legislation.
The proposals related to improving the federal funding process also encompass a wide range of approaches, ranging from making incremental changes to the APD process to creating a federal block grant for human service information systems. Several proposals call for replacing the APD process—in one case with a process in which states’ information systems plans would be reviewed as a component of their overall program plans and in another with a process based on states’ certified capacity to manage information systems. Another proposal suggests a negotiating procedure that could be used to develop an acceptable replacement for the APD process.

There is an effort underway to implement changes to address one of the broad challenges identified by conference participants: simplifying the approval process for obtaining federal funding. Partly in response to a recommendation in GAO’s April 2000 report on information systems, a federal interagency group has been established and is focusing its attention on the APD process. Rick Friedman of CMS, who chairs the group, gave conference participants a status report on the work of the group. He said that the interagency group includes representatives from five HHS offices and the U.S. Department of Agriculture’s Food and Nutrition Service. The group has met several times to examine the APD process, has consulted with state officials, and has formulated some recommended changes, but the proposed changes have not been approved by the respective federal agencies.

We are sending copies of this report to appropriate congressional committees; the Secretary of Health and Human Services; the Secretary of Agriculture; the Secretary of Labor; and other interested parties. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7215. Other GAO contacts and staff acknowledgments for this report are listed in appendix III.
Realizing The Promise Of Technology: A Conference On Modernizing Information Systems For Human Services

Sponsored by: U.S. General Accounting Office, The Nelson A. Rockefeller Institute of Government, and the Welfare Information Network (The Finance Project)
June 28 and 29, 2001, in Reston, Virginia

With its heightened emphasis on employment and time-limited assistance, welfare reform significantly expanded the information needed to support activities ranging from integrated service delivery by front-line caseworkers to program performance monitoring by administrators and oversight agencies. To meet such needs, automated systems must be able to share data across the numerous programs that serve low-income families, such as Temporary Assistance for Needy Families, Medicaid, child care, job training, vocational rehabilitation, and child welfare. For 3 years, members of the GAO/Rockefeller Institute Working Seminar on Social Program Information Systems have met regularly to study system capabilities, obstacles to modernization, and strategies to facilitate progress. In April 2000, GAO issued a report that identified major gaps in the capabilities of state automated systems to meet information needs for welfare reform. This conference will build on prior work by providing diverse perspectives on key issues and options. To help develop a literature in this area, the presenters at this conference will write papers that we plan to publish, along with an overview of conference proceedings. Attendance will be by invitation only, and conference participants will include congressional staff, federal and state program and information technology managers, welfare researchers, information technology vendors, and others. A key objective will be to tap this collective expertise by having participants take part in breakout sessions each day.
Participants will consider proposals for actions that could be taken in four key sectors to facilitate systems modernization: the Congress, federal agencies, states and localities, and information technology vendors. We will then determine the level of consensus for these proposals. By documenting current knowledge and highlighting collaboratively developed proposals—an action agenda—the report issued from this conference should provide the Congress, Administration, and states and localities with timely suggestions pertinent to the reauthorization of welfare.

WELCOME AND CONFERENCE OVERVIEW
Cynthia Fagnoni, General Accounting Office (GAO), and Richard Nathan, Rockefeller Institute of Government

THE NEED FOR SYSTEMS MODERNIZATION
Chair: Barbara Blum, Research Forum on Children, Families, and the New Federalism

The Capabilities of State Automated Systems to Meet Information Needs in the Changing Landscape of Human Services
Andrew Sherrill, GAO
http://www.gao.gov/special.pubs/GAO-02-121/ap1.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap2.pdf

The Need to Align Federal, State, and Local Technology Investments: A Local Perspective
Sandra Vargas, County Administrator, Hennepin County, Minnesota, and Costis Toregas, Public Technology Incorporated
http://www.gao.gov/special.pubs/GAO-02-121/ap3.pdf
Reactor: Thomas Gais, Rockefeller Institute of Government

POSSIBLE APPROACHES FOR THE FUTURE
Chair: Judith Moore, National Health Policy Forum

Re-engineering the Approach by Which the Federal Government Approves and Monitors the Creation of State Human Services Information Systems
Jerry Friedman, Texas Department of Human Services, and John Cuddy, Oregon Department of Human Resources
http://www.gao.gov/special.pubs/GAO-02-121/ap4.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap5.pdf

Federalism and the Challenges of Improving Information Systems For Human Services
Richard Nathan and Mark Ragan, Rockefeller Institute of Government
http://www.gao.gov/special.pubs/GAO-02-121/ap6.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap7.pdf

Innovations in Technology and Project Management Practices That Can Improve Human Services
Representatives from the Human Services Information Technology Advisory Group
http://www.gao.gov/special.pubs/GAO-02-121/ap8.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap9.pdf

Lessons Learned Helping Organizations Make Smart Information Technology Decisions
Werner Schaer, Software Productivity Consortium, and Robert Glasser, State Information Technology Consortium
http://www.gao.gov/special.pubs/GAO-02-121/ap10.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap11.pdf
Reactors: Joseph Leo, Science Applications International Corporation, and Bruce Eanet, Employment and Training Administration, U.S. Department of Labor

The Oregon Experience and Looking to the Future
Gary Weeks, Director of Human Services Reform, Annie E. Casey Foundation (former director of the Oregon Department of Human Resources)
http://www.rockinst.org/publications/pubs_and_reports.html

BREAKOUT SESSIONS
Participants are divided into the following groups to discuss the historical involvement, role, and special challenges of that sector in facilitating systems modernization.
Group 1: The Congress
Moderator/Reporter: Elaine Ryan, American Public Human Services Association, and Gregory Benson, Rockefeller Institute of Government

Group 2: Federal Agencies
Moderator/Reporter: Rick Friedman, Centers for Medicare and Medicaid Services, and Richard Roper, The Roper Group, New Jersey

Group 3: States and Localities
Moderator/Reporter: Lorrie Tritch, Iowa Department of Human Services, and Michael Rich, Emory University

Group 4: Information Technology Vendors
Moderator/Reporter: Vicki Grant, Supporting Families After Welfare, and Robert Stauffer, Deloitte & Touche Consulting Group

PLENARY SESSION: REPORTS FROM BREAKOUT GROUPS AND DISCUSSION OF THEIR IDEAS
Discussion Leader: Barry Van Lare, Welfare Information Network

DINNER

STATE AND LOCAL EXPERIENCES
Chair: Sigurd Nilsen, GAO

Wisconsin’s System Initiatives for Eligibility and Work-Based Programs
Paul Saeman, Wisconsin Department of Workforce Development
http://www.gao.gov/special.pubs/GAO-02-121/ap12.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap13.pdf
http://www.gao.gov/special.pubs/GAO-02-121/ap14.pdf
http://www.gao.gov/special.pubs/GAO-02-121/ap15.pdf

One Ease E-Link: New Jersey’s Pursuit to Establish an Electronic, Multi-Tooled Network for the Delivery of Coordinated Social, Health And Employment Services
William Kowalski, New Jersey Department of Human Services
http://www.gao.gov/special.pubs/GAO-02-121/ap16.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap17.pdf

Utah’s Development of a One-Stop Operating System
Russell Smith, Utah Department of Workforce Services
http://www.gao.gov/special.pubs/GAO-02-121/ap18.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap19.pdf

Reengineering Business Processes to Integrate the Delivery of Human Services in North Carolina
Bill Cox, North Carolina Department of Health and Human Services
http://www.gao.gov/special.pubs/GAO-02-121/ap20.pdf
Briefing charts: http://www.gao.gov/special.pubs/GAO-02-121/ap21.pdf
Reactor: Rachel Block, Centers for Medicare and Medicaid Services

Participants are divided into the same four groups in which they participated the previous day. Building on their previous discussions, they develop proposals for actions that could be taken to facilitate systems modernization. However, participants are not limited to any particular sector (e.g., federal agencies) in developing their proposals.

Elizabeth Caplick also helped arrange the conference that resulted in this report.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 replaced the Aid to Families With Dependent Children program with a block grant to states that provides Temporary Assistance for Needy Families (TANF). TANF strongly emphasizes work and job preparation and sets a five-year lifetime limit on federally funded TANF assistance to adults. To meet information needs for welfare reform, information systems must be able to share data across various programs, including TANF, Medicaid, job training, child care, and vocational rehabilitation. However, previous GAO studies found major gaps in states' information systems. Most of the local TANF administrators in 15 states surveyed by GAO reported that their current systems provide half or less of the information needed to manage individual cases, plan appropriate services for the caseload, and monitor overall program performance. The administrators are missing information because some of the systems used do not share data on TANF recipients, which constrains the ability of case managers to arrange and monitor the delivery of services. Five states--New Jersey, North Carolina, Oregon, Utah, and Wisconsin--are modernizing their information systems to take advantage of recent technological advances. These initiatives have expanded their data-sharing capabilities to enhance program management and service integration. Three key challenges confront systems modernization: enhancing strategic collaboration among different levels of government, simplifying the cumbersome approval process for obtaining federal funding for information systems, and obtaining staff expertise in project management and information technology.
The OIC Program is one of IRS’s collection programs to resolve delinquent tax accounts. Taxpayers who do not pay their taxes in full when they file their tax returns or when IRS determines that they owe additional taxes are subject to IRS’s collection process. The collection process begins when IRS sends the taxpayer a bill demanding full payment. For taxpayers who are unwilling or unable to pay, IRS may take enforcement action through liens, levies, or seizures of property; place the account in a temporary inactive status; or refer the case to IRS counsel for litigation. By law, IRS has 10 years from the date of assessment to collect delinquent taxes from a taxpayer. Taxpayers who are willing to pay may qualify for an installment agreement, which allows payments to be made over time; taxpayers who cannot afford to pay their full liability may be eligible for an offer in compromise. Section 7122 of the Internal Revenue Code gives IRS authority to settle tax debts through compromises, that is, by accepting less than full payment. Historically, IRS’s compromise authority has been limited to cases where there was doubt as to liability or doubt as to collectibility. In July 1999, IRS issued temporary regulations allowing for a third type of compromise when there is no doubt as to liability or collectibility but when compromising the taxes would promote effective tax administration. IRS can accept the following types of compromise. A compromise based on doubt as to liability can be accepted when there is a dispute that the tax liability is correct. A compromise based on doubt as to collectibility can be accepted when (1) it is unlikely that the tax liability can be collected in full and (2) the amount of the taxpayer’s offer reasonably reflects collection potential—the net equity of the taxpayer’s assets plus the amount that IRS could collect from the taxpayer’s future income. 
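The doubt-as-to-collectibility test described above reduces to simple arithmetic: net equity plus collectible future income. The sketch below illustrates that calculation; the dollar figures, the 48-month horizon, and the function name are illustrative assumptions, not IRS parameters.

```python
def collection_potential(net_equity, monthly_disposable_income, months):
    """Reasonable collection potential: the net equity of the taxpayer's
    assets plus the amount IRS could collect from future income (modeled
    here as monthly disposable income over an assumed horizon)."""
    return net_equity + monthly_disposable_income * months

# Hypothetical taxpayer: $8,000 in net equity, $150 a month in disposable
# income, and an assumed 48-month collection horizon.
rcp = collection_potential(8_000, 150, 48)   # $15,200
# An offer "reasonably reflects collection potential" when it is roughly
# this amount or more; an offer well below it would ordinarily not be
# acceptable on collectibility grounds alone.
```

As the report notes, an offer below this figure may still be accepted when special circumstances, such as advanced age or serious illness, apply.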
Because IRS’s policy is to allow taxpayers sufficient resources to provide for necessary living expenses, if special circumstances exist, such as advanced age or serious illness of the taxpayer, IRS may accept an offer for an amount that is less than what could be collectible based on the taxpayer’s financial condition. A compromise based on effective tax administration can be accepted only when (1) there is no dispute about the tax liability and (2) the taxpayer has sufficient resources to fully pay the tax but collection of the liability in full would either create an economic hardship or be detrimental to voluntary compliance. As illustrated in figure 1, the offer process starts when an offer application is submitted by a taxpayer or his or her representative. The offer must be supported by a current statement of the taxpayer’s financial condition, including data on assets and liabilities and a monthly income and expense analysis. If the taxpayer has not filed all required federal tax returns or is in bankruptcy, the offer is not considered workable and will be returned to the taxpayer. If the offer is eligible for consideration and the offer package is incomplete, IRS will contact the taxpayer and attempt to obtain the missing information. If the taxpayer does not provide the requested information, the offer application will be returned to the taxpayer. Once an offer package is complete, IRS determines whether the offer is acceptable by reviewing and verifying the taxpayer’s financial data. The verification includes a review of prior-year tax returns as well as records showing the taxpayer’s assets, bank accounts, and personal and real property. The taxpayer may be asked to provide additional documentation to verify financial or other information. If the financial statement becomes older than 12 months while the offer is being processed, the taxpayer must be contacted to update the information. 
When an offer is unacceptable, IRS gives the taxpayer the opportunity to submit an amended offer, withdraw the offer application, or seek an alternative resolution to the case. When an offer is rejected, the taxpayer will be notified in writing after an independent administrative review of the proposed rejection. The letter will explain the reason for the decision and give instructions on how the taxpayer may appeal the decision. If an offer application is returned because the taxpayer did not provide all requested financial information, IRS’s policy is to conduct an independent administrative review of the offer application before returning it to the taxpayer. After an offer is accepted by IRS, the taxpayer will be notified in writing and given instructions on how to make the agreed payments. IRS allows taxpayers three payment options—an immediate payment (within 90 days of acceptance); a short-term deferred payment plan (after more than 90 days but within 2 years of acceptance); or a deferred payment plan (during the remaining statutory period for collecting the tax). Another way that IRS collects delinquent taxes is through installment agreements. Under an installment agreement, the taxpayer remains obligated to pay the entire tax liability and agrees to do so in installments over a period of time not to exceed the remaining statutory period allowed IRS by law to collect the tax liability, plus a 5-year extension. Interest continues to accrue on the unpaid balance. IRS may periodically review a taxpayer’s financial condition and pursue further collection action if the taxpayer’s ability to pay increases in the future. To determine why the inventory of cases and case processing times have continued to grow, we reviewed and analyzed OIC Program data from IRS statistical reports; reviewed OIC policies and procedures, program documents, and changes mandated by the Restructuring Act; and interviewed IRS officials. 
Since offers based on doubt as to liability are not processed by collection staff and represent less than one percent of all offers, we omitted them from our review. We did not check the reliability of IRS’s program data in the automated OIC system, the collection time reporting system, and the OIC quality measurement system. To assess whether IRS’s current initiatives for managing the OIC Program will reduce inventory and processing times, we analyzed IRS’s bases for the assumptions underlying the initiatives. As part of our evaluation, we interviewed IRS officials and reviewed relevant program documents, data, and studies by an outside contractor. Because the success of the initiatives depends in part on how well they are managed, we assessed IRS’s goals and evaluation plans. For criteria for this assessment, we relied on past GAO reports on performance management and IRS’s guidance on program evaluation. To determine whether IRS is fulfilling the requirements of the Restructuring Act in terms of independently reviewing all proposed offer rejections, considering the facts and circumstances of each case, and not rejecting offers from low-income taxpayers solely on the basis of the amount offered, we reviewed relevant laws, regulations, and program guidance; studies by TIGTA; and reports by the OIC quality review program, IRS’s appeals office, and the National Taxpayer Advocate. We also interviewed IRS officials. To determine the extent to which IRS has information on how the policy change eliminating partial payment installment agreements affects taxpayers and to evaluate IRS’s plan for a new partial payment installment agreement program, we interviewed installment agreement program officials and officials with responsibility for developing IRS’s legislative proposal relating to installment agreements. 
We also reviewed installment agreement policies and procedures, program documents and data, IRS’s legislative proposal, and examples of circumstances in which taxpayers may not qualify for either an installment agreement or an offer. We performed our work at IRS’s national headquarters and its Small Business and Self-Employed Division headquarters; IRS offices in Oakland and Fresno, California, and in Austin, Texas; IRS centers in Fresno and Austin; IRS’s appeals office; the National Taxpayer Advocate’s office; and the OIC quality review program in Atlanta, Georgia. The offices and centers in California and Texas were judgmentally selected because of their location and experience with offers. We also reviewed a judgmentally selected sample of accepted, rejected, and returned offers in the offices we visited. We did our work from May 2001 through January 2002 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue. His written comments are discussed near the end of this report and are reprinted in appendix III. In addition, the National Taxpayer Advocate provided written comments, which are reprinted in appendix IV. OIC inventory and processing times grew, largely because increases in staffing were outpaced by the effects of program changes. Between fiscal years 1997 and 2001, the inventory of unresolved offers almost tripled and the percentage of offers processed within 6 months dropped from 64 percent to 32 percent. Program changes, some initiated by IRS and some mandated by the Restructuring Act, contributed to increases in the demand for offers, the number of processing steps, and the number of staff hours needed to process a case. Despite significant increases in the staff devoted to the OIC Program, IRS was unable to close as many cases as it received. 
During fiscal years 1997 through 2001, IRS’s ending inventory of offers, or the number of cases still unresolved at the end of the fiscal year, grew from about 32,300 to about 94,900 offers. As figure 2 shows, most of the growth occurred during fiscal years 1999 and 2000, when the inventory rose by almost 50,000 offer cases. IRS measures its timeliness in processing offers by the percentage of offers completed within 6 months of the date that the offer is accepted for investigation. Our analysis of IRS data showed that from fiscal years 1997 through 2001, the percentage of cases that were closed within 6 months dropped from 64 to 32 percent, the percentage that were closed within 6 to 12 months grew from 29 to 43 percent, and the percentage closed after more than 12 months rose from 7 to 25 percent, as illustrated by figure 3. Although IRS does not routinely report the average number of days it takes to close a case, data from an IRS sample of closed offer cases showed that, on average, it took about 292 days to close an offer case during fiscal year 2000 and about 312 days to close an offer case during fiscal year 2001. Other data describing the results of the OIC Program are included in appendix I. The data include the dollar amount accepted in compromise and the amount of the total tax liability compromised. Program changes contributed to the growth in inventory and processing times. First, program changes increased the demand for offers, as measured by the number of workable offers, or new offers that meet IRS’s criteria for processing. Second, some changes increased the complexity of the offer process, resulting in more processing steps and staff hours to process a case. Our analysis of IRS’s data showed that the number of workable offers doubled over the last 5 years, from about 51,700 offers in fiscal year 1997 to about 104,500 in fiscal year 2001, as illustrated by figure 4. 
According to IRS officials, the following program changes, some initiated by IRS and some mandated by the Restructuring Act, increased the demand for offers. Increase in publicity. In response to a Restructuring Act requirement that IRS inform taxpayers about the availability of offers in compromise as an option for resolving tax debts, the agency undertook outreach and education efforts. According to IRS officials, these efforts, along with media coverage, brought the revised program to the attention of taxpayers and practitioners who represent taxpayers. IRS officials told us that practitioners, in turn, have extensively marketed the revised OIC Program as a way for taxpayers to settle their tax debts for “cents on the dollar.” Change in processability criteria. Before 1999, IRS would not process an offer application that was incomplete. In 1999, IRS made all offer applications eligible for processing, except those from taxpayers in bankruptcy proceedings or from taxpayers who had not filed all required federal tax returns. Instead of returning an incomplete offer to the taxpayer, IRS started working with taxpayers to obtain the information needed to process the offer. This change in processability criteria increased the number of workable offers. Elimination of partial payment installment agreements. Prior to 1998, IRS allowed partial payment installment agreements with payment periods that could last 15 years and longer. In 1998, IRS counsel determined that IRS did not have the authority to enter into installment agreements that would not fully pay the liability within the 10-year statutory collection period plus a 5-year extension. IRS officials stated that as a result of this decision, more taxpayers turned to the OIC Program. However, IRS officials told us they could not quantify the impact that this policy change has had on the demand for offers. Expanded bases for accepting offers to include effective tax administration. 
In response to the Restructuring Act, IRS expanded the bases for considering offers to include effective tax administration (ETA), which requires considering such factors as equity and hardship. Although the expansion had the potential to increase the demand for offers, IRS officials told us that they do not track the number of offers submitted on the basis of these factors. However, IRS data on the number of offers accepted by type showed that there were 261 ETA offers in fiscal year 2000 and 272 ETA offers in fiscal year 2001, suggesting that the impact on demand may have been small. More payment options. In 1999, IRS made available a long-term deferred payment option that allows taxpayers to pay the offer amount over the remaining statutory collection period. This change had the potential to increase demand, but IRS officials could not quantify the impact. Three of the program changes that increased demand also increased the number of processing steps and staff hours needed per case. Changing the processability criteria resulted in IRS staff’s spending time to work with taxpayers to complete offer applications. Expanding the bases for accepting offers means that before rejecting an offer based on doubt as to liability or doubt as to collectibility, IRS must determine whether the factors considered under ETA or special circumstances criteria apply. Making more payment options available increased the amount of staff time required to calculate offer amounts. However, IRS could not quantify the impact of these changes. According to IRS officials, other program changes also added the following steps to the process and increased staff hours per case. Independent administrative review. The Restructuring Act required that IRS establish procedures for an independent administrative review of any proposed offer rejection before notifying the taxpayer. IRS extended the review to include offers to be returned for failure to provide requested financial information. 
Consequently, the independent administrative reviews increased staff hours per case and added a step to the offer process for rejected offers and returned offers. However, IRS does not currently track the time spent on offers by the independent reviewers. Revised offer form. IRS had to revise the offer form to reflect changes required by the Restructuring Act. Since an offer in compromise is a legal contract between the taxpayer and IRS, the form had to be revised so that the offer contract and the acceptance letter would have the same terms. Form revisions added a step to the offer process for taxpayers who had to resubmit their offers on current forms. Between fiscal years 1997 and 2001, IRS took several actions to manage the growing inventory and processing time, including shifting significant numbers of staff to the OIC Program from other field collection activities. However, the growth in staffing was outpaced by the increases in demand and complexity of case processing in terms of processing steps and hours needed to process a case. Despite more than doubling direct staff time and taking other actions, IRS was unable to reduce inventory and processing times. Staff hours grew. IRS officials told us that as the demand for offers grew, the cost of staffing the program grew as well. IRS reassigned staff to the OIC Program from other collection programs, such as delinquent account and tax return investigations. As table 1 shows, the number of direct collection field staff hours charged to the OIC Program more than doubled, from about 728,000 hours in fiscal year 1997 to about 1.6 million hours in fiscal year 2001. At the same time, the number of direct hours charged to all field collection activities declined by about 30 percent, from about 12.7 million hours in fiscal year 1997 to about 8.9 million hours in fiscal year 2001. 
With the growth in OIC Program hours and the decrease in total collection hours, the share of total direct field collection staff hours devoted to the OIC Program grew from about 6 percent in fiscal year 1997 to 18 percent in fiscal year 2001. IRS officials told us that having devoted such a large proportion of collection resources to the OIC Program may be negatively impacting other collection programs. As shown in table 2, while the percentage of OIC Program staff categorized as professionals has decreased slightly between fiscal years 1997 and 2001, they continued to account for about three-quarters of offer staff. These staff, generally revenue officers at the GS-11 and GS-12 grade levels, investigate offers, negotiate with taxpayers, and make the decision to reject or accept an offer. During the same time period, the percentage of lower-grade paraprofessional staff increased slightly. These staff, generally tax examiners at the GS-4, GS-5, and GS-6 grade levels, perform less complex tasks, such as working with taxpayers to prepare a complete offer application. In addition to increasing staff, IRS took other actions to improve the efficiency of the program. These actions included creating an offer specialist position for revenue officers to exclusively process offers and make the program more consistent; revising offer processing procedures, including streamlining investigations of certain offers with liabilities of $50,000 or less; and revising the offer application package so that taxpayers can better understand what documents must be submitted for IRS to consider an offer. In addition, IRS used an outside contractor to conduct a review of the OIC Program to find ways to improve the offer process and reduce the inventory of unresolved offers. Some of the initiatives resulting from these efforts will be discussed later. Demand exceeded the number of offer cases closed by staff. 
As shown in figure 5, there was a large increase in offer dispositions after fiscal year 1999. However, in spite of the increases in offer staffing and dispositions, the demand for offers, as measured by the number of workable offers received each year, generally exceeded the number of cases staff closed. Increases in staff hours per case, caused in part by additional processing steps, contributed to the inability of staff to keep up with demand. IRS has begun implementing a new strategy for processing offers, consisting of several separate initiatives intended to reduce inventory and processing time. Less complex offers will be processed centrally using standardized procedures intended to reduce staff hours per case and allow processing by lower-grade staff, while more complex offers will continue to be processed by higher-grade professional staff. Overall, IRS is projecting that standardization will allow fewer, lower-grade staff to process more cases. The accuracy of IRS’s projected results for the initiatives is uncertain. Many of the underlying assumptions have little empirical basis—in some cases, program managers had no choice but to rely on their professional judgment. This uncertainty underscores the importance of timely program performance data and evaluations. However, as of January 2002, IRS had not completed plans for either a performance data system or evaluations for most of the initiatives making up the new strategy. IRS has two key initiatives under way to reduce inventory and processing times—centralized processing and fast track processing. Centralized processing will use lower-grade staff to process new, less complex cases centrally and free up higher-grade staff for other field collection activities. Fast track processing will use both higher- and lower-grade field staff to close all less complex cases in the existing field inventory during fiscal year 2002. 
IRS centralized the locations where all offers are received and initially processed into two IRS centers—Brookhaven and Memphis—in August 2001. The new process is illustrated in figure 6. Lower-grade staff, known as process examiners (GS-4, GS-5, and GS-6), initially process new offer applications, determining eligibility for consideration and assembling case files. Other lower-grade staff, known as offer examiners (GS-7 and GS-9), work the less complex offers to completion using standardized procedures. The criteria for centralized processing include a tax liability of less than $50,000; wage or self-employment income; no employees; personal income tax, penalty assessment, or employment tax liability; and simple assets such as a personal residence. More complex new offers are sent to the field where higher-grade offer specialists (generally GS-11 and GS-12) work the cases to completion. These cases take longer to investigate and may require face-to-face meetings with the taxpayer. IRS began using fast track processing in January 2002. Fast track cases must meet essentially the same criteria as cases processed centrally. However, IRS has designed fast track processing to take less time than centralized processing. Under fast track, field staff would spend less time verifying a taxpayer’s financial information and taxpayers would not be required to provide supporting documentation. Instead, IRS would rely on electronically available data to verify financial information. IRS expects to stabilize inventory and keep up with the flow of new cases by the end of fiscal year 2002. As table 3 shows, IRS is projecting that centralized and fast track processing would reduce fiscal year 2002 ending inventory to 48,000 cases—a level that IRS expects to maintain through fiscal year 2004. IRS projects that it can maintain this inventory level while reducing total full-time equivalent positions (FTE) and using lower-grade staff. 
More specifically, IRS projects that in fiscal year 2004 it will close 40 percent more cases using 10 percent fewer, and generally lower-grade, FTEs than in fiscal year 2001. IRS’s projections for centralized processing and fast track processing were based on a series of assumptions regarding offer submissions, percentage of offers meeting centralized criteria, number of cases meeting fast track criteria, direct staff hours needed per case, and staffing levels. Specifically, IRS made the following assumptions. Offer submissions, or new offer applications, would grow at a rate of 10 percent a year from fiscal years 2002 through 2004. Fifty-one percent of the submissions would meet the criteria for centralized processing, and the percentage would increase to 70 percent in fiscal year 2003. Thirty-three thousand cases in the field inventory at the beginning of fiscal year 2002 would meet fast track criteria. Staff in the centralized sites would take an average of 2 hours to determine processability and assemble each new case and an average of 6 hours to close those cases meeting the centralized criteria. Staff in the field would take an average of 4 hours to close cases in the existing field inventory that meet fast track criteria. Six hundred fifty FTEs would be needed in the centralized sites in fiscal years 2002 and 2003. These FTEs would be phased in during fiscal year 2002 as new staff received formal and on-the-job training. Approximately 225 offer specialists and tax examiners could close all fast track cases in the field during fiscal year 2002. Whether these projections accurately predict IRS’s future performance is uncertain. While the future is always uncertain, the extent of the uncertainty about the projections may be significant. As discussed below, some of the underlying assumptions were based on the experience of a pilot program, others lacked a basis that could be verified, and some have changed over time. 
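Applied mechanically, the assumptions above imply workload figures like the following. This sketch uses the fiscal year 2001 workable-offer count of about 104,500 (reported earlier) as the projection baseline, which is our assumption; IRS's own baseline and rounding may differ.

```python
# Hedged sketch of the arithmetic implied by IRS's stated assumptions.
# The 104,500 baseline is the FY 2001 workable-offer figure from this
# report; using it as the projection baseline is an assumption.
def project_submissions(base, growth_rate, years):
    """Grow submissions by a fixed annual rate (10 percent assumed)."""
    series = []
    for _ in range(years):
        base = int(base * (1 + growth_rate))
        series.append(base)
    return series

fy2002, fy2003, fy2004 = project_submissions(104_500, 0.10, 3)

# FY 2002: 51 percent of submissions assumed to meet the centralized
# criteria; 2 hours to determine processability and assemble each new
# case, plus 6 hours to close each case worked centrally.
centralized_cases = int(fy2002 * 0.51)
staff_hours = fy2002 * 2 + centralized_cases * 6
```

Even this rough arithmetic shows why small changes in the growth-rate or hours-per-case assumptions translate into large swings in required staff hours.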
For many of the assumptions, there was little empirical basis—in many cases, program managers had no choice but to rely on their professional judgment. Program officials acknowledged the uncertainty. They said that because of escalating inventory, processing time, and costs, they felt they had to take a “calculated risk” and begin implementing the initiatives. Projecting offer submissions. According to OIC Program officials, IRS’s projections for the number of offer submissions that it expects to receive through fiscal year 2004 are based on a 10 percent growth rate. IRS has revised its projections for offer submissions several times. Because the growth rate can fluctuate, and because offer submissions can be affected by factors beyond the control of IRS—such as changes in the economy—the accuracy of the current projections is uncertain. Projecting the percentage of offers meeting centralized criteria. IRS based its assumption for the percentage of cases meeting centralized criteria on a profile of the automated offer in compromise database. On the basis of that profile, IRS estimated that 51 percent of the cases involved liabilities of $50,000 or less. In fiscal year 2003, IRS plans to replace its dollar-based criteria with complexity-based criteria. IRS estimates that as a result of this change, 70 percent of the new offer submissions in fiscal year 2003 will meet the centralized processing criteria. IRS officials said that this percentage was based on professional judgment and would be revised when there is agreement on a definition of “complexity.” As a result, it is difficult to project with any certainty the percentage of offers that would meet the revised criteria. Projecting the number of fast track cases in the field inventory. IRS based its assumption for the number of fast track cases on a qualitative review of a 1-week sample of closed cases selected for its OIC quality review program in April 2001. 
Projecting direct staff hours per case for centralized processing. IRS based its projections for direct staff hours per case on its centralized pilot experience. However, IRS was unable to provide any data from its pilot that would support the number of direct staff hours needed to assemble and close cases at the centralized sites. Projecting direct staff hours per case for fast track processing. IRS based its projections for direct staff hours per case on OIC data and professional judgment. Projecting staffing levels for centralized processing. IRS projected centralized staffing of 650 FTEs based on professional judgment and assumed a 20 percent productivity improvement over that of the pilot. IRS officials told us that the level of staffing for centralized processing was selected to result in processing less complex cases within 6 months. Projecting staffing levels for fast track processing. IRS based its staffing levels for fast track processing on the number of cases meeting fast track criteria and the number of staff hours needed to close a case. IRS has several other initiatives under way or under consideration that are intended to limit the number of new offer submissions, reduce staff hours per case for certain categories of cases, and remove cases from existing inventory. These initiatives include the use of overtime in the field and the centralized sites, procedure and policy changes, and legislative and regulatory proposals. IRS’s projected results for these other initiatives were generally based on the professional judgment of OIC Program officials and their experiences. Because IRS has not had experience with some of these initiatives, IRS officials said they could not project results with any certainty. The possible effects of these other initiatives were not considered in IRS’s projections for centralized or fast track processing. Table 4 summarizes the expected results and status of IRS’s other initiatives. 
Following the table, we provide more detail on each of the initiatives. Overtime in field and centralized sites. IRS used 74,000 hours of overtime for offer work in the field during fiscal year 2001. In fiscal year 2002, IRS plans to continue the use of overtime in both the field and the centralized sites to ensure that projected staffing levels are reached. At the time of our review, the number of hours had not yet been determined and approved. Expanded return authority. To reduce the time that staff spend processing submissions that are not serious offers, IRS expanded its criteria for returning offers to taxpayers. Previously, IRS would make at least two attempts to request additional documentation to verify financial or other information from a taxpayer before an offer would be returned for failure to provide the requested information. As of September 2001, IRS makes only one attempt to request information from a taxpayer before returning the offer. Further, IRS may return an offer if a taxpayer (1) resubmits an offer that is not materially different from a previous offer that was either rejected with appeal rights or returned; (2) resubmits an offer within 1 year of having defaulted and received a termination letter; or (3) filed an offer solely to delay enforcement action after being notified of IRS’s intent to levy or seize. As a result of its expanded return authority, IRS estimated that as many as 15,000 of the cases in its existing inventory would be closed in fiscal year 2002; future submissions would be reduced by as much as 10 percent; and 4 percent of the new offers would be closed more quickly, primarily in the centralized sites. IRS officials told us that these projections were based on professional judgment and that the results would depend on when practitioners and taxpayers learn about IRS’s new procedures. IRS officials told us that they are tracking returned offers and would be able to tell in the future whether these are good estimates. 
As of late November 2001, IRS told us that 700 offers or about 2 percent of submissions had been closed under the expanded return authority.

Quick hits. Under another initiative, known as quick hits, IRS would resolve a case by combining an installment agreement for part of a taxpayer's liability with a currently not collectible status for the remainder. For example, a taxpayer owes $20,000 for 2 years of delinquencies—$5,000 for one year and $15,000 for the other—but he cannot fully pay within the 73 months remaining before the collection statute expires. However, the taxpayer can pay $200 a month, for a total of $14,600. Under quick hits, IRS would take an installment agreement for the $5,000 delinquency and put the other year of delinquency, or $15,000, in a currently not collectible status. The taxpayer would be expected to make payments on the installment agreement but not on the separate delinquency that was put in a currently not collectible status. Based on installment agreement data and professional judgment, IRS estimated that under quick hits, 4,600 cases could be closed from the existing inventory in fiscal year 2002 and future offer submissions could be reduced by up to 5 percent, primarily in the centralized sites. Also, IRS estimated that 5 percent of the future inventory could be closed more quickly, primarily in the centralized sites.

Frivolous offers. To discourage offers aimed at delaying collection action, IRS is requesting legislative authority to establish a $5,000 penalty for frivolous offers. IRS developed its legislative proposal for frivolous offers to supplement its expanded return procedures (discussed above). Based on professional judgment and offer experience, IRS estimates that this proposal, if approved, would generally discourage most abuse, reduce new offer submissions by as much as 15 percent, and close 5 percent of new cases more quickly, primarily in the centralized sites.

Statutory period. IRS is also requesting legislative authority for the collection statute to be suspended when an offer is submitted. As discussed above, by law IRS has 10 years from the date of assessment to collect the delinquent taxes from the taxpayer.
However, when a taxpayer files an offer, the collection statute does not stop while the offer is pending. This has encouraged some taxpayers to file offers as an attempt to delay collection action while the statutes of limitation on collecting their tax debts continue to expire. IRS officials believe this proposal would reduce the number of new submissions. However, IRS cannot quantify the potential reduction in future submissions that might otherwise have been filed to delay collection action.

Counsel review. To reduce processing time, IRS is requesting legislative authority to change the threshold for counsel review of offers. Section 7122(b) of the Internal Revenue Code requires counsel review in all cases where the total liability is $50,000 or more. According to IRS's quality review of a sample of closed offer cases, it took an average of 57.2 days for cases to be sent to and returned from counsel during fiscal year 2001. IRS questioned the added value of the counsel review for offers for liabilities less than $250,000 and has proposed that the threshold be raised from $50,000 to $250,000. If this authority is granted, it would reduce processing time for offers for liabilities between $50,000 and $250,000. An IRS official told us that 31.2 percent of the offers closed in fiscal year 2000 were for tax liabilities between $50,000 and $250,000 and 4.8 percent were for tax liabilities of $250,000 or more.

User fee. To offset the cost of the direct staff hours used to process offers, IRS is requesting legislative authority to charge taxpayers a user fee. Offers from low-income taxpayers and offers based on effective tax administration would be exempt; other taxpayers would need to pay the fee when submitting an offer and would be reimbursed later. Based on IRS's best guess, this proposal, if approved, would reduce new offers by as much as 3 percent.
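The quick hits arithmetic described earlier can be expressed as a short sketch. The dollar figures are the hypothetical example from this report, not IRS data, and the rule of placing the smallest delinquency on an installment agreement is a simplification for illustration:

```python
# Sketch of the quick hits split: if a taxpayer cannot fully pay before
# the collection statute expires, take an installment agreement for one
# delinquency and place the remainder in currently not collectible (CNC)
# status. Figures match the report's hypothetical example.

def quick_hits_split(delinquencies, monthly_payment, months_remaining):
    """Return (installment_total, cnc_total) for a quick hits resolution."""
    total_owed = sum(delinquencies)
    payable = monthly_payment * months_remaining
    if payable >= total_owed:
        # Taxpayer can fully pay before the statute expires; no split needed.
        return total_owed, 0
    # Put the smallest delinquency on an installment agreement and the
    # remainder in CNC status (an illustrative simplification).
    installment = min(delinquencies)
    cnc = total_owed - installment
    return installment, cnc

installment, cnc = quick_hits_split([5_000, 15_000],
                                    monthly_payment=200,
                                    months_remaining=73)
print(installment, cnc)  # 5000 15000 — $200 x 73 = $14,600 < $20,000 owed
```

In the report's example, the taxpayer's maximum payments ($14,600) fall short of the $20,000 owed, so the $5,000 year goes on the installment agreement and the $15,000 year is deferred.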
As of January 2002, IRS had not completed plans for evaluating the effectiveness of most of its offer initiatives, had not completed plans for a performance data system, and had not set program goals based on an evaluation of taxpayer needs, other benefits, and costs. Without such plans and goals, IRS may not be able to determine the effectiveness of the initiatives. Program officials said that they intend to evaluate centralized processing. IRS’s Office of Program Evaluation and Risk Analysis (OPERA) has agreed to conduct an evaluation, but a plan for the evaluation had not yet been developed. Program officials told us, however, that they had put in place measures for centralized processing and that program managers were continually collecting data and making changes as centralized processing was being implemented. Officials said that such monitoring would enable them to know whether centralized processing was meeting IRS’s goals for closing cases within 6 months and for the percentage of cases closed within 6 months. An evaluation plan for the fast track program has been developed by OPERA. According to OPERA officials, the plan is designed to assess fast track as it has been implemented in the field and also to assess whether the offer program database includes sufficient information for effective program management. The planned evaluation is also intended to provide some information useful for deciding whether to expand fast track processing. OIC Program officials stated that they are considering whether to expand fast track processing to cover new, less complex cases, which are processed centrally. It is not clear whether centralized fast track would be the same as the fast track currently being implemented in the field. For example, the mix of high- and low-grade staff used in the field is different from the mix of staff being used centrally. 
In addition, the data being verified electronically in the field are not the same as the data being submitted in new cases. According to OPERA officials, the planned fast track evaluation is also intended to determine whether program results differ because of such variables. OIC Program officials said that they do not plan to develop evaluation plans for their other initiatives. Without such plans, it may be difficult to distinguish the impact of one initiative from that of another. Because actual inventory and processing time could be greater or less than projected, IRS managers may need to decide whether and how to make additional changes to the OIC Program. For example, if results are better than the projections, IRS may have opportunities to reassign some offer staff. If results are worse than projected, other approaches to managing inventory and processing time may need to be considered. Such decision-making would benefit from reliable, timely performance and cost data and evaluations. The Government Performance and Results Act (GPRA) of 1993 and IRS guidance both stress the benefits of first gathering and then evaluating data to help managers understand the factors that influence performance. While reliable, timely performance data and evaluations are always beneficial, the uncertainty about both the results and the costs of the offer initiatives highlights the importance of tracking and evaluating the initiatives' performance. Planning for data collection and evaluation is also important. Systematic attention to the design of data collection and evaluation efforts can help assure the usefulness of the efforts and safeguard against using time and resources ineffectively. Before information is collected, an evaluation plan should specify details, including the data to be collected, data sources, data collection methods, basis for comparing outcomes, and an analysis plan.
We recognize that collecting performance data and conducting performance evaluations have costs. Consequently, the amount of data to collect and the scope and depth of evaluations should be based on the resources required and the benefits of the information. As noted earlier in this report, IRS did not track some data that might be useful for managing the OIC Program or determining the effectiveness of the initiatives. These data might include, for example, the total staff time devoted to the OIC Program, the time taken by the independent administrative review, and the percentage of taxpayers who failed to comply with the terms of their offer, by year of acceptance. Whether such data are worth collecting depends on the extent to which they contribute to better program management or to a better evaluation of the effectiveness of the offer initiatives. As noted earlier, OPERA's evaluation of fast track field processing includes an assessment of data needed for effective program management. Similar assessments of performance data needs for centralized processing and the other initiatives would also contribute to better program management. IRS measures processing time relative to a standard of 6 months, but empirical information has never been used to verify that standard as an appropriate measure of program performance. IRS officials said that the 6-month standard was based on their professional judgment about what IRS could achieve and what taxpayers would accept. In two recent reports, we discussed the benefits of setting service goals after evaluating taxpayer or customer needs, other benefits, and costs. More specifically, we discussed industry guidance for customer service that recommended setting goals based on how long customers are willing to wait for the service, the value of the service to the organization, and the costs of providing the service.
Without goals for offer processing time based on such factors, IRS lacks a yardstick for measuring the effectiveness of the initiatives and lacks criteria for making strategic decisions about issues such as staffing levels. IRS has implemented the following provisions mandated by the Restructuring Act: (1) independently reviewing all proposed offer rejections before notifying taxpayers; (2) considering the facts and circumstances of each taxpayer when determining allowances for monthly living expenses; and (3) not rejecting offers from low-income taxpayers solely on the basis of the amount offered. The Treasury Inspector General for Tax Administration (TIGTA) reviewed IRS's implementation of these Restructuring Act provisions and reported in June 2000 that IRS had modified its offer procedures to carry out the act's requirements. Further, TIGTA, IRS officials, and the National Taxpayer Advocate found no evidence to indicate that IRS was not following the new procedures. The Restructuring Act required that IRS establish procedures for an independent review of any rejection of a proposed offer before the rejection is communicated to the taxpayer. The Restructuring Act also stipulated that these procedures should allow taxpayers to appeal the offer rejection to IRS's Office of Appeals. To implement the requirement, IRS established an independent administrative review process. IRS went beyond the requirements of the Restructuring Act by expanding the review process to include offers being returned because the taxpayer did not provide requested financial information. IRS also modified its internal guidance by adding criteria for the independent review and delivered a 16-hour training course to all independent reviewers. In June 2000, TIGTA reported that IRS had implemented the Restructuring Act requirements for establishing an independent administrative review.
TIGTA based its finding on a survey of IRS field office directors and a review of a random sample of rejected offers submitted after enactment of the Restructuring Act. The survey of field office directors showed that the independent administrative review had been implemented in all field offices. In its review of rejected offers, TIGTA found no evidence that any offer had been rejected without undergoing the administrative review before IRS notified taxpayers of the rejections and their rights to appeal them. In 2001, IRS officials from the Small Business and Self-Employed headquarters and the appeals office told us that they had seen no evidence to suggest that the independent reviews were not taking place. Furthermore, an OIC Program official told us that IRS had added internal controls to its management information system to ensure that an independent administrative review occurs before the taxpayer is notified of the rejection and his or her appeal rights. As a result of these controls, IRS’s letter notifying taxpayers of rejections cannot be system generated until the independent review has been completed and a reason code has been entered into the automated OIC information system. Although TIGTA found no evidence suggesting that the required reviews of rejected offers were not taking place, TIGTA did raise an issue about withdrawn offers. In its June 2000 report and in another report issued in May 2001, TIGTA expressed concern about IRS’s procedures that allow taxpayers to withdraw their offers. If an offer cannot be given favorable consideration, IRS allows the taxpayer to withdraw the offer and advises him or her that in withdrawing the offer, he or she loses any appeal rights. TIGTA believed that taxpayers would be better served were the proposed offer rejection to proceed through the independent administrative review process, because the taxpayer would retain the right to appeal the proposed rejection. 
In response to TIGTA's concern, IRS stated that it believed that allowing for withdrawals serves the interest of both the government and the taxpayer by avoiding unnecessary costs to both parties. IRS reviews the reasonableness of an offer based on the amount the taxpayer is willing to pay given, among other things, the taxpayer's necessary living expenses. In 1995, IRS published national and local schedules that set limits on allowable monthly living expenses. In 1998, Congress directed IRS, in the Restructuring Act, to consider the facts and circumstances of a particular taxpayer's case in determining whether the national and local schedules were adequate. If the facts and circumstances indicated that the use of schedule allowances would be inadequate, the taxpayer should not be limited by the national and local allowances. IRS acted as follows to address the Restructuring Act requirement regarding facts and circumstances.

Issued temporary regulations in July 1999 providing that the applicability of the allowable expense standards would be determined by the facts and circumstances of each taxpayer's case.

Revised the Internal Revenue Manual to provide that the national and local standards would serve as the starting point in evaluating the taxpayer's financial condition. If, however, the facts indicated that use of the scheduled allowances would be inadequate under the circumstances, IRS would allow the taxpayer adequate basic living expenses.

Established criteria to be used by the independent reviewers in determining whether the decision to reject an offer is appropriate. According to the criteria, reviewers must determine whether the offer investigator considered the facts and circumstances of the taxpayer in deciding whether the national and local expense standards were appropriately applied.

Initiated a separate review of a sample of closed offer cases as part of its collection quality review program in March 2000.
According to IRS officials, in the past, few offers had been selected for review in the collection quality review program because the number of offers was small in relation to other types of collection cases. In its June 2000 report, TIGTA found that IRS was considering the facts and circumstances of taxpayers when determining how much should be allowed for monthly living expenses. TIGTA reviewed a random sample of rejected offers to determine whether any offer appeared to have been rejected after the taxpayer claimed that IRS's allowable living expense schedules were insufficient. Also, based on TIGTA's findings, IRS updated its procedures by adding clarifying guidelines on how equity in assets necessary for the production of income or for the health and welfare of the taxpayer's family should be treated in analyzing a taxpayer's offer. As mentioned above, IRS's independent administrative reviewers are responsible for reviewing proposed offer rejections to determine, among other things, whether the facts and circumstances were considered in determining whether the national and local expense standards were appropriately applied. Reviewers told us that when they did not agree with a proposed rejection, it was generally because the decision was not fully documented. To ensure that offers from low-income taxpayers are considered, the Restructuring Act required that IRS not reject offers from low-income taxpayers solely on the basis of the amount offered. In response, IRS revised its internal guidance to provide that an offer may not be rejected solely on the basis of the offer amount. In its review of a sample of rejected offers, TIGTA found no indication that IRS had rejected any offer solely based on the low dollar amount of the offer.
In addition, OIC Program officials, an appeals official, and the National Taxpayer Advocate told us that they had seen no evidence that offers from low-income taxpayers were being rejected solely on the basis of the amount offered. IRS could not produce reliable data on the effects of the 1998 IRS counsel determination that IRS did not have the authority to enter into installment agreements that would not fully pay the tax liability before the collection statute expired. IRS’s legislative proposal that would expressly allow IRS to enter into partial payment installment agreements is broadly worded and leaves considerable discretion to IRS. As of December 2001, IRS did not have a business case, implementation plan, or other written documentation describing features of the new program, including eligibility requirements, potential number of such agreements, monitoring process, staffing needs, information system needs, projected costs, and evaluation plans. In April 1998, IRS counsel determined that IRS did not have the authority to enter into installment agreements that would not provide for full payment of the taxpayer’s liability before the collection statute expired. According to IRS officials, this policy change created a situation in which some taxpayers who were willing to pay some amount would not qualify for either an installment agreement or an offer. Instead, the only option for IRS was to put the account in inactive status, creating, according to IRS officials, a new group of cases for which there was no resolution. An apparent “procedural gap” existed, because offers in compromise or enforcement actions, such as the seizure of assets, were not practical alternatives for some cases in which IRS previously would have accepted a partial payment installment agreement. As of December 2001, IRS lacked reliable data on how the prohibition of partial payment installment agreements affected taxpayers. 
IRS attempted to count the number of taxpayers who entered into partial payment agreements in the past, but sufficiently reliable data were not available to complete the analysis. IRS developed some general data on the potential effects that the policy change had on the installment agreement program in terms of changes in the volumes of cases and tax dollars collected through installments, but it was unable to measure actual effects. IRS officials told us that since 1998, some taxpayers who were denied a partial payment installment agreement might have submitted an offer application. However, IRS cannot quantify the number of such taxpayers, the outcome of their offers, or the increase in the number of submissions that the OIC Program may have received as a result of the installment agreement policy change. Nor was IRS able to provide a sample of actual cases that fell into the procedural gap. For example, in following up on the collection procedural gap, IRS's Small Business and Self-Employed headquarters' officials reviewed 23 cases that the field staff believed had no resolution. The officials concluded that all of the cases could be resolved using existing enforcement authorities. In 2001, IRS drafted a legislative proposal that would amend the Internal Revenue Code to expressly allow IRS to enter into partial payment installment agreements. Under IRS's proposal, section 6159 would be amended to allow IRS to enter into written agreements under which a taxpayer could make payment on any tax in installments if IRS determines that such an agreement will facilitate full or partial collection of the liability. IRS officials said that the new authority to accept partial payment installment agreements would be used only in those narrow circumstances in which IRS's only other option would be to assign the case an inactive status. (See appendix II for a copy of IRS's proposal and examples of what would be accepted as a partial payment installment agreement.)
Officials also said that acceptance of partial payment installment agreements would not prevent IRS from pursuing other collection actions against taxpayers. Specifically, they said that IRS would monitor a taxpayer's income and assets over the life of a partial payment installment agreement. If a taxpayer's income increased, or if assets were accumulated to allow for larger payments, then IRS would demand such payments from the taxpayer. IRS officials said the ability to monitor a taxpayer's income and assets and to demand additional payments was a key difference between the proposed partial payment installment agreement program and the OIC Program. Under the OIC Program, a contractual agreement compromises a taxpayer's liability. The unpaid portion is written off, and IRS agrees to take no further collection action after the taxpayer meets all terms of the offer. IRS's legislative proposal is broadly worded, granting considerable discretion to IRS to tailor the provision's use through regulation. According to IRS officials, although it is their intention to use the provision narrowly, the proposal was intentionally written broadly so that IRS would not have to request a legislative change in order to make policy improvements. As noted by the National Taxpayer Advocate, the lack of specific guidance regarding the appropriate circumstances under which IRS would accept a partial payment installment agreement leaves open the possibility of abuse. The taxpayer advocate endorsed the proposal but suggested that Congress provide guidance as to what factors IRS should consider when entering into partial payment installment agreements. The advocate expressed concern that the availability of partial payment installment agreements could allow certain taxpayers to abuse the system by continuing an affluent lifestyle while encumbered by tax debt.
The advocate also cautioned that taxpayers should not be allowed to enter into partial payment agreements until they demonstrate the willingness and ability to retire their tax debt. Although IRS officials state that the proposed authority to grant partial payment agreements is intended to be used only in narrow circumstances, the legislative proposal, as currently written, offers no provisions to ensure that these agreements are entered into only under appropriate circumstances. IRS has not described how it will evaluate agreements to ensure that revenue officers are not using the partial payment installment agreement when a seizure or an offer is, in fact, a viable alternative. IRS officials said, however, that the collection process leaves little discretion to revenue officers as to when a partial payment installment agreement would be appropriate. As of December 2001, IRS had not developed a business case, implementation plan, or other written documentation describing the features of the proposed partial payment installment agreement program. Specifically, IRS did not have written documentation on key program design issues, such as eligibility requirements for a partial payment installment agreement, the potential number of taxpayers who might request such agreements, or procedures for accepting, rejecting, reviewing, and monitoring agreements. Nor did IRS have documentation on the resources that would be required for the program, including staffing, information systems, and projected costs. Business cases, which would include such information, are commonly used management tools that provide a basis for making resource allocation decisions and for monitoring and evaluating a project’s performance. Such written documentation would provide outside stakeholders, including Congress, useful information about the impact of the legislative proposal and IRS’s capacity to manage the new program. 
Particularly important is the fact that IRS has not developed an evaluation plan to monitor and assess the performance of its proposed partial payment installment agreement program. As noted earlier, both GPRA and IRS guidance emphasize the importance of collecting performance data and analyzing such data to understand the factors that affect performance. Without a mechanism to track performance and evaluate the program, IRS would not have information to guide informed decision-making regarding resource allocations to the program, appropriate staffing levels, and staff productivity or to determine whether the program is operating as intended. As was the case with the OIC initiatives, the lack of information about partial payment installment agreements underscores the importance of program evaluation. Evaluations would give IRS managers a better understanding of program performance and a better basis for considering changes to improve performance. IRS’s Offer in Compromise Program is a necessary element of the agency’s overall collection effort. Because some taxpayers will inevitably be unable to fully pay their tax liabilities, IRS must have a program that can timely and fairly compromise such tax debts. However, a continued increase in the inventory of cases, processing time, and costs would put the effectiveness of the OIC Program at risk. Whether IRS’s initiatives for improving the OIC Program will succeed in reducing inventory and processing time while holding costs at a sustainable level is uncertain. Because of the uncertainty, program managers will likely have to make adjustments to the program as actual performance diverges from projected performance in unpredictable ways. Several steps, if taken now, could better prepare offer program managers for making such decisions. Goals based on an evaluation of taxpayer needs, other benefits, and costs could provide criteria for judging the effectiveness of the initiatives. 
Timely data could allow program managers to routinely track progress. Evaluations could determine the effectiveness of the initiatives and the reasons for their effectiveness. Armed with such an understanding, program managers would have a better basis for making future adjustments to the program. The uncertainty about the effect of the initiatives on program performance also means that the future costs of the program could be higher than projected. OIC Program costs, measured by the numbers of staff or as a proportion of collection resources, have risen significantly in recent years. IRS recognizes that the proportion of collection resources devoted to the OIC Program may be negatively affecting other collection programs. Consequently, IRS’s centralized and fast track processing initiatives are intended to increase the involvement of lower-grade collection staff in the OIC Program and to eventually free up higher-grade field staff for other collection activities. If projected results are not realized and costs continue to rise, however, Congress and IRS may need to address the question of the affordability of the OIC Program as it is presently constituted. The uncertainty about the costs of the present initiatives means that it may be premature to reconsider the program now. However, uncertainty about future program costs reinforces the importance of timely performance data and program evaluations. Such information will be critical for ongoing congressional oversight. IRS’s proposal for a partial payment installment agreement program suffers from weaknesses similar to those in the OIC Program initiatives. Little reliable information exists now about the likely effects of the program, and there is no written plan for evaluating the success of the program if the proposal is passed. Managers of such a program would benefit from timely performance data and evaluations that provide a more informed basis for making decisions about how to manage and improve the program. 
As IRS makes changes to its OIC Program, we recommend that the Commissioner of Internal Revenue

develop evaluation plans for the various offer initiatives that include details on data to be collected, data collection methods, basis for comparing outcomes, quality of decisions, and an analysis plan, and move no new initiatives into implementation without a finalized evaluation plan;

determine which OIC Program performance and cost data should be collected to monitor program performance, given resource constraints, and ensure that such data are collected in a timely and reliable manner; and

set goals for offer processing time that are based on taxpayer needs, other benefits, and costs.

In addition, we recommend that the Commissioner of Internal Revenue prepare documentation for its proposal to allow partial payment installment agreements. The documentation should describe key features of the proposal, including the benefits to taxpayers; the processes for accepting, rejecting, reviewing, and monitoring the agreements; resource needs; the number of taxpayers that could be affected; and plans for evaluating the impact of the program. On March 13, 2002, we received written comments on a draft of this report from the Commissioner of Internal Revenue (see app. III). The commissioner generally concurred with our recommendations and stated that our report is comprehensive and accurately accounts for the factors that influence the offer inventory. The National Taxpayer Advocate also provided comments, which are reprinted in appendix IV. The advocate agreed with our findings and expressed support for IRS's proposal to allow partial payment installment agreements. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its issue date. We will then send copies of this report to the Commissioner of Internal Revenue and other interested parties.
We will also make copies available to others who request them. If you have any questions or would like additional information, please call me or Charlie Daniel at (202) 512-9110. Key contributors to this report are Susan Malone and Sharon K. Caporale.
A growing backlog of cases and longer processing times have prompted concern about the management of the Internal Revenue Service's (IRS) Offer in Compromise (OIC) Program. OIC inventory and processing time have grown despite significant increases in program staff. Program changes increased the demand for offers, the number of processing steps, and the number of staff hours needed to process a case. Yet, the demand for offers exceeded staff's capacity to process them. The extent to which IRS's current initiatives would reduce the OIC Program inventory and processing time is uncertain. The current initiatives are intended to separate the processing of less complex and more complex offers, with lower-grade staff using standardized procedures to process less complex offers and higher-grade staff specializing in more complex offers. IRS projects that the initiatives will stabilize the inventory and keep up with the flow of new offers by the end of fiscal year 2002. IRS met the requirements of the IRS Restructuring and Reform Act of 1998 by independently reviewing all proposed offer rejections, considering the facts and circumstances of each taxpayer when determining allowances for monthly living expenses, and not rejecting offers from low-income taxpayers solely on the basis of the amount offered. IRS lacks data on the effect on taxpayers of its 1998 decision that the agency lacked the authority to enter into partial payment installment agreements. IRS officials said the policy change created a situation in which taxpayers who were willing to pay some of their tax liability might not qualify for either an installment agreement or an offer. According to these officials, the only other option was to put such taxpayers' accounts into inactive status.
The National Aeronautics and Space Administration Authorization Act of 2010 directed NASA to, among other things, develop a space launch system as a follow-on to the Space Shuttle and as a key component in expanding human presence beyond low-Earth orbit. In 2011, NASA formally established the SLS program in response to this direction, and the Congress has provided continued support for the program. For example, the Congress has appropriated additional funding for SLS in each of the past 3 fiscal years above the level requested by the program. The cumulative additional funding totals about $610 million more than requested for SLS for fiscal years 2013, 2014, and 2015. NASA plans to develop three SLS launch vehicle capabilities, complemented by Orion, to transport humans and cargo into space. The first version of the SLS is a 70-metric ton launch vehicle known as Block I. NASA has committed to conduct two test flights of the Block I vehicle—the first in 2018 and the second in 2021/22. During the first test flight, known as EM-1, the vehicle is scheduled to fly an uncrewed Orion some 70,000 kilometers beyond the moon; the second mission—EM-2—will fly beyond the moon to further test performance with a crewed Orion vehicle. After 2021, NASA intends to build 105- and 130-metric ton launch vehicles, known respectively as Block IA/B and Block II, which it expects to use as the backbone of manned spaceflight for decades. NASA anticipates using the Block IA/B vehicles for destinations such as near-Earth asteroids and Lagrange points and the Block II vehicles for eventual Mars missions. A separate ground systems program is responsible for the infrastructure and systems needed to support processing and launch of Orion and SLS at Kennedy Space Center. See figure 1. The joint cost and schedule confidence level (JCL) is a quantitative probability analysis that requires the project to combine its cost, schedule, and risks into a complete quantitative picture to help assess whether the project will be successfully completed within cost and on schedule.
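The mechanics of a JCL can be illustrated with a minimal Monte Carlo sketch. Everything here—the triangular cost and duration distributions, the commitment values, and the coupling between schedule slip and cost—is an invented illustration of the general technique, not NASA's actual model:

```python
import random

random.seed(1)  # deterministic for illustration

def simulate_jcl(trials=100_000,
                 cost_commit=9.7,     # $B commitment (hypothetical inputs)
                 sched_commit=48.0):  # months to launch readiness
    """Fraction of simulated outcomes meeting BOTH commitments."""
    hits = 0
    for _ in range(trials):
        # Draw one cost/schedule outcome (illustrative distributions).
        cost = random.triangular(8.0, 12.0, 9.0)      # $B
        months = random.triangular(40.0, 60.0, 46.0)  # duration
        # Crude risk coupling: each month of slip adds carrying cost.
        cost += max(0.0, months - 46.0) * 0.05
        if cost <= cost_commit and months <= sched_commit:
            hits += 1
    return hits / trials

print(f"Joint confidence level: {simulate_jcl():.0%}")
```

A program budgeted at a 70 percent JCL would set its cost and schedule commitments so that this joint fraction reaches 0.70; loosening either commitment raises the confidence level.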
NASA introduced the analysis in 2009, and it is among the agency’s initiatives to reduce acquisition management risk. The move to probabilistic estimating marks a major departure from NASA’s prior practice of establishing a point estimate and adding a percentage on top of that point estimate to provide for contingencies. NASA’s procedural requirements state that Mission Directorates should plan and budget programs and projects with an estimated life-cycle cost greater than $250 million based on a 70 percent JCL, or at a different level as approved by the Decision Authority, and any JCL approved at less than 70 percent must be justified and documented. NASA Procedural Requirements (NPR) 7120.5E, NASA Space Flight Program and Project Management Requirements, paras. 2.4.4 and 2.4.4.1 (Aug. 14, 2012) (hereinafter cited as NPR 7120.5E (Aug. 14, 2012)). NASA considers the 11 months between the program’s internal launch readiness goal of December 2017 and its committed baseline of November 2018 as schedule reserve, and the $1.3 billion difference between the $8.4 billion goal and the $9.7 billion baseline as funding for that schedule reserve. Unlike cost reserves, however, that funding largely corresponds to those 11 months and cannot be used separately from schedule to address problems as they arise. In December 2014, we testified that the ground systems and Orion programs would likely not be ready to support EM-1 before November 2018 even if SLS is able to meet its earlier internal goal. During this testimony, NASA witnesses stated that the SLS program would not be able to meet its internal goal of December 2017 and that the program would likely slip the internal goal to summer 2018. See table 1 for more specifics on estimated launch dates. NASA generally followed best practices in preparing the SLS cost and schedule baseline estimates for the limited portion of the program life cycle covered, that is, through launch readiness for the first test flight of SLS.
We found that the SLS program cost and schedule estimates for this limited portion of development substantially met three of four cost characteristics—comprehensive, well documented, and accurate—and both schedule characteristics—comprehensive and well constructed—that GAO considers best practices for preparing a reliable estimate. However, because the cost estimate only partially met best practice criteria for credibility, the fourth cost characteristic, the estimates could not be deemed fully reliable. See figure 2. While the cost and schedule estimates were prepared largely in accordance with best practices, they only represent costs for the first flight of SLS, EM-1, as opposed to the program’s full life cycle. In May 2014, we recommended that NASA establish a separate cost and schedule baseline for missions beyond EM-1 and report this information via its annual budget submission. Additionally, we recommended that NASA establish life-cycle cost and schedule baselines for each upgraded block of the SLS. NASA partially concurred with our recommendations, citing planned actions to track costs and actions already in place, such as establishing a block upgrade approach for SLS, as meeting the intent of our recommendations. To this point, however, NASA has not put forth any estimates or baselines projecting the costs of future blocks of the SLS. Comprehensive: The SLS cost estimate substantially met the criteria for being comprehensive through launch readiness for EM-1 but did not include any costs beyond the first flight. To develop the estimate, officials used a detailed work breakdown structure—the structure used to define in detail the work necessary to accomplish program objectives—that is traceable to the cost of each work element and the contract statement of work and documented ground rules and assumptions.
To fully meet the criteria for being comprehensive, however, the estimate should define in detail all costs through the expected life of a program. The estimate satisfied NASA’s cost estimating approach for Human Exploration programs by including life-cycle costs through launch readiness for EM-1, but did not include any costs for deployment and operation and maintenance of SLS beyond the first flight. These costs will likely far exceed the costs of development through the first flight. For example, in October 2009, the Review of U.S. Human Spaceflight Plans Committee reported that the fixed costs of the facilities and infrastructure associated with the Shuttle program were about $1.5 billion a year. Given that NASA hopes to operate the SLS for decades, it is reasonable to expect that the deployment and operation and maintenance costs—which should be included in a reliable estimate of life-cycle costs—alone for the SLS will outweigh the agency’s current estimated cost of $9.7 billion. NASA has stated that cost estimates do not need to cover the program from “cradle to grave.” Rather, NASA’s position is that the program’s estimate is only meaningful up to the time that the SLS is delivered for its first launch, because the agency is taking what it calls a capability approach. Therefore, NASA has only estimated costs of the program to the point at which the capability will be achieved. Furthermore, NASA has yet to determine the number of launches, their missions, or the operating lifetime for the program, which according to agency officials makes it difficult to estimate the total costs of the program. Nevertheless, Office of Management and Budget guidance and GAO’s Cost Assessment Guide indicate that life-cycle cost estimates should encompass the full life cycle of a program. Well Documented: The SLS cost estimate substantially met the criteria for being well documented; however, some explanations to support the estimate were missing. 
A well-documented cost estimate includes thorough documentation and is traceable to information sources. We found that the SLS cost estimate documentation discusses, and is consistent with, the program’s technical baseline and provides evidence that the cost estimate was reviewed by management. The estimate also included explanations for how the estimates for the underlying components were created through an assessment of likely costs for each part of the system supplemented with an engineering review. In some instances, however, explanations of how historical data were normalized, that is, adjusted to support the estimate, were missing. For example, the cost estimate does not explain how historical costs for the space shuttle main engine were normalized to support the estimate. The purpose of data normalization is to make a given data set consistent with and comparable to other data used in the estimate so that they can be used for comparison analysis or as a basis for projecting future costs. Insufficient documentation of how the historical data were adjusted can hinder understanding and proper use of the estimate. Accurate: The SLS cost estimate substantially met the criteria for being accurate, but the continued accuracy of the estimate is in question because officials have no plans to periodically update the estimate. Accurate cost estimates are based on assessments of most likely costs, adjusted properly for inflation, and contain few, if any, minor mistakes. In addition, a cost estimate should be updated regularly to reflect significant changes in the program. The SLS cost estimate meets most of these characteristics as it is based on an assessment of likely costs, is adjusted properly for inflation, and contains few if any mistakes. Contrary to best practices, however, NASA does not periodically update the estimate based on actuals, which limits its use as a management tool for monitoring progress and planning future work. 
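The normalization step discussed under the well-documented criterion typically means converting then-year dollars to a common base year so observations are comparable. A minimal sketch—the index values and engine costs below are invented for illustration, not actual shuttle data:

```python
# Hypothetical inflation indices (base year 2013 = 1.00); a real analysis
# would use published deflators rather than these made-up values.
INDEX = {2005: 0.82, 2008: 0.90, 2011: 0.96, 2013: 1.00}

def normalize(cost_then_year, data_year, base_year=2013):
    """Convert a then-year cost to constant base-year dollars."""
    return cost_then_year * INDEX[base_year] / INDEX[data_year]

# Historical engine costs recorded in different years, made comparable:
history = [(2005, 41.0), (2008, 45.0), (2011, 48.0)]  # ($M, hypothetical)
constant_2013 = [round(normalize(c, y), 1) for y, c in history]
print(constant_2013)  # [50.0, 50.0, 50.0]
```

Once in constant dollars, the observations can be compared directly or used as a basis for projecting future costs—here the apparent cost growth disappears, showing why documenting the adjustment matters.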
The program prepared its cost estimate and JCL in calendar year 2013. GAO’s cost estimating best practices call for estimates to be continually updated through the life of the project, ideally every month as actual costs are reported in earned value management reports. Best practices also call for a risk analysis and risk simulation exercise—like the JCL analysis—to be conducted periodically through the life of the program, as risks can materialize or change throughout the life of a program. Unless properly updated on a regular basis, the cost estimate cannot provide decisionmakers with accurate information to assess the current status of the project. Agency officials have indicated that the SLS program has no plans to update the cost and schedule estimates underlying the JCL or the JCL itself, which calls into question the continued accuracy of the estimates. NASA’s policy for space flight program and project management requires a program’s committed cost estimate to be updated (rebaselined) if it exceeds the external baseline committed cost by 30 percent or more, and if a project is rebaselined, the JCL should be recalculated and approved as part of the rebaselining process. The NASA Cost Estimating Handbook, however, indicates that program cost estimates should be updated when program content changes and as programs move through their life-cycle phases and conduct milestone reviews, and recognizes that estimates regularly updated based on actual program performance give decisionmakers a clearer picture for major decisions. In addition, through our work assessing large scale programs at the Department of Defense we have found that some programs update cost estimates annually and regularly report progress relative to both threshold and objective targets. Credible: The SLS cost estimate only partially met the criteria for credibility because the SLS program did not cross-check the results of the estimate and did not commission an independent cost estimate. 
The purpose of developing a separate independent estimate and cross-checking the estimate is to test the program’s estimate for reasonableness and, ultimately, to validate the estimate. Consistent with best practices, the program conducted a risk and uncertainty analysis that calculated the likely cost and schedule consequences on the program for each risk identified and conducted a duration sensitivity analysis to determine how varying the lengths of different tasks affected the program. Contrary to best practices, however, the SLS program did not cross-check the results of its cost estimate. The main purpose of cross-checking is to determine whether alternative estimating methods produce similar results. If cross-checking confirms the results of the estimate, then confidence in the estimate increases, leading to greater credibility. In addition, project officials did not commission an independent cost estimate—a separate estimate produced by an organization outside of the SLS program chain of command—which is considered one of the best and most reliable estimate validation methods because it provides an independent view of expected program costs that tests the program office’s estimate for reasonableness. An estimate that has not been reconciled with an independent cost estimate has an increased risk of being underfunded because the independent cost estimate provides an objective and unbiased assessment of whether the project estimate is realistic. Because the cost estimate only partially met the criteria for credibility, the estimate does not fully reflect the characteristics of a quality estimate and cannot be considered fully reliable.
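Cross-checking, as the report uses the term, simply asks whether an alternative estimating method lands close to the primary estimate. A minimal sketch with invented numbers (the tolerance and both estimates are illustrative assumptions):

```python
def cross_check(primary, alternate, tolerance=0.15):
    """Return (confirmed?, relative divergence) for two estimates."""
    divergence = abs(primary - alternate) / primary
    return divergence <= tolerance, divergence

# Hypothetical: a $9.7B bottom-up program estimate vs. an $11.2B
# parametric estimate derived from historical analogies.
ok, div = cross_check(primary=9.7, alternate=11.2)
print(ok, round(div, 3))  # False 0.155 — the methods disagree
```

A failed cross-check does not prove the primary estimate wrong; it flags the divergence for reconciliation, which is what commissioning an independent cost estimate formalizes.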
While the program did not commission an independent estimate, NASA’s Independent Program Assessment Office (IPAO)—which reviews NASA programs at key decision points in the life cycle to support approval decisions by the agency leadership—did review the program’s cost estimate at the program’s Key Decision Point C (KDP-C) review. KDP-C is the point in NASA’s project life cycle where baseline cost and schedule estimates are established and projects begin implementation. During this review, the IPAO found that the SLS JCL process and cost model were sound; however, the IPAO also found that the program’s initial SLS cost estimate appeared optimistic relative to predictions based on historical data from similar programs. For example, the IPAO reported that the program was underestimating the likely range of cost growth for four key elements—software development, core stage qualification, core stage testing, and procurement of the interim cryogenic propulsion stage. Senior agency officials indicated that, based in part on the results of the IPAO assessment, the program increased its estimate and the agency established higher cost and schedule baseline commitments for the program. Comprehensive: The SLS schedule estimate substantially met best practice criteria for being comprehensive through launch readiness for EM-1 but the schedule estimate did not account for work beyond that point. A comprehensive schedule includes all activities for both the government and its contractors necessary to accomplish a project’s objectives as defined in the project’s work breakdown structure. The SLS schedule reflected all activities in the program cost work breakdown structure, resources were appropriately allocated to the schedule, and the schedule realistically reflected how long each activity would take. 
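A schedule of this kind is conventionally modeled as an activity network, and the critical path is the chain of dependent activities whose durations sum to the project's earliest finish date. A minimal critical-path-method sketch over an invented activity network (the names and durations are hypothetical, not the SLS schedule):

```python
from functools import lru_cache

# Hypothetical activity network: name -> (duration in months, predecessors)
ACTIVITIES = {
    "design":      (10, []),
    "core_stage":  (18, ["design"]),
    "engines":     (14, ["design"]),
    "integration": (8,  ["core_stage", "engines"]),
    "launch_prep": (3,  ["integration"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish = own duration after all predecessors finish."""
    duration, preds = ACTIVITIES[name]
    return duration + max((earliest_finish(p) for p in preds), default=0)

def critical_path(name):
    """Walk back through whichever predecessor drives the finish date."""
    _, preds = ACTIVITIES[name]
    if not preds:
        return [name]
    driver = max(preds, key=earliest_finish)
    return critical_path(driver) + [name]

print(earliest_finish("launch_prep"))  # 39 (months)
print(critical_path("launch_prep"))
```

Activities off this path (here, engines) carry float: they can slip several months without moving the finish date. That is why a schedule whose nominal "critical path" includes floated activities can mask real delays.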
Contrary to best practices, however, the schedule estimate, like the cost estimate, did not fully account for the deployment or operation and maintenance of the program—specifically work for flights beyond EM-1. Well Constructed: The SLS schedule estimate substantially met best practice criteria for being well constructed, but not all activities on the critical path directly affect the finish date of the project. A schedule is well constructed if all its activities are logically sequenced in the most straightforward manner possible and it has a reliable critical path that determines which activities drive the project’s earliest completion date. We found relatively few instances of activities that were not logically sequenced, and anomalies within the schedule were, in general, justified by program officials. For example, program officials explained that some of the anomalies were due to activities representing external deliveries to vendors. We found, however, that the schedule did not fully meet the criteria for a well-constructed schedule because not all activities on the critical path truly affect the finish date of the project, which could mask delays in the schedule. The SLS program has limited cost and schedule reserves to address potential issues as it enters its most challenging period. Schedule reserve is extra time, with the money to pay for it, in the program’s overall schedule in the event that there are delays or unforeseen problems. For the SLS program, the 11-month difference between the program’s internal launch readiness goal and its committed schedule baseline represents the program’s schedule reserves. Cost reserves are additional funds that can be used to mitigate problems during the development of a program.
For example, cost reserves can be used to buy additional materials to replace a component or, if a program needs to preserve schedule, cost reserves can be used to accelerate work by adding extra shifts to expedite manufacturing and save time. Because NASA anticipated a relatively flat budget for SLS, the agency chose to limit cost reserves and rely on schedule reserve—the 11 months between the internal launch readiness goal in December 2017 and the committed baseline in November 2018—as the primary way to mitigate risk. The SLS program, however, is planning to use 7 of the 11 months of schedule reserve, which would delay its planned goal for launch readiness for EM-1 from December 2017 to, tentatively, July 2018. At this point, however, the agency has not delayed its baseline commitment date of November 2018. As a result, the agency would have only 4 months of schedule reserve remaining between July 2018 and November 2018 to address any further problems that it may encounter. See figure 3. Complex development efforts like SLS must plan to address myriad risks and unforeseen technical challenges. As mentioned above, cost and schedule reserves are one way to address risks and challenges. NASA’s Marshall Space Flight Center, which manages the SLS program, has guidance requiring programs to present their planned cost and schedule reserves for approval prior to key milestones, but the guidance does not establish specific requirements for reserve levels. However, other NASA centers, such as the Goddard Space Flight Center—the NASA center with responsibility for managing other complex NASA programs such as the James Webb Space Telescope—have requirements for the level of both cost and schedule reserves that projects must have in place at KDP-C.
At KDP-C, Goddard flight projects are required to have cost reserves of 25 percent or more and 1 month of schedule reserve for each year of development from KDP-C to the start of integration and testing, 2 months per year for integration and test through shipment to the launch site, and 1 week per month from delivery to launch site to actual launch. As a result of flat funding requests, the SLS program has very low levels of cost reserves compared to other programs and to cost reserve guidance of NASA centers. The IPAO noted that the program’s planned cost reserves at the time of its review—6 percent—were too low and compared poorly to other development programs which normally had 30 percent cost reserves at a similar stage of development. To execute within the anticipated flat funding profile, the program extended its development schedule and limited the amount of cost reserves available—about $50 million each year, which is about 3.7 percent of the fiscal year 2016 budget request for the program. Program officials stated that these cost reserves were completely allocated to technical risks during the budget planning process, which leaves the program with schedule reserves as its sole resource to address unanticipated issues throughout the year. Operating with schedule reserves as the only option for addressing challenges, however, increases risk to the program’s launch readiness date because any issue that occurs will impact the overall schedule. On a program like SLS such challenges are likely. For example, the current internal launch date delay is due, at least in part, to problems that are requiring the program to modify one of the four contracts for its major elements (the core stage, boosters, main engines, and interim upper stage). 
According to program officials, the program is modifying the core stage contract, in part, because the tooling that will be used to manufacture the 212-foot-tall core stage was vertically misaligned by the subcontractor during its installation. According to officials, the misalignment would have prevented production of the core stage. The necessary repairs are currently scheduled to be completed in August 2015. To address this challenge, as mentioned above, the program is anticipating using 7 of its 11 months of schedule reserve and will have only 4 months of schedule reserve to address risks with 3.5 years remaining until the program’s committed baseline launch readiness date. The program, however, has yet to begin integration and testing where we have previously found projects can expect to encounter challenges that will impact schedule. Similarly, as part of its analysis of the SLS cost and schedule estimates, the IPAO reported that, based on its review of 20 historical NASA projects, the majority of schedule growth occurs after critical design review, which for SLS is currently scheduled for summer 2015. While the current delay has not impacted the program’s November 2018 baseline commitment, it will increase risk to the committed date because—as noted above—the project has limited cost reserves and will now have limited schedule reserves to address any future problems or delays at the point when problems are most likely to occur. Using schedule reserve alone, rather than in combination with cost reserves, does not provide the program with the same level of flexibility to mitigate risks to maintain planned cost and schedule. GAO’s Cost Estimating and Assessment Guide states that all development programs should have cost reserves. Problems always occur, and program managers need ready access to funding in order to resolve them without adversely affecting the program. 
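The Goddard reserve requirements cited earlier in this section reduce to simple arithmetic; the sketch below applies them to a hypothetical project (the phase durations and budget are invented for illustration):

```python
def goddard_schedule_reserve(dev_years, integ_test_years, pad_months):
    """Months of schedule reserve required at KDP-C under the rule:
    1 month per year of development, 2 months per year of integration
    and test, and 1 week (~0.25 month) per month at the launch site."""
    return dev_years * 1 + integ_test_years * 2 + pad_months * 0.25

def goddard_cost_reserve(budget):
    """Dollars of cost reserve at the 25 percent floor."""
    return 0.25 * budget

# Hypothetical project: 3 years of development, 1.5 years of I&T,
# 4 months from launch-site delivery to launch, $2.0B budget.
print(goddard_schedule_reserve(3, 1.5, 4))  # 7.0 months
print(goddard_cost_reserve(2.0))            # 0.5 ($B)
```

By comparison, the SLS program's roughly $50 million per year of cost reserve—about 3.7 percent of its fiscal year 2016 request—falls far below the 25 percent floor that Goddard projects must meet.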
In the case of SLS, if further problems arise associated with the program’s main cost drivers—such as design and manufacturing of the core stage or developing the integrated flight software—the program officials will likely have to use what remains of the program’s schedule reserve to resolve these problems. As the program has limited cost reserves, this would put the committed launch readiness date at risk. This approach—using only schedule reserve to address challenges—also leaves the program in the position of potentially having to, for example, descope planned work or to further delay test events or launch readiness in order to address technical risks and challenges, should those challenges exceed available resources. Both options have long-term risks. For example, if certain development efforts are deferred beyond EM-1, technical risk to EM-1, EM-2, or both may increase and could put additional cost and schedule pressure on EM-2—the first crewed flight— because more work would then be required within EM-2’s schedule before the program could achieve launch readiness. However, if development work—such as data analysis or a test event—is eliminated altogether, that loss of potential information may increase technical risk to EM-1. Similarly, delaying planned work is generally not a viable long-term solution. NASA has found that deferring planned work to stay within available budget often leads to increased costs in the long term. For example, in October 2010, the Independent Comprehensive Review Panel that reviewed the James Webb Space Telescope program found that delaying work due to inadequate cost reserves did not control program costs. Doing so delayed and increased costs—typically two or threefold—due to inefficiencies visited upon other dependent tasks. 
NASA is using contractor earned value management (EVM) data as an additional means to monitor costs for SLS, but the EVM data remain incomplete and provide limited insight into progress toward the program’s external committed cost and schedule baselines. Program officials indicated that the current SLS contractor performance measurement baselines—which establish the program scope, schedule, and budget targets to measure progress against—are all based on the program’s more aggressive internal goal for launch readiness for EM-1 in December 2017 and not its external committed date of November 2018. EVM systems rely on performance measurement baselines as the basis for measuring performance and use past performance trends to project an estimate at completion (EAC). An EAC is a calculation of the cost to complete authorized work based on a contractor’s EVM performance to date. We reviewed the EACs provided by three prime contractors and found that they were below what our calculations predicted. Specifically, our analysis of the contractors’ EVM data for the three prime contracts—main engines, boosters, and core stage—indicates that the contracts could incur cost overruns ranging from about $367 million to about $1.4 billion, which is substantially higher than the combined overrun of $89 million that the three contractors were projecting at the time of our review. These projections do not take into account all of the in-house work conducted for the SLS program. The SLS program is in the process of instituting a program-level EVM system that may improve insight into the program’s progress by providing aggregated tracking of both in-house and contractor performance, presenting a more comprehensive assessment of program progress at least against internal cost and schedule goals. The SLS program-level EVM system, however, is not yet fully implemented.
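The EAC referenced above is conventionally projected from three EVM quantities—budget at completion (BAC), earned value (EV), and actual cost (AC)—using the cost performance index. A minimal sketch of the standard CPI-based formula, with invented contract numbers rather than the SLS contractors' data:

```python
def estimate_at_completion(bac, ev, ac):
    """CPI-based EAC: actual cost to date plus the remaining budgeted
    work deflated by the cost performance index (CPI = EV / AC)."""
    cpi = ev / ac
    return ac + (bac - ev) / cpi

# Illustrative contract: $1,000M budget at completion, $400M of work
# earned so far at an actual cost of $500M (CPI = 0.8).
eac = estimate_at_completion(bac=1000, ev=400, ac=500)
print(f"EAC: ${eac:.0f}M, projected overrun: ${eac - 1000:.0f}M")
# EAC: $1250M, projected overrun: $250M
```

An independent analyst who applies less optimistic performance indices to the remaining work will project a higher EAC than the contractor, which is one way a review organization can arrive at a larger overrun range than the contractors report.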
Specifically, because the core stage contract is being modified, EVM data will have to be reset to new performance measurement baselines. The Office of Management and Budget Circular A-11 and Capital Programming Guide requires EVM for major acquisitions with developmental effort. According to this guidance, when there is both government and contractor work, the data from the two EVM systems must be consolidated at the reporting level for total program management and visibility. Pulling together EVM data from multiple levels into a program-level report gives officials a comprehensive outlook of a program’s cost and schedule and provides the project manager with early warning of potential cost and schedule overruns. In November 2012, we recommended that NASA establish a time frame by which all new spaceflight projects will be required to implement NASA’s newly developed program-level EVM system. This system should improve management and oversight of its spaceflight projects by providing a consolidated tracking mechanism for all of the contract and NASA work conducted for a program. NASA officials stated that this SLS program-level EVM system would allow the program to better monitor progress relative to the committed cost and schedule baselines by comparing the aggregated EAC for both NASA and contractor work to the committed cost and schedule baselines. The delay in implementing program-level EVM data is, in part, due to the fact that the program was still in the process of modifying the core stage contract. According to program officials, they identified discrepancies between the agency’s and the contractor’s positions during the core stage contract integrated baseline review—a review conducted by program management to ensure a mutual understanding of the contract’s performance measurement baseline.
Specifically, program officials determined that the core stage contract needs to be modified to accommodate delays associated with addressing technical issues with misaligned tooling and to resolve differences between the program and the contractor regarding the level of funding available to begin work on the exploration upper stage. Further, NASA officials stated that the program-level EVM will not fully reflect program progress until a new internal program launch readiness goal is set and contract performance measurement baselines are updated to reflect these new goals for all NASA and contractor activities. The officials also stated that the internal program launch readiness goals will not be set until after the agency has a better understanding of the expected fiscal year 2016 appropriations and has completed the planning for the fiscal year 2017 budget request. Operating without program-level EVM leaves SLS at risk of encountering unexpected cost and schedule growth. Both contractor and program-level EVM data, however, are only reported relative to the December 2017 date, according to program officials. The potential impact of cost and schedule growth relative to the program’s external committed cost and schedule baseline of November 2018 is neither reported nor readily apparent and renders the EVM data less useful in support of management decisions and external oversight. Major NASA programs are required by statute to report annually to certain congressional committees on changes that occurred over the prior year to the program’s committed cost and schedule baseline, among other things. As this report reflects cost and schedule overruns that have already occurred, it does not serve as a mechanism to regularly and systematically track progress against committed baselines so that decisionmakers have visibility into program progress and can take proactive steps as necessary before cost growth or schedule delays are realized.
The SLS program has not updated its cost and schedule estimates since 2013, rendering them less reliable as program management and oversight tools. Since the estimates were created, the program has received substantial funding increases in 2014 and 2015 but has also realized both expected and unexpected risks that are forcing the program to delay its internal goal for launch readiness. The program’s cost and schedule estimates, however, do not reflect all of these changes or the 2 years of additional contractor performance data that the program has accumulated since 2013. Without regular updates, developed in accordance with best practices—including cross-checking to ensure credibility and thorough explanations of how the updates were prepared— cost estimates lose their usefulness as predictors of likely outcomes and as benchmarks for meaningfully tracking progress. Updated cost and schedule estimates would provide program and agency officials with a more informed basis for decision making and provide the Congress with more accurate information to support the appropriation process. In addition, tracking and reporting progress relative to the external cost and schedule baseline commitments on only an annual basis and only once issues have already occurred limits NASA’s ability to continually monitor progress and forecast potential risks to these cost and schedule baselines, such that proactive action can be taken by decisionmakers. Until a comprehensive program-level EVM system, reporting to a realistic performance measurement baseline, is fully implemented, agency leadership and external stakeholders will lack an accurate measure of the program’s true progress. Further, by pursuing internal launch readiness dates that are unrealistic, the program leaves itself and others in a knowledge void wherein progress relative to the agency’s commitments is difficult to ascertain. 
As the EVM system only tracks progress toward the program’s internal goals, the program lacks a mechanism to track progress to its external cost and schedule baseline commitments. Without such a tracking tool, stakeholders will lack early insight into potential overruns and delays. To ensure that the SLS cost and schedule estimates better conform with best practices and are useful to support management decisions, GAO recommends that the NASA Administrator direct SLS officials to update the SLS cost and schedule estimates, at least annually, to reflect actual costs and schedule and record any reasons for variances before preparing their budget requests for the ensuing fiscal year. To the extent practicable, these updates should also incorporate additional best practices including thoroughly documenting how data were adjusted for use in the update and cross-checking results to ensure they are credible. To provide more comprehensive information on program performance, the NASA Administrator should direct the SLS program to expedite implementation of the program-level EVM system. To ensure that decisionmakers are able to track progress toward the agency’s committed launch readiness date, the NASA Administrator should direct the SLS program to include as part of the program’s quarterly reports to NASA headquarters a reporting mechanism that tracks and reports program progress relative to the agency’s external committed cost and schedule baselines. NASA provided written comments on a draft of this report, which are reprinted in appendix II. In its comments, NASA concurred with our three recommendations.
NASA agreed that updating cost and schedule estimates is a program management best practice and indicated that, moving forward, the program would use its budget formulation processes to update its cost and schedule estimates for the first demonstration of the initial SLS capability on Exploration Mission 1 (EM-1) following the enactment of NASA’s final appropriations each fiscal year. We are encouraged that NASA plans to update its cost estimate for SLS annually. To satisfy the recommendation, we would expect that any updates would address the deficiencies we identified in NASA’s original estimate for SLS, including thoroughly documenting how data were adjusted for the update and cross-checking the results to ensure credibility. NASA also recognized that having an EVM system in place is a program management best practice useful for ensuring effective program control. NASA indicated that it is in the process of implementing program-level EVM but that the system would not be fully in place until the stages contract is modified and an independent baseline review is completed in the third quarter of fiscal year 2016. We appreciate that NASA plans to implement the program-level EVM system as soon as the stages contract is rebaselined, but we remain concerned that the stages contract will remain undefinitized until spring 2016. We reported in July 2014 that the stages contract had remained undefinitized for an extended period and that leaving the SLS contracts undefinitized for extended periods placed the government at increased risk of cost and schedule growth and limited the program’s ability to monitor contractor progress. Likewise, until certified EVM data from the stages contract are available, the SLS program will lack a key tool for tracking and forecasting progress as the program approaches its committed November 2018 launch readiness date. 
NASA also agreed that the program should implement a reporting mechanism that tracks and reports program progress relative to the agency’s external committed cost and schedule baselines. NASA indicated that the SLS program-level EVM system would satisfy this recommendation when it is fully implemented and included as part of the SLS program’s normal quarterly reporting to the Exploration Systems Development Division at NASA Headquarters. Given that EVM data is typically reported relative to contractual targets, in this case the program’s internal goals, we would expect that these quarterly reports would also include additional elements addressing progress relative to the program’s external committed cost and schedule baselines. We are sending this report to NASA’s Administrator and to appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix III. To assess the reliability of the National Aeronautics and Space Administration’s (NASA) Space Launch System (SLS) cost and schedule estimates, we determined the extent to which the estimates were consistent with best practices for cost estimating and scheduling as identified in GAO’s Cost Assessment Guide and Schedule Assessment Guide. We examined documents supporting the cost and schedule estimates, such as detailed spreadsheets that contain cost, schedule, and risk information and the timing and availability of funding and reserves. 
We also met with independent reviewers of the SLS Joint Cost and Schedule Confidence Level (JCL), reviewed their report on the program, and determined the extent to which the SLS program addressed any concerns the reviewers raised concerning the JCL estimate. In addition, we met with program and agency officials to discuss the baseline cost and schedule estimates, potential program schedule changes, and the program’s cost and schedule reserve postures, among other issues. We did not assess the credible or controlled criteria of the schedule estimate because the estimate was completed to support a JCL. The controlled criterion deals, in part, with updating the schedule periodically. A JCL, however, is not designed to be used as an updating tool. The credible criterion relates, in part, to a risk assessment of the schedule. The summary schedule estimate, however, was developed specifically for use in developing the SLS program’s JCL, which requires its own risk assessment to calculate the probability of meeting cost and schedule baselines, making a separate risk assessment of the schedule estimate unnecessary in this instance. To assess the availability of program cost and schedule reserves for resolving development challenges, we analyzed budget requests, appropriations, and program budget projections. We also reviewed prior NASA reviews that called the program’s reserves into question, briefings to program and agency management that confirmed the program’s internal goals, and program risk assessments that outlined program risks and potential impacts. In addition, we met with senior agency officials, program management, and program budget specialists to discuss the program’s budget and reserve postures. 
To assess the insight that the program’s earned value management (EVM) system provides into progress relative to the program’s cost and schedule baseline commitments, we obtained and analyzed contractor cost and schedule reporting, or EVM, data for the three contractors for which it was available, determined whether the contractors are forecasting cost and schedule growth, and calculated our own forecasts of likely cost and schedule growth. We also collected available program-level EVM data to determine whether it could be used as an indicator of program progress. We met with program EVM and schedule managers to discuss any concerns with contractor cost and schedule reporting as well as the creation of the program-level EVM process. We conducted this performance audit from September 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Cristina T. Chaplain (202) 512-4841 or chaplainc@gao.gov. In addition to the contact named above, Shelby S. Oakley (Assistant Director), Jennifer K. Echard, Laura Greifner, Kristine Hassinger, Sylvia Schatz, Dina Shorafa, Ryan Stott, Ozzy Trevino, and John S. Warren, Jr. made key contributions to this report.
SLS is NASA's first heavy-lift launch vehicle for human space exploration in over 40 years. For development efforts related to the first flight of SLS, NASA established its cost and schedule commitments at $9.7 billion and November 2018, respectively. The program, however, has continued to pursue more aggressive internal goals for cost and schedule. GAO was asked to assess a broad range of issues related to the SLS program. This report focuses on NASA's cost estimate for the initial phases of SLS and other management tools needed to control costs. Specifically, this report examines the extent to which SLS's (1) cost and schedule estimates for its first test flight are reliable; (2) cost and schedule reserves are available to maintain progress toward this flight test; and (3) EVM data provides meaningful insight into progress. To do this work, GAO examined documents supporting the cost and schedule estimates, contractor EVM data, and other relevant program documentation, and interviewed relevant officials. The cost and schedule estimates for the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) program substantially complied with five of six relevant best practices, but could not be deemed fully reliable because they only partially met the sixth best practice—credibility. While an independent NASA office reviewed the estimate developed by the program and as a result the program made some adjustments, officials did not commission the development of a separate independent estimate to compare to the program estimate to identify areas of discrepancy or difference. In addition, the program did not cross-check its estimate using an alternative methodology. The purpose of developing a separate independent estimate and cross-checking the estimate is to test the program's estimate for reasonableness and, ultimately, to validate the estimate. 
The continued accuracy of the estimates is also questionable because officials have no plans to update the original estimates created in 2013. GAO's cost estimating best practices call for estimates to be continually updated through the life of the program to provide decisionmakers with current information to assess status. Moreover, as stressed in prior GAO reports, SLS cost estimates only cover one SLS flight in 2018 whereas best practices call for estimating costs through the expected life of the program. Limited cost and schedule reserves place the program at increased risk of exceeding its cost and schedule commitments. Although the SLS program is committed to a November 2018 launch readiness date, it has been pursuing an internal goal for launch readiness of December 2017, with the time between December 2017 and November 2018 being designated as schedule reserve. The SLS program expects to use a significant amount of schedule reserve, in part to address some technical challenges, and plans to shift its internal goal from December 2017 to tentatively July 2018. This shift will reduce the amount of available schedule reserve from 11 months to just 4 months. In addition, the program planned for cost reserves of less than 4 percent each year and has already allocated those funds for this year, which leaves no reserve funding available to address unanticipated issues. Earned value management (EVM) data for SLS remains incomplete and provides limited insight into progress toward the program's external committed cost and schedule baselines because it tracks progress relative to the program's internal goals—which have proven unrealistic. EVM data is intended to provide an accurate assessment of program progress and alert managers of impending schedule delays and cost overruns. 
GAO analysis of available SLS contractor EVM data indicated that the contractors may incur cost overruns ranging from about $367 million to about $1.4 billion, which is significantly higher than what the contractors were reporting—$89 million. SLS is implementing a program-level EVM system that, once complete, will include all contractor work and work conducted in-house by NASA and may provide more comprehensive information on program progress relative to internal goals. Tracking to internal goals, however, provides limited information relative to progress toward external commitments. At present, the SLS program lacks comprehensive program-level reporting to alert managers of impending delays and cost overruns to external commitments. NASA should direct SLS program officials to update the cost and schedule estimates at least annually, and to implement a mechanism that reports progress relative to external committed cost and schedule baselines on a quarterly basis, among other actions. NASA concurred with GAO's recommendations.
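The index-based forecasting behind such independent overrun estimates can be sketched briefly. The sketch below is a generic illustration of common EVM estimate-at-completion (EAC) calculations, with hypothetical figures; it is not GAO's actual model and does not use SLS contract data.

```python
# Illustrative EVM forecasting sketch with hypothetical figures
# (not SLS contract data). All amounts are notional dollars in millions.

def evm_forecast(bac, bcwp, acwp, bcws):
    """Return (cpi, spi, eac_cpi, eac_composite).

    bac  -- budget at completion
    bcwp -- budgeted cost of work performed (earned value)
    acwp -- actual cost of work performed
    bcws -- budgeted cost of work scheduled (planned value)
    """
    cpi = bcwp / acwp  # cost performance index (<1 means overrunning)
    spi = bcwp / bcws  # schedule performance index (<1 means behind)
    # Optimistic EAC: future work continues at the current cost efficiency.
    eac_cpi = bac / cpi
    # More pessimistic EAC: remaining work is also burdened by schedule slip.
    eac_composite = acwp + (bac - bcwp) / (cpi * spi)
    return cpi, spi, eac_cpi, eac_composite

# Hypothetical contract: $4,000M budget, $1,000M of work earned
# for $1,100M spent, against $1,150M of work planned to date.
cpi, spi, low, high = evm_forecast(4000.0, 1000.0, 1100.0, 1150.0)
overrun_range = (low - 4000.0, high - 4000.0)
```

Comparing such an index-based range against a contractor's own reported variance is one common way to test whether reported overruns appear understated.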
Mortgage servicers are the entities that manage payment collections and other activities associated with loans. Servicing duties can involve sending borrowers monthly account statements, answering customer-service inquiries, collecting monthly mortgage payments, and maintaining escrow accounts for property taxes and insurance. In the event that a borrower becomes delinquent on loan payments, servicers also initiate and conduct foreclosures. Several federal regulators share responsibility for regulating the banking industry in relation to the origination and servicing of mortgage loans. OCC has authority to oversee nationally chartered banks and federal savings associations. The Federal Reserve oversees insured state-chartered banks that are members of the Federal Reserve System, bank holding companies, and entities that may be owned by federally regulated depository institution holding companies but are not federally insured depository institutions. The Federal Deposit Insurance Corporation (FDIC) oversees insured state-chartered banks that are not members of the Federal Reserve System and state-chartered savings associations. The Bureau of Consumer Financial Protection oversees many of these institutions, as well as all mortgage originators and servicers that are not affiliated with banking organizations, with respect to federal consumer financial law. Beginning in September 2010, several servicers announced that they were halting or reviewing their foreclosure proceedings throughout the country after allegations that the documents accompanying judicial foreclosures may have been inappropriately signed or notarized and after completion of self-assessments of their foreclosure processes that federal banking regulators directed them to conduct. 
In response, the banking regulators—OCC, the Federal Reserve, OTS, and FDIC—conducted a coordinated on-site review of 14 mortgage servicers to evaluate the adequacy of controls over servicers’ foreclosure processes and to assess servicers’ policies and procedures for compliance with applicable federal and state laws. Regulatory staff told us that as part of these reviews, their examiners evaluated internal controls and procedures for processing foreclosures and reviewed samples of individual loan files to better ensure the integrity of the document preparation process and to confirm that files contained appropriate documentation. Examiners reviewed more than 2,800 loan files comprising approximately 200 foreclosure loan files with a variety of characteristics from each servicer to test the institutions’ controls and governance processes with respect to foreclosures. Generally, the examinations revealed severe deficiencies in three primary areas: shortcomings in the preparation of foreclosure documentation; inadequate policies, staffing, or oversight of foreclosure processes; and insufficient oversight of third-party service providers, particularly foreclosure attorneys. On the basis of their findings from the coordinated review, OCC, the Federal Reserve, and OTS issued formal consent orders against each of the 14 servicers under their supervision in April 2011 (see fig. 1). According to bank regulatory staff and these consent orders, each of the 14 servicers is required to enhance its compliance, vendor management, and training programs and processes. 
In addition, because examiners reviewed a relatively small number of foreclosure files, the consent orders require each servicer to retain an independent firm to conduct a review of foreclosure actions on primary residences from January 1, 2009, to December 31, 2010, to identify borrowers who suffered financial injury as a result of errors, misrepresentations, or other deficiencies in foreclosure actions, and to recommend remediation for borrowers, as appropriate. Servicers proposed third-party consultants to conduct the foreclosure review and submitted engagement letters outlining their foreclosure review processes to the regulators by July 2011 as required by the orders. OCC reviewed and approved the engagement letters for banks under its supervision in late September 2011 and released the engagement letters in November 2011 on the OCC website. With the exception of one institution, the Federal Reserve approved the engagement letters for servicers under its jurisdiction by February 2012. As required in the consent orders, the foreclosure review process has two components, a file review (look-back review) and a process for eligible borrowers to request a review of their particular circumstances (borrower outreach process). For the look-back review, the consent orders require the third-party consultant to submit an engagement letter outlining their plan for review subject to the regulators’ approval. Consultants are required to review various categories of loans, pursuant to regulators’ guidance and approval. These categories may vary by servicer but include, for example, files in every state where the institution conducted foreclosures, foreclosures where the borrower had a loan modification in place, or files that were handled by certain law firms where documentation errors have previously been found. The consent orders allow third-party consultants to use statistical techniques to select samples of files from some categories of loans for review. 
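The engagement letters, rather than the consent orders themselves, specify the sampling details. Purely as an illustration of how such samples are commonly sized, and not as the method the consultants were required to use, a standard sample-size formula for estimating an error rate at 95 percent confidence with a 5 percent margin of error (with finite population correction) looks like this:

```python
import math

# Generic sample-size calculation for estimating a proportion (e.g., an
# error rate) in a pool of loan files. This is a textbook formula offered
# as an illustration, not the method prescribed by the consent orders.

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Files to review for a given confidence level (z), margin of error,
    and assumed error rate p (0.5 is the most conservative choice)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2    # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)         # finite population correction
    return math.ceil(n)

# A category of 10,000 foreclosure files would need about 370 reviews.
n = sample_size(10_000)
```

Widening the margin of error or assuming a lower error rate shrinks the sample, which is why the engagement letters also describe procedures for increasing sample sizes when initial reviews find more errors than assumed.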
As required in the consent orders, the engagement letters describe procedures consultants will use to increase the size of samples depending on the results of the initial reviews. Consultants are not allowed to use sampling, but instead must review 100 percent of files in some high-risk categories, including certain bankruptcy cases and files involving borrowers protected by the Servicemembers Civil Relief Act (SCRA). The Federal Reserve is also requiring 100 percent review of files in several additional high-risk categories, including foreclosure-related complaints filed before the borrower outreach process was launched, foreclosure actions where a complete request for a loan modification was pending at the time of the foreclosure, and foreclosure actions that occurred when the borrower was not in default. The second component of the foreclosure review, the borrower outreach process, is intended to complement the look-back review and help identify borrowers who may have suffered financial injury. According to regulatory documents and staff, the purpose of the outreach is to provide a robust process so that eligible borrowers who believe they suffered financial injury within the scope of the consent orders have a fair opportunity to request an independent review of their circumstances and, potentially, to obtain remediation. Regulatory staff noted that requiring institutions to hire a consultant to review files to identify the harmed pool of consumers as part of an enforcement action is typical. They said that including an outreach component in addition to a file review is unique and unprecedented in their experience. They also emphasized that the two components are intended to work together to provide a full and fair opportunity to identify as many financially injured borrowers as possible and the final results could not be fully evaluated until both the look-back file review and request-for-review process are completed. 
A Federal Reserve official testified that the borrower outreach process was critical to helping ensure that borrowers who suffered financial injury are identified and appropriately compensated. Acting Comptroller Walsh stated in a speech that the two processes combined are intended to maximize identification of and remediation for borrowers who have suffered financial injury as a result of the deficiencies identified in the orders. Consultants are required to review all eligible requests for review submitted through the borrower outreach process. To make eligible borrowers aware of the opportunity to request a foreclosure review, regulators required servicers to develop an outreach process. The servicers’ borrower outreach plan includes multiple methods, including direct mail, print advertising, a toll-free phone number, a website, online marketing, and engaging a third party for community outreach. Since the servicers had contact information for all of the eligible borrowers, direct mail was the primary outreach method chosen. On behalf of the participating servicers, a third-party administrator began mailing uniform outreach letters on November 1, 2011, to 4.3 million borrowers. These outreach letters describe the request-for-review process and include a request-for-review form for borrowers to complete and submit if they believe they suffered financial injury as described in the outreach letter (see fig. 2). The third-party administrator took steps to update addresses of the eligible borrowers who may have lost their homes to foreclosure. A single, coordinated website, toll-free phone number, and national advertising campaign were launched in January and February 2012, to provide information about the request-for-review process. The regulators directed the servicers to develop their outreach plan in consultation with the third-party consultants and approved the plan. 
As of March 2012, borrowers may also submit requests for review via the independent foreclosure review website. The original deadline for submitting requests for review was April 30, 2012, but regulators decided in February to extend the deadline to July 31, 2012. On June 21, 2012, regulators extended the deadline again to September 30, 2012. A second round of national advertising occurred in April and May 2012, and a third round is planned before the deadline. Additionally, a second mailing to eligible borrowers who have not responded is scheduled for June 2012. The mailing directs borrowers to call the toll-free phone number or access the independent foreclosure review website for information or to submit a request-for-review form. In addition to the servicers’ coordinated efforts, regulators also have posted information about the foreclosure review on their agencies’ websites and issued press releases. Further, OCC has distributed public service announcements to small publications and radio stations, and the Federal Reserve developed a video to inform borrowers about the review process. As part of the outreach approach, the servicers formed a consortium to develop the initial outreach letter and request-for-review form with input from third-party consultants and approval from the regulators. The servicers and regulators did not test these communication materials with the borrowers or their community group advisers. Regulators consulted with and incorporated feedback from consumer groups on subsequent advertising and mailings to improve the format and clarity of current materials. However, according to representatives of these groups and our readability tests, the initial materials and the independent foreclosure website may be difficult for some borrowers to understand. 
In addition, the materials did not include specific information about the type of potential remediation borrowers could receive, which could affect borrowers’ motivation to respond and submit a request for review. Servicers formed a consortium to develop the initial communication materials, including the outreach letter and request-for-review form mailed to eligible borrowers. Because the consent orders did not outline the specifics of a borrower outreach process, regulators provided servicers and consultants guidance in July 2011 outlining their expectations for mailing notifications to eligible borrowers and national advertising, among other requirements. Representatives of servicers with whom we spoke told us that after receiving this guidance the servicers decided to form a consortium to develop a coordinated outreach process and uniform communication materials. A representative of one servicer and regulatory staff said that this approach would reduce potential confusion among borrowers that could result if each servicer had developed separate advertisements, websites, and outreach letters. Therefore, the servicers worked together to develop initial drafts of the communication materials, relying primarily on the expertise of their internal marketing departments and class action lawsuit notices as a model for notifying borrowers of the request-for-review process. The third-party consultants reviewed the communication materials and provided their input. After the consultants’ review, the regulators also provided comments on the outreach plan and content of the communication materials and ultimately gave their final approval. Although servicers developed the initial communication materials with input from third-party consultants and regulators, the servicers and regulators did not test the materials with the target audience. 
Our previous reports and federal guidelines about using plain language in public documents have emphasized the importance of testing communication materials, such as conducting focus groups or assessing their readability, before implementing them. For example, in a previous report we have stated that consumer testing can validate the effectiveness of messages and information or measure readers’ ability to comprehend them. We also have found that in order to develop clear and consistent audience messages, testing and refining language are important. The Plain Writing Act of 2010 states that starting October 13, 2011, agencies must use plain writing when issuing new or substantially revised documents, including documents that explain to the public how to comply with a requirement that the federal government administers or enforces. The act defines “plain writing” as “writing that is clear, concise, well-organized, and follows other best practices appropriate to the subject or field and intended audience.” In addition, federal guidelines developed to help executive agencies implement the Plain Writing Act of 2010 state that testing documents, including applications and websites, should be an integral part of the plain-language planning and writing process, especially when writing to millions of people. Finally, the Securities and Exchange Commission’s handbook for companies preparing required disclosure documents to investors in easy-to-understand language states that testing documents with a focus group can provide helpful feedback on how well the document communicates information and identify any confusing language. 
Representatives of one servicer and a consultant we interviewed said the consortium considered testing the communication materials with borrowers or conducting focus groups, but that the time frames were too short to take these steps and incorporate any changes by the November 2011 deadline by which regulators expected the outreach campaign to be launched. The servicer representative noted that because regulators provided guidance in July 2011 and initially expected an August 2011 launch, the servicers had only 60 days to develop the coordinated communication materials. According to this representative, conducting tests with focus groups could take 6 to 8 weeks. Federal Reserve staff said they wanted to get the outreach process started quickly so that financially injured borrowers could receive remediation as soon as possible. According to these staff, no formal readability tests or focus groups with the target audience were conducted, partly due to their interest in expediting the remediation process. However, they consulted with staff in the agency’s Division of Consumer and Community Affairs for feedback on improving the communication materials to help ensure consumers could understand them. OCC staff also confirmed that no formal testing of the communication materials was conducted, but OCC also provided the materials to its Public Affairs and Community Affairs groups, which reviewed the materials for readability, and incorporated changes. Readability tests of the outreach letter and request-for-review form mailed to eligible borrowers and the website language indicate that these materials were written at a level above the reading proficiency of many borrowers. Federal plain language guidelines note that technical terms may be necessary, but that agencies should define them and avoid legal and technical jargon, where possible. 
At the same time, the guidelines state that agencies should take into account their audience’s current level of knowledge when preparing documents and that the documents should be easy to understand. (According to regulatory staff, the August 2011 deadline was extended when it became apparent that the coordinated approach would require additional time.) An assessment of the reading level of the U.S. population indicated that nearly half of the adult population is estimated to read at or below the eighth-grade level. We have previously reported that to help ensure that the complex information public companies are required to disclose is written in plain language and is understandable, the Securities and Exchange Commission recommends that materials be written at a sixth- to eighth-grade level. However, one consumer group conducted a readability test of the language in the communication materials mailed to eligible borrowers and found that they were written at a second-year college reading level. Because the scheduled second wave of mailings and advertising directs borrowers to the independent foreclosure review website to obtain more information about the review and submit a request-for-review form, we conducted readability tests of the language used in the online request-for-review form. We used three tests that score how hard a piece of writing is to read based on quantitative measures, such as average number of syllables in words or numbers of words in sentences. One of these tests used the same method the consumer group used to evaluate the outreach letter and request-for-review form. These tests indicate that the website is written at an average of an eleventh-grade reading level, which is lower than the test results of the outreach letter and paper request-for-review form, but still above the average reading level of the U.S. population. Certain sections of the website required higher or lower reading levels to be understandable. 
For example, the legal section of the online submission form, where borrowers acknowledge that they are requesting a review of their foreclosure and certify that the information is truthful, was written at a fifteenth-grade level, the equivalent of 3 years of college education. However, one test indicated that the language used on the part of the form where borrowers input their contact information required only an eighth-grade reading level. As a whole, these tests are one indicator that portions of the foreclosure review communication materials may be too complex to ensure effective communication of all the relevant information. The readability tests have some limitations, and regulatory staff told us that they considered plain language guidelines when evaluating the materials. We note that the readability ratings only reflect the length of sentences and the length in syllables of individual words in the sentences and do not reflect the complexity of ideas in a document or how clearly the information has been conveyed. Because the content in these materials refers to mortgages, some complex terms and phrases, such as foreclosure and loan modification, may be unavoidable. Regulatory staff told us that they were aware of the plain language guidelines and discussed using plain language so that the materials were likely to be understood. For example, they noted that they did not include unnecessary legal and technical language, but said it was difficult to convey complex mortgage and legal terms in simple language that would still clearly and precisely present the intended message. Federal Reserve and OCC staff noted that to the extent the Plain Writing Act applies to the servicers’ borrower outreach communication materials, they believed they had met the act’s requirements. 
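Readability tests of this kind score text from average sentence length and syllables per word. As a rough sketch, and not the specific tests used in the review, the widely used Flesch-Kincaid grade-level formula can be computed as follows; the syllable counter here is a crude vowel-group heuristic, so scores are approximate:

```python
import re

# Approximate Flesch-Kincaid grade level. The syllable count is a rough
# vowel-group heuristic, so treat results as indicative only.

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

# Short, simple sentences score at a low grade level; dense legalese
# scores far higher.
easy = fk_grade("The cat sat on the mat.")
hard = fk_grade("Notwithstanding considerable documentation deficiencies, "
                "remediation procedures necessitated comprehensive "
                "independent verification.")
```

As the formula shows, a grade score reflects only word and sentence length, which is why such tests cannot capture the clarity of ideas or the familiarity of legal terms to a given audience.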
In addition to stating that agencies should take the audience’s current level of knowledge into account, federal guidelines on using plain language also state that agencies should use language the audience knows and feels comfortable with when creating documents, including websites. Representatives of consumer groups we interviewed expressed concern about the initial lack of materials available in languages other than English. According to 2008-2010 Census Bureau American Community Survey data, about 12.7 million adults in the United States— 5.5 percent of the total U.S. adult population—reported speaking English not well or not at all. In addition, as shown in figure 3, populations with limited English proficiency tend to be more concentrated in certain parts of the country. To the extent that these concentrations are also in areas with high numbers of foreclosures that servicers did not target with Spanish-language advertising, limited English proficiency could affect borrowers’ ability to complete their request-for-review form. We have previously reported that a lack of proficiency in English can affect financial literacy—the ability to make informed judgments and take effective actions on the current and future use and management of money. This report also stated that limited English proficiency can be a significant barrier to completing applications (such as the request-for-review form), asking questions about additional fees on credit card statements or correcting erroneous billing statements, and accessing educational materials such as print advertising or websites that are not available in languages other than English. Further, this report noted that having limited proficiency in English exacerbates the challenges of understanding complex information in financial documents. 
This report also acknowledged that factors other than language often serve as barriers to financial literacy for people with limited English proficiency, including a lack of familiarity with the U.S. financial system, cultural differences, mistrust of financial institutions, and income and education levels. Federal Reserve staff said that they required the servicers to handle the borrower outreach communication with non-English speaking borrowers in accordance with the servicers' existing policies and procedures pertaining to such borrowers, which must comply with existing laws and regulations. However, because the initial communication materials were not available in languages other than English, borrowers with limited English proficiency may not have had the same opportunity as proficient English speakers to request a foreclosure review. Regulators did not initially solicit input from consumer and community groups when evaluating the language used in the communication materials but have since taken steps to address these groups' concerns. Representatives of several consumer and community groups we interviewed said that they have direct experience working with distressed borrowers or in developing national outreach campaigns. Regulators acknowledged that they initially did not obtain input from these groups when evaluating the early communication materials, but they have since held several meetings with selected groups to obtain their feedback on the outreach process and requested feedback from them on the current advertisements and mailings, as well as certain prior communications. For example, Federal Reserve and OCC staff noted that both regulators incorporated feedback from these groups to enhance readability, include more Spanish translations, and improve how borrowers might respond to the second print advertisement and to the content and exterior of the second mailing.
Regulators also made changes to increase accessibility for non- English speaking borrowers that are consistent with the feedback from consumer groups, such as requiring servicers to add frequently asked questions and a guide to filling out the request-for-review form in Spanish to the independent foreclosure review website. In addition, regulatory staff said they required servicers to include references to available assistance in other languages at the call center on the independent foreclosure review website and in communication materials. The regulators also have taken their own initiative to enhance the communication materials. For example, they have posted on their agencies’ websites an archived version of the two webinars they hosted to educate community groups that assist borrowers with housing issues about the foreclosure review process as well as English and Spanish transcripts of the webinar. The agencies also consulted with the U.S. Department of Justice in December 2011 on the measures taken by the agencies to ensure that the independent foreclosure review is accessible to non-English speakers. In addition, the Federal Reserve released a YouTube video that provides information about the foreclosure review in Spanish and English. Further, OCC produced public service announcements and distributed them to more than 700 Spanish-language newspapers and 500 Spanish-language radio stations. Consumer group representatives involved in discussions about outreach with the regulators told us the recent improvements were positive, but said that they would like to see documents and information on the website offered in additional languages, language further simplified, and legal terms explained. 
For example, the webinar materials provide tips on how to answer request-for-review form questions that define terms, clarify questions, and indicate what additional documentation to reference; however, this information is not available on the independent foreclosure review website where borrowers are encouraged to submit their request-for-review forms. Although regulators have ensured that some Spanish language materials are available, these materials may still be difficult for Spanish-speaking borrowers to understand. We have previously reported that in some cases even translations of materials may not be fully comprehensible if they are not written using colloquial or culturally appropriate language. In addition, a 2004 report by the National Council of La Raza noted that literal translations of financial education materials from English to Spanish are often difficult for the reader to understand. Federal Reserve staff acknowledged that some terms do not translate well, and said they consulted with two consumer groups with Spanish translation capability as well as native Spanish-speaking staff in the Division of Consumer and Community Affairs for advice on terms to use. Our analysis of the Spanish guide to the request-for-review form available on the independent foreclosure review website indicated that the Spanish translation in the guide uses language similar in complexity to that of the English form, which we found requires a reading level higher than the national average. In addition, the English outreach letter is not translated, and some of the key information, such as the purpose of the review or the deadline for submitting the form, is not included on the cover of the Spanish guide, although regulatory staff noted that the deadline is included in bold text on the second page of the guide. Further, some of the terms and phrases that have been translated literally may be difficult to understand.
For example, the term eligible is used in the English and Spanish documents, but this term has a different meaning in each language. In Spanish, "eligible" means "available" (that is, an option one is allowed to choose), rather than "qualified to participate or be chosen" as it does in English. Further, the Spanish word "administrador" is used to refer to both the mortgage servicer and the third-party administrator collecting request-for-review forms on the servicers' behalf, which could be confusing given the different roles of these two entities and that the review process is intended to be independent of the servicer. Regulatory staff said that to distinguish between the two functions, the term is capitalized when referring to the third-party administrator. Further, because Spanish readers must refer to the guide and the English form simultaneously, they could make mistakes in recording information on the English form. According to regulator guidance to consultants, if borrowers do not select any specific areas of financial injury but sign the request-for-review form and provide current contact information, consultants will review the case for all types of financial injury. However, if borrowers select areas of financial injury on their request-for-review forms, consultants will review those areas specifically, so mistakes in filling out the form could affect which aspects of borrowers' foreclosure cases the consultants review. The content of the foreclosure review communication materials includes general information about the nature and terms of the request-for-review process. The communication materials follow regulators' guidance on the content of the materials issued to borrowers, which includes why the borrower is being contacted, how eligibility will be determined, how borrower information needed to conduct the foreclosure review will be collected, how borrowers may contact their servicer, and when to expect a response.
For example, to describe the nature of the foreclosure review process, the letter states that the purpose is to identify customers who may have been financially injured due to errors, misrepresentations, or other deficiencies during the foreclosure process. To identify the borrowers affected, the outreach letter states the eligible population of customers is borrowers whose primary residence was in foreclosure between January 1, 2009, and December 31, 2010. Additionally, the outreach letter outlines the steps of the review process and states that borrowers will receive a letter with the findings of the review. The information in the outreach letter is similar to what is typically included in a class action lawsuit notification. Regulatory staff and servicers informed us that they generally modeled the communication materials on class action lawsuit notifications. For example, sample communication materials for class action lawsuits designed by the Federal Judicial Center (FJC) include specific information about the nature and terms of a class action, including what the lawsuit is about, who is eligible, and participants’ legal rights and options. In addition to information about the nature and terms of the review, best practices and consumer groups also suggest including specific information about remediation in communication materials to help motivate eligible participants to respond. For example, outreach models for class action lawsuits and industry examples include specific information about the amount or type of remediation participants can expect to receive. While these models contain features that may not be applicable to each aspect of the foreclosure review, they do provide insights into the types of information that might incentivize individuals to participate and therefore improve response rates. 
Federal Judicial Center—Sample class action communication materials, including notices and flyers, provide specific financial remediation information, such as stating a minimum, maximum, and average amount of compensation, by category of participant, that a class member may receive from a settlement (fig. 4). Another example provides the amount of compensation a class member may receive depending on the number of claim forms received. To develop these models, the FJC conducted "plain language" testing with nonlawyers, focus groups of ordinary citizens from diverse backgrounds, and survey testing for reading comprehension. According to these tests, participants' motivation to read and comprehend class action notices can significantly improve as a result of changing the language, organizational structure, format, and presentation of the notice. National Association of Consumer Advocates—Foreclosure class action guidelines this group developed recommend, among other things, that participants should be informed of the total amount of relief to be granted, stated in dollars, and the nature and form of the individual relief each class member could obtain. These guidelines also note that participants should be informed of the full range of recoveries that they could obtain, either at trial or through the settlement. National Mortgage Settlement—A settlement between 49 state attorneys general, the federal government, and the five leading mortgage servicers for improper foreclosure practices will result in approximately $25 billion in monetary sanctions and relief. Summary documentation provided on the National Mortgage Settlement website specifically states the total amount of the settlement and the approximate amount eligible borrowers can expect to receive.
For example, of the approximately $25 billion total, this documentation states that about $1.5 billion of the settlement funds will be allocated to compensation for borrowers who were not properly offered loss mitigation or who were otherwise improperly foreclosed on. It also provides the specific amount that those borrowers will be eligible to receive—a uniform payment of approximately $2,000 per borrower, depending on the level of response. In addition, the summary documentation states that servicers are required to provide specific amounts of assistance to servicemembers whose foreclosure violated SCRA. Groups Experienced with Counseling, Representing, or Educating Distressed Borrowers—Consumer and community groups, including housing counselors, have advocated for including specific information about remediation that can help motivate participants to respond. Representatives of consumer groups we interviewed said that providing this information to borrowers and their advocates would allow them to make informed choices about submitting a request for review. They noted that even identifying the types of remediation available by category, such as moving expenses or costs due to delay, could be helpful. Consumer groups also informed us that such motivation is important because borrowers may be reluctant to submit their request-for-review form due to mistrust in government and the fatigue of repeated attempts to resolve a mortgage-related issue with a servicer. They said that borrowers who already have been through a taxing loan modification process and have little confidence in the system may be reluctant to go through this process again without a clear outcome. 
In a previous report, we discussed borrowers' frustration, as reported by housing counselors, with delays in loss mitigation processes and borrower fatigue as a result of lost documentation, long trial modification periods, wrongful denials, difficulty contacting their servicer, and questions about the loan modification program or application. Regulatory staff noted that settlements in class action lawsuits are different from the foreclosure review, and therefore may not be a fair comparison. For example, they noted that settlements typically involve a predetermined total amount of remediation that is to be divided up, often proportionally, and then paid to the participating class members, all of whom are assumed to have suffered the injury common to the class. In contrast, as part of the foreclosure review, servicers are not required to provide a predetermined total amount of remediation to financially injured borrowers identified in the foreclosure review. Rather, Federal Reserve officials clarified that the servicers are required to pay whatever total amount is appropriate to remediate the financial injury. In addition, OCC staff noted that, unlike a class action lawsuit settlement where the class of injured borrowers is identified and the range of remediation is known at the outset, the 4.3 million borrowers include borrowers who may or may not have suffered financial injury within the scope of the regulators' consent orders. Further, regulatory staff noted that the National Mortgage Settlement, much like the class action settlements referenced above, involves a predetermined total amount of monetary sanctions and consumer relief, unlike the remediation that servicers must provide financially injured borrowers identified during the foreclosure review.
Federal Reserve officials stated that although the regulators have not yet made detailed information about the amounts and types of remediation that may be provided to financially injured borrowers publicly available, the August 29, 2011, guidance from the regulators to the third-party consultants identifying types of injuries that may warrant remediation has been made publicly available, including in the testimony of an OCC official. Further, OCC staff noted that the request-for-review form, independent foreclosure review website, and agency websites include information that describes the types of financial injury that would be covered. At the time of our review, the regulators had not yet announced guidance regarding the amount of financial remediation that would be provided. However, OCC and Federal Reserve officials told us that public release of a financial remediation framework that contains detailed information regarding dollar amounts that may be associated with particular injury types was forthcoming. Testimony by Suzanne G. Killian of the Federal Reserve Board before the Committee on Oversight and Government Reform, U.S. House of Representatives, on March 19, 2012, also mentions, as discussed earlier, that financial remediation guidance is being considered that will clarify expectations as to the amount and type of compensation recommended for certain categories of injury to help ensure consistent recommendations across the servicers for borrowers who suffered similar types of injury. Regulator-sponsored webinars for community groups stated similarly that the financial remediation framework will address borrowers' questions about the kinds and amounts of remediation that will be offered for different types of injuries.
Specific information about potential remediation could be difficult to present in a simple manner given the 22 potential types of injury the agencies identified and the various unique borrower circumstances that could affect the type and amount of remediation borrowers will receive. Federal Reserve staff explained that providing borrowers specific information about remediation also is difficult because, as noted earlier, regulators have not set a predetermined total amount of remediation, and the foreclosure review is not yet complete so consultants have not yet identified all financially injured borrowers. Regulators are developing financial remediation guidance that is intended to serve as a baseline standard yet provide flexibility to the consultants to address the borrower's direct financial injury. As of May 2012, regulatory staff said that they were still in the process of preparing such guidance, which the regulators intend to publish when it is finalized. As a result, representatives of some servicers we interviewed told us they could not include remediation information in the communication materials. Federal Reserve staff and one servicer also expressed concern that providing this information might confuse borrowers or raise false expectations for what compensation they might receive. However, a recent OCC speech provides some specific information about the potential range of remediation categories, which consumer groups said could help increase borrowers' motivation to submit a request for review. For example, the remarks state that remediation for financial injuries may include, but is not limited to, lump-sum payments, rescinded foreclosures, reimbursements of lost equity, repayment of out-of-pocket expenses resulting from the error plus interest, correction of erroneous amounts owed in applicable records, and correction of credit reports.
Similar information on a potential range of remediation categories is not discussed in any of the regulators' other communication materials that we reviewed. Given the potential difficulties in reaching and motivating this population, borrowers might not be motivated to respond without information about available financial remediation. The initial coordinated servicer outreach plan approved by regulators provided for a uniform outreach process with additional targeted outreach to African-American and Spanish-speaking borrowers. In developing the outreach activities, servicers did not analyze the target audience for characteristics—such as those associated with low financial literacy—that may have limited some borrowers' ability to respond to outreach activities. To address concerns that borrowers may not respond to outreach from servicers, a third-party entity serves as the contact point for borrower mailings and questions. While OCC and the Federal Reserve have acknowledged community groups as effective messengers to reach the target audience and have encouraged servicers to coordinate with these groups, servicers have leveraged outreach through community groups to varying degrees. Regulators regularly monitored the status of the outreach activities, but did not compare respondents to nonrespondents to determine whether certain groups of borrowers were underrepresented in the response to the initial outreach activities. Without this analysis, the extent to which the outreach process has effectively complemented the file-review process to identify borrowers who may have suffered financial injury is unclear. Regulators approved a uniform process to reach eligible borrowers, with additional targeted outreach limited to African-American and Spanish-speaking borrowers. According to Federal Reserve staff, the 14 servicers covered by the consent orders service more than two-thirds of U.S. mortgages.
According to the outreach plan developed by servicers and approved by OCC and the Federal Reserve, the target audience of 4.3 million eligible borrowers—that is, borrowers whose loans on their primary residences had been in some stage of foreclosure in 2009 or 2010—is concentrated in those states that experienced higher foreclosure rates, but broadly represents the U.S. population as a whole, covering all ages and income levels. Therefore, servicers determined that the best way to reach their target audience of all eligible borrowers was to use direct mail—the same outreach letter and request-for-review form were mailed to all eligible borrowers—and to place advertisements in four national publications. According to the servicers' analysis, they selected these publications for their large circulation both nationally and in states with heavy foreclosure volume, as well as for their broad appeal among both men and women and across different ages and income levels. In addition to print advertisements, servicers provided online paid search advertising to assist borrowers using the Internet to find the independent foreclosure review website, and OCC ran English-language public-service radio and newspaper advertisements in small radio stations and newspapers in 38 states. According to regulator guidance on the outreach process, the process was intended to be robust and to ensure that all borrowers had a fair opportunity to file a request-for-review form. According to Federal Reserve staff, the regulator had already planned to conduct some outreach in Spanish prior to meeting with community groups. Servicers also advertised in one additional national print publication that primarily targets the African-American community. Our prior work has found that effectively reaching targeted audiences through outreach activities requires analysis of the target audience, including dividing the audience into smaller groups of people who have relevant needs, preferences, and characteristics.
For example, one way to divide the foreclosure review target audience into smaller groups would be to analyze the geographic location of the target audience by Metropolitan Statistical Area (MSA) or zip code, rather than by state. These divisions could enable more refined outreach—such as concentrated advertising in local publications and on local radio stations, or holding community outreach events in addition to direct mail and national advertising—in those areas with high concentrations of the target audience. As illustrated in figure 5, our prior work analyzing foreclosure trends among nonprime loans found that as of June 2009—near the beginning of the review period—concentrations of loans in the foreclosure process varied by congressional district, even in those states with high default and foreclosure rates such as California and Florida, indicating that targeted outreach in these areas could be more likely to reach eligible borrowers. Other outreach campaigns have used this type of analysis to target their outreach activities, including a congressionally appropriated national loan modification scam alert campaign conducted by NeighborWorks America, a government-chartered, nonprofit corporation. That campaign analyzed data on minority homeowners and mortgage performance to identify areas within hardest-hit states for targeted outreach—for example, areas of California south of San Francisco. While the regulators have taken a number of steps to improve the servicers' outreach associated with the independent foreclosure review over time by improving the format of communication materials, incorporating feedback from consumer and community groups, and increasing outreach to particular populations, among other things, opportunities for further improvement remain. An effective outreach process is designed to reach all segments of its audience, regardless of such factors as reading level and language spoken, among others.
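The geographic segmentation described above amounts to a simple tally of eligible borrowers by small geographic unit. A minimal sketch of that step follows; all zip codes, loan identifiers, and counts are invented for illustration, not drawn from the review's data.

```python
from collections import Counter

# Hypothetical eligible-borrower records; zip codes and loan IDs are
# invented purely to illustrate the segmentation step.
eligible_borrowers = [
    {"loan_id": 1, "zip": "33125"},
    {"loan_id": 2, "zip": "33125"},
    {"loan_id": 3, "zip": "33125"},
    {"loan_id": 4, "zip": "92335"},
    {"loan_id": 5, "zip": "92335"},
    {"loan_id": 6, "zip": "60601"},
]

# Count the target audience per zip code and keep the areas with the
# heaviest concentrations as candidates for local advertising or
# community outreach events.
by_zip = Counter(b["zip"] for b in eligible_borrowers)
concentrated = [zip_code for zip_code, n in by_zip.most_common() if n >= 3]
```

The same tally could be keyed by MSA or congressional district instead of zip code; the point is that state-level totals hide exactly the local concentrations that make targeted advertising cost-effective.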
Neither the servicers nor the regulators conducted readability testing or focus groups with the target audience of eligible borrowers, and regulators initially did not solicit input from consumer or community groups familiar with these borrowers. Readability tests of the outreach letter, request-for-review form, and website indicate that a high school or even a college reading level may be required to understand them; however, the use of some complex terms may be unavoidable. In addition, although some information is now available on the website in Spanish, the initial communication materials were not available in languages other than English. Our previous reports and federal plain language guidelines indicate that whether agencies are preparing documents or requiring private sector companies to prepare them, testing communication materials is a sound practice to help ensure that the audience can understand them and use them to take action. Moreover, complexity in the communication materials may prevent people from becoming sufficiently aware of the foreclosure review, and the prospect of confusing or complex forms may discourage people from participating. In addition, borrowers with low financial literacy, including those with limited English proficiency, may have difficulty accessing and understanding the materials, potentially affecting the likelihood that they will request a review. Because communication materials were not tested and were written at a high reading level, some eligible borrowers might have had difficulty understanding them. To the extent the accessibility of the communication materials affects certain groups' likelihood of responding, they may not have had a fair opportunity to request a foreclosure review as the regulators intended the outreach process to provide.
With the second wave of advertising and the additional mailing directing eligible borrowers to the independent foreclosure review website, ensuring that the online request-for-review form is understandable is especially important. In addition, although the communication materials provide information about the purpose, scope, and process for the foreclosure review, and the types of financial injuries covered, as well as disclosing that borrowers could be eligible for compensation, they do not include specific information about the potential types or amounts of remediation borrowers may receive. Specifically identifying that the types of remediation may consist of such items as lump-sum payments, rescinded foreclosures, repayment of out-of-pocket expenses, or correction of credit reports could help motivate borrowers to respond. Industry best practices and examples for notifying borrowers about class action lawsuits, which regulatory staff and servicer representatives used as a model in developing the materials, include specific information about the types and amounts of remediation for which participants could be eligible. Without specific information about remediation in communication materials, some borrowers may not be motivated to submit a request-for-review form. Finally, the planning, and in particular the evaluation, of the borrower outreach process were based on limited analysis of eligible borrowers. Although servicers conducted some targeted outreach to African-American and Spanish-speaking borrowers, in part due to feedback from consumer groups, the outreach process was largely uniform. Our prior work has found that analyzing the target audience by various characteristics and identifying messengers the audience will consider credible helps ensure that the outreach is effective.
However, in approving the outreach plan, regulators did not require servicers to conduct such analysis, and although the regulators have encouraged servicers to work with community groups that have experience as trusted advisers to distressed borrowers, servicers have done so to varying degrees. We have also found that evaluating the effectiveness of past activities is important before expanding them, such as by conducting additional advertising or mailings to eligible borrowers. Regulators have monitored the status of outreach activities, but have not analyzed the differences in characteristics between respondents and nonrespondents in planning the additional outreach efforts. This analysis could help identify whether any groups of borrowers, particularly those borrowers with characteristics that could make them less likely to respond to the request for review, are underrepresented. The results of such analysis also could provide regulators, third-party consultants, and servicers with the information to target additional outreach to any underrepresented groups or to make changes to the file-review sampling process to ensure that all borrowers are fairly represented. We acknowledge that because the borrower outreach and look-back review are complementary, the outcomes of the foreclosure review cannot fully be evaluated until the look-back review is completed. However, until analysis of the characteristics of respondents compared to nonrespondents is conducted, the potential remains that certain subgroups of eligible borrowers do not have a fair opportunity to request a foreclosure review. OCC and the Federal Reserve have taken steps to improve the outreach from the initial roll-out.
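The respondent-versus-nonrespondent comparison described above could, under simplifying assumptions, be sketched as follows: compute a response rate per area, compare it to the overall rate, and flag the areas falling well below it. All tallies and area codes here are invented, and the flagging threshold (half the overall rate) is an arbitrary choice for illustration.

```python
# Hypothetical tallies of eligible borrowers and submitted
# request-for-review forms, grouped by area (all figures invented).
eligible = {"33125": 1000, "92335": 400, "60601": 250}
responded = {"33125": 20, "92335": 60, "60601": 10}

overall_rate = sum(responded.values()) / sum(eligible.values())
rates = {area: responded[area] / eligible[area] for area in eligible}

# Flag areas responding at less than half the overall rate as
# candidates for targeted follow-up outreach or, failing that,
# expanded coverage in the look-back file review.
underrepresented = sorted(a for a, r in rates.items() if r < 0.5 * overall_rate)
```

The same comparison could be run by servicer or by borrower characteristic rather than by area; whatever the grouping, the output gives regulators a concrete list of underrepresented subgroups to target before the request-for-review deadline.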
To further increase the possibility that all borrowers have a fair opportunity to request a foreclosure review, the Comptroller of the Currency and the Chairman of the Board of Governors of the Federal Reserve System should take the following actions: Enhance the readability of the request-for-review form on the independent foreclosure review website so that it is more understandable for borrowers, such as by including a plain language guide to the questions. Require that servicers include a range of potential remediation amounts or categories in communication materials and other outreach, such as direct mailings to borrowers, public service announcements, the independent foreclosure review website, regulators’ websites, and officials’ testimonies and speeches. Require servicers to identify trends in borrowers who have and have not responded by factors such as MSA, zip code, servicer, and borrower characteristics and report to the regulators on weaknesses found. If warranted, regulators should require that servicers, in consultation with their third-party consultants, conduct more targeted outreach to better reach underrepresented groups, such as considering more credible messengers to reach these groups. If such action cannot be taken prior to the deadline for requests for review, regulators should consider expanding the look-back review to better ensure coverage for underrepresented groups. We requested comments on a draft of this report from OCC and the Federal Reserve. We received written comments from OCC and the Federal Reserve that are presented in appendixes II and III, respectively. Both agencies emphasized that the outreach process that we focused on in this report is one part of a larger effort to identify financially harmed borrowers for remediation. The Comptroller of the Currency noted in his written comments that OCC shares the goals reflected in the report and is in the process of addressing each of the recommendations. 
The Comptroller’s letter also provides a more detailed list of initiatives OCC has undertaken related to the borrower outreach process, which is consistent with the actions we summarized in the draft report. The Director of the Division of Consumer and Community Affairs of the Board of Governors of the Federal Reserve System also noted that the Federal Reserve has begun implementing each of the recommendations. First, the letter states that the agencies plan to post a plain language guide to completing the request-for-review form to the agencies’ and independent foreclosure review websites. Second, the letter states that once a framework describing the range of potential remediation is finalized the regulators will issue press releases, post the framework on the regulators’ and independent foreclosure review websites and in frequently asked questions, and hold briefings with consumer and community groups. Third, the letter states that the Federal Reserve is conducting analysis to identify any gaps in respondents by geography and certain borrower characteristics, which will be publicly released to promote targeted outreach. The Director’s letter noted the limitations of readability formulas in assessing how well the foreclosure review communication materials could be understood. We had also acknowledged these limitations in the draft report and noted that they are just one indicator of the readability of the materials. However, the results of these formulas when combined with other evidence, such as feedback from consumer and community groups who have had direct interaction with distressed borrowers, suggested that more could be done to clarify the communication materials. The plain language guide for borrowers completing the request-for-review form that the Federal Reserve is in the process of completing is an important step in addressing readability. 
The Director’s letter also stated that the comparison between the foreclosure review communication materials and class action lawsuit settlement materials was “imprecise and not appropriate.” In the draft report, we acknowledged the differences between these two activities and the difficulty in providing specific information on remediation in the case of the foreclosure review. Because the borrower outreach process and materials were generally modeled after class action lawsuit activities, we considered them applicable criteria and presented the comparison as an illustrative example of the type of information that has been found to be helpful in motivating participation. The steps the regulators are taking to publicly release information on the types and amounts of remediation that financially harmed borrowers might receive as a result of the foreclosure review are also important for promoting participation. We received technical comments from each regulator, which we incorporated where appropriate. In OCC’s letter, the Comptroller of the Currency stated that the agency provided us with specific report line edits that reflected both substantive comments and technical or editorial suggestions. The substantive comments emphasized that the outreach component of the foreclosure review was an additional step not typically taken in enforcement actions and provided more information on the actions the agencies and servicers took to reach out to potentially harmed borrowers. In addition, OCC staff raised concerns about the context for our criteria on plain language in the technical comments. The draft report acknowledged the unprecedented nature of the review, and we made changes to the draft report to reflect the other comments, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to interested congressional committees, the Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report focuses on the design and implementation of the borrower outreach process to determine how well information about the foreclosure review is communicated to eligible borrowers with different characteristics. Specifically, this report addresses (1) the extent to which the development of the approach and content of the communication materials and website reflected best practices; and (2) the extent to which the planning and evaluation of the outreach and advertising approach considered the characteristics of the target audience. To determine the extent to which the development of the approach and content of the communication materials and foreclosure review website reflected best practices, we (1) reviewed regulator documents, engagement letters, and outreach materials; (2) conducted readability testing of the online outreach letter and request-for-review form; (3) analyzed data on the extent of limited English proficiency in the United States and the effects of limited English proficiency on financial literacy; and (4) assessed the extent to which the remediation content in the communication materials reflects best practices. 
To do this, we reviewed key documents, including regulator guidance on outreach activities, the outreach plan, engagement letters between servicers and third-party consultants, and outreach materials, such as the outreach letter, request-for-review form, foreclosure review website, online request-for-review form, frequently asked question guide accompanying the foreclosure review website, print advertising materials, the reminder postcard, and community group webinar materials. We compared these documents with best practices we had previously established and guidelines established by federal agencies related to testing materials with the target audience prior to use and ensuring that materials are clearly written and take into account the audiences’ current level of knowledge. We considered these outreach practices applicable to the outreach campaign for the foreclosure review as they were developed to elicit a one-time action—similar to filing a request-for-review form—from the target audience. In our prior work analyzing the planning, implementation, and evaluation of outreach campaigns, we developed standards by convening an expert panel of 14 senior management-level experts in strategic communications who identified key planning, implementation, and measurement components for consumer education and outreach. The experts were selected for their experience overseeing a strategic communications or social marketing campaign or other relevant experience and represented private, public, and academic institutions. In addition, we considered our prior work analyzing suggested improvements to the content and format of communication materials. Specifically, for our prior work on credit card disclosures, we conducted interviews, reviewed documents, and analyzed more than 280 comment letters requested by the Federal Reserve in 2005 from issuers, consumer groups, and others as part of the Federal Reserve’s preparation to implement new credit card disclosure requirements. 
Further, we considered the Office of Management and Budget’s Final Guidance on Implementing the Plain Writing Act of 2010 and accompanying Federal Plain Language Guidelines. We also considered the Securities and Exchange Commission’s A Plain English Handbook: How to Create Clear SEC Disclosure Documents. Finally, to describe the reading level of the U.S. population, we reviewed findings from the 1992 National Adult Literacy Survey on adult reading comprehension levels and the subsequent 2003 National Assessment of Adult Literacy (the renamed successor to the 1992 survey). No further assessments have been conducted since 2003. To evaluate the readability of the English language materials on the website, particularly the outreach letter and request-for-review form, we used computer-facilitated formulas to predict the grade level required to understand the materials. Readability formulas measure the elements of writing that can be subjected to mathematical calculation, such as the average number of syllables in words or number of words in sentences in the text, but do not reflect the complexity of ideas in a document or how clearly the information has been conveyed. We edited the text to help ensure that the tests returned accurate results and applied the following industry-standard formulas to the documents: Flesch-Kincaid Formula, Gunning Frequency of Gobbledygook Readability Test (FOG), and McLaughlin Simplified Measure of Gobbledygook Formula (SMOG). Using these formulas, we measured the grade levels at which the website was written, both for each page of the website separately and for the website as a whole (see table 1). We did not verify the accuracy of the formulas implemented by these tests, but we used multiple tests to corroborate the results. Despite limitations, we determined that these tests were sufficiently reliable for our purposes. 
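As an illustration of how such grade-level formulas work, the sketch below implements the published Flesch-Kincaid grade-level equation. It is a minimal approximation: the formula itself is standard, but the regex word splitter and vowel-group syllable counter are our simplifications, not the tokenization used by commercial readability tools.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels. Real readability
    # tools use pronunciation dictionaries; this is for illustration only.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Short, common-word sentences score at a low grade level, while long sentences built from polysyllabic mortgage and legal terms score far higher, which is the pattern such tests flagged in the outreach materials.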
To analyze the quality of the translation and the readability of the Spanish-language outreach materials on the independent foreclosure review website—specifically, the Spanish-language guide for the request-for-review form—a trained translator (1) compared the translation of the Spanish- and English-language materials to assess the extent to which they provided the same information, (2) analyzed the Spanish-language materials for readability, and (3) reviewed the placement and content of the in-language Spanish statements in the English-language materials. The translator’s conclusions were then reviewed by a native Spanish speaker with professional experience translating and writing official documents. We determined that this review was sufficiently reliable for our purposes. To describe the potential pool of eligible borrowers with limited English proficiency, we analyzed data on the extent of limited English proficiency and considered the effects of limited English proficiency on financial literacy. To describe the U.S. population of individuals with limited English proficiency, we updated analysis we conducted for a prior report on financial literacy. We obtained and analyzed data from the United States Census Bureau’s 2008-2010 American Community Survey. As noted in our prior report, the Census Bureau does not define the term limited English proficiency. 
As such, we replicated the measures of the limited English proficient population we used in our prior report based on questions in the American Community Survey that asked “Does this person speak a language other than English at home?” “What is the language?” “How well does this person speak English?” For our purposes, we included in the limited English proficiency estimate individuals over the age of 18 who self-reported that they speak English “not well” or “not at all.” Because the American Community Survey data are a probability sample based on random selections, this sample is only one of a large number of samples that might have been selected. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. In this report, all Public Use Microdata Area-level percentage estimates derived from the 2008-2010 American Community Survey have 95 percent confidence intervals of plus or minus 4.5 percentage points or less, unless otherwise noted. We determined that these data were sufficiently reliable for our purposes. To describe the potential effects on financial literacy from limited English proficiency, we reviewed our prior work and relied on those findings. 
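The margin-of-error arithmetic behind such estimates can be illustrated with the textbook normal approximation for a sample proportion. This is only a sketch with hypothetical numbers; the Census Bureau computes published American Community Survey margins of error using replicate weights, not this simple formula.

```python
import math

def proportion_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95 percent confidence interval for a sample proportion: p +/- z*sqrt(p*(1-p)/n)."""
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

# Hypothetical example: 12 percent of 1,000 sampled adults report
# limited English proficiency.
low, high = proportion_ci(0.12, 1000)
```

With these hypothetical figures the half-width of the interval is about 2 percentage points, which is how a statement like "plus or minus 4.5 percentage points or less" should be read: the sample estimate, bracketed by its sampling uncertainty.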
The work conducted for the prior report included (1) reviewing relevant literature related to financial literacy among immigrants and people with limited English proficiency; (2) conducting interviews with and gathering relevant studies and educational materials from federal agencies, organizations that provided financial literacy and education, and organizations that serve or advocate for populations with limited English proficiency; and (3) conducting a series of 10 focus groups to discuss the barriers that individuals with limited English proficiency may face in improving financial literacy and conducting their financial affairs. We compared the remediation content of the communication materials against class action communication materials for homeowners, construction workers, and product liability suits developed by the Federal Judicial Center (FJC). The FJC developed illustrative notices of proposed class action certification and settlement by studying empirical research and commentary on the plain language drafting of legal documents, testing notices with nonlawyers for comprehension, evaluating them for readability, testing their effectiveness before focus groups composed of ordinary citizens from diverse backgrounds, and conducting a survey. We also reviewed guidelines available from the National Association of Consumer Advocates (NACA) on the form and content of notices for class action settlements. These guidelines address issues such as the scope of class member releases, attorneys’ fees, and notice of settlement. NACA maintains the comprehensive Standards and Guidelines for Litigating and Settling Consumer Class Actions, 176 F.R.D. 375, first published in 1998 and fully updated in 2006, to help ensure that class actions do not lead to restrictions on challenging abusive business practices. The guidelines were intended to be used by consumer class action attorneys as a standard for how to properly proceed, manage, and settle a class action case. 
They were also intended to be used by courts as a guidepost to judge the merits of cases before them. The standards and revised standards were drafted by a committee of consumer attorneys. After initial drafts were completed, the draft guidelines were submitted for comment to all sectors of the legal community, including professors, think tanks, and the defense bar. After these comments were received and considered, a final draft was published. Further, we reviewed summary documentation from the National Mortgage Settlement, reached by the state attorneys general and the Departments of Justice, the Treasury, and HUD with the five largest mortgage servicers over errors in foreclosure practices, to provide a current example of a large-scale settlement involving similar stakeholders and issues similar to those of the foreclosure review. To determine the extent to which the planning and evaluation of the outreach and advertising approach considered the characteristics of the target audience, we analyzed key documents discussed above as well as the indicators and analysis prepared by the third-party administrator to monitor implementation of outreach activities and meeting agendas between regulators and servicers. We compared these documents to best practices for outreach campaigns on analyzing the target audience, identifying credible messengers, and evaluating outreach activities identified in our prior work. As discussed earlier, we considered these practices applicable to the outreach campaign for the foreclosure review because they are specific to campaigns designed to elicit a one-time action. In addition, we considered practices we identified in our previous work on evaluation strategies for information dissemination activities. These strategies were developed based on analysis of case studies of how five federal agencies evaluated their media campaigns or instructional programs. 
Finally, we considered our internal control standards on managing risk and other control activities. To identify additional characteristics to consider in an analysis of the target audience of eligible borrowers, we reviewed our prior work on foreclosure trends by congressional district as well as work on financial literacy. For our analysis of foreclosure trends, we analyzed data from LoanPerformance’s Asset-backed Securities database for nonprime loans originated from 2000 through 2007. The database contains loan-level data on the majority of nonagency securitized mortgages in subprime and Alt-A pools. For example, for the period 2001 through July 2007, the LoanPerformance database contains information covering, in dollar terms, an estimated 87 percent of securitized subprime loans and 98 percent of securitized Alt-A loans. For the purposes of the analysis conducted for our prior report, we defined a subprime loan as a loan in a subprime pool and an Alt-A loan as a loan in an Alt-A pool. We focused our analysis for that report on first-lien purchase and refinance mortgages for one-to-four-family residential units. In preparing our previous report we tested the reliability of these data by reviewing documentation on the process the data providers use to collect and ensure the reliability and integrity of their data, and by conducting reasonableness checks on data elements to identify any missing, erroneous, or outlying data. We also interviewed LoanPerformance representatives to discuss the interpretation of various data fields. We concluded that the data were sufficiently reliable for our purposes. Nonprime loans do not represent the entire universe of loans—for example, they do not include prime loans. However, we determined that these data were applicable as an illustrative example of how analysis conducted below the state level could reveal significant concentrations of certain groups. 
For our analysis of characteristics associated with low financial literacy, we considered our prior work on financial literacy and trends among certain demographics with lower financial literacy. This work included a survey, conducted by a private research firm under contract to GAO, from late July to early October 2004, to determine the extent of consumers’ knowledge of credit reporting issues. The telephone survey was conducted with 1,578 randomly sampled noninstitutionalized U.S. adults aged 18 and over, and the results of this survey generally have confidence intervals of plus or minus 6 percentage points. We noted in our previous report that the practical difficulties of conducting a sample survey may introduce errors into the resulting estimates. These errors include sampling, coverage, measurement, nonresponse, and processing errors. We made efforts in our prior work to minimize each of these. In addition, we generated an average or mean score for the survey as a whole. We then analyzed responses for all the survey questions and scores based on groups of questions for the national sample and cross-tabulated them across different demographic groups and across consumers with different credit-related experiences. Differences across demographic groups and across consumers with different credit-related experiences were tested for statistical significance at the 95-percent confidence level. In addition to cross-tabulations, we used a regression analysis of demographics and other factors that we thought would be associated with consumers’ knowledge of credit reporting issues. We concluded that these data were sufficiently reliable for our purposes. We confirmed these results during an October 2011 forum convened by GAO on key issues related to financial literacy, including identification of special populations that need sustained financial literacy efforts. 
Forum participants included representatives from federal, state, and local government agencies; academic experts; nonprofit practitioners; and representatives from the private sector. We also reviewed our prior work on the financial literacy challenges faced by speakers of limited English (discussed earlier). In addition, we reviewed a government-chartered, nonprofit corporation’s methodology to identify and reach out to its target audience for an outreach campaign targeted to similar borrowers in foreclosure or at risk of foreclosure. Finally, four of the eight consumer groups that we had previously interviewed for this work responded to our request to identify characteristics they considered important to understanding the target audience of eligible borrowers. To determine how well the development of the approach and content of the communication materials and website reflected best practices and the extent to which the planning and evaluation of the outreach and advertising approach considered the characteristics of the target audience, we conducted interviews with the following regulator staff and key stakeholders:

- Staff from OCC and the Federal Reserve.
- Representatives of five mortgage servicers and three third-party consultants responsible for providing third-party reviews of these servicers’ foreclosure activities. To identify a representative mix of servicers and third-party consultants to interview, we considered servicers overseen by both regulators and those that are considered some of the larger servicers.
- Representatives from the third-party administrator hired by servicers to administer the outreach process, including sending out mailings, receiving borrowers’ request-for-review forms, and operating the toll-free customer assistance phone number and website.
- Representatives of 11 consumer groups, community groups, and a mortgage servicing industry association. 
To identify these groups, we considered organizations identified by OCC and the Federal Reserve as stakeholders, groups receiving funding from servicers to conduct community outreach, organizations that provided testimony on the foreclosure review process, and organizations we had identified during the course of other work on foreclosures, including the Department of the Treasury’s Home Affordable Modification Program. In addition, we reviewed speeches, testimony, and responses to official Questions for the Record on the outreach process provided by OCC and Federal Reserve officials, representatives of third-party consultants, and a representative from a consumer group and a mortgage servicing industry association. We conducted this performance audit from February 2012 through June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Karen Tremba (Assistant Director), Jonathan Kucskar, Grant Mallie, Patricia MacWilliams, Marc Molino, Jill Naamane, Carl Ramirez, Robert Rieke, Jennifer Schwartz, Andrew Stavisky, and James Vitarello made key contributions to this report.
In April 2011 consent orders, the Office of the Comptroller of the Currency (OCC), Federal Reserve, and the Office of Thrift Supervision directed 14 mortgage servicers to engage third-party consultants to review 2009 and 2010 foreclosure actions for cases of financial injury and provide borrowers remediation. To complement these reviews, the regulators also required servicers to establish an outreach process for borrowers to request a review of their case. This report examines (1) the extent to which the development of the outreach approach and content of the communication materials and website reflected best practices, and (2) the extent to which the planning and evaluation of the outreach and advertising approach considered the characteristics of the target audience. To conduct this work, GAO reviewed the design and implementation of borrower outreach activities and materials against best practices and federal guidelines and interviewed representatives of servicers, consultants, community groups, and regulators. Regulators and servicers have gradually increased their efforts to reach eligible borrowers and have taken steps to improve communication materials. Conducting readability tests or using focus groups are generally considered best practices for consumer outreach, but regulators and servicers did not undertake these activities. Staff at the Board of Governors of the Federal Reserve System (Federal Reserve) said that this was, in part, a trade-off to expedite the remediation process. Regulators also did not solicit input from consumer groups when reviewing the initial communication materials. Readability tests found the initial outreach letter, request-for-review form, and website to be written above the average reading level of the U.S. population, indicating that they may be too complex to be widely understood. 
Regulatory staff noted limitations to such readability tests and told us they discussed using plain language, but that the use of some complex mortgage and legal terms was necessary for accuracy and precision. Clear language on the independent foreclosure review website is particularly important as current outreach encourages borrowers to submit requests for review online. Communication materials developed by mortgage servicers with input from regulators and consultants included information about the purpose, scope, and process for the foreclosure review and noted that borrowers may be eligible for compensation. However, the materials do not provide specific information about remediation—an important feature to encourage responses as suggested by best practices and reflected in notification examples GAO reviewed. Without information on the type of remediation they may receive, borrowers may not be motivated to participate. The outreach planning and evaluation targeted all eligible borrowers with some analysis conducted to tailor the outreach to specific subgroups within the population. In approving the outreach plan, regulators considered the extent to which the plan promoted national awareness and was appropriate to reach the demographics of the target audience. The outreach process was largely uniform with some targeted outreach to Spanish-speaking and African-American borrowers. GAO has previously found that effective outreach requires analysis of the audience by shared characteristics, but regulators did not call for servicers to analyze eligible borrowers by characteristics, such as limited English proficiency, that may have affected their response. While regulators have identified community groups as effective messengers and encouraged servicers to reach out to them, servicers have leveraged these groups to varying degrees. 
According to consumer groups, borrowers may have ignored communication materials because they did not understand who provided the information and believed the materials were fraudulent. Regulators regularly monitored the status of the outreach activities and analyzed the effect of advertising on response rates. GAO has previously found that analyzing past performance when expanding activities is important. Regulators did not analyze characteristics of respondents and nonrespondents in introducing a second wave of outreach activities. Without this analysis, regulators may not know if certain groups of borrowers are underrepresented in the review. As a result, they cannot determine whether additional outreach to target these groups or changes to the file review process are needed. OCC and the Federal Reserve should enhance the language on the foreclosure review website, include specific remediation information in outreach, and require servicers to analyze trends in borrowers who have not responded and, if warranted, take additional steps to reach underrepresented groups. In their comment letters, the regulators agreed to take actions to implement the recommendations, although the Federal Reserve took issue with GAO’s criteria. OCC also took issue with GAO’s criteria in its technical comments.
The C-17 is being developed and produced by McDonnell Douglas. The Congress has authorized procurement of 40 C-17 aircraft through fiscal year 1996. As of October 1, 1995, McDonnell Douglas had delivered 22 production aircraft to the Air Force. In November 1995, the Department of Defense (DOD) announced plans to buy an additional 80 C-17 aircraft. In addition to procuring the aircraft, the Air Force is purchasing spare parts to support the C-17. The Air Force estimates the total cost for initial spares—the quantity of parts needed to support and maintain a weapon system for the initial period of operation—for the first 40 C-17s to be about $888 million. In January 1994, we reported that the Air Force had frequently ordered C-17 spare parts prematurely. We noted that premature ordering occurred because the Air Force used inaccurate and outdated information, bought higher quantities than justified, or did not follow regulations governing the process. As a result, DOD revised its guidance to limit the initial procurement of spares, and the Air Force canceled orders for millions of dollars of C-17 parts. Initial spares for the C-17 are being procured under two contracts. Some are being provided under the C-17 development contract through interim contractor support. That support, which started in mid-1993, involves providing spares and technical support for two C-17 squadrons through June 1996. As of May 31, 1995, the Air Force had spent about $198 million for interim contractor support. The remaining initial spares are being procured under contract F33657-81-C-2109 (referred to in this report as contract-2109). Under this contract, the Air Force, as of May 31, 1995, had obligated $120 million for initial spares, but negotiated prices for only about $29 million of the spares. The $91 million balance was the amount obligated for parts ordered for which prices had not yet been negotiated. 
McDonnell Douglas produces some spare parts in its facilities at the Transport Aircraft Division at Long Beach, California, where the C-17 is being produced, or at other locations, such as its Aerospace-East Division at St. Louis. It also subcontracts for the production of parts. The subcontractors may be responsible for all aspects of part production or McDonnell Douglas may furnish materials or complete required work. The Air Force paid higher prices for 33 spare parts than appears reasonable when compared to McDonnell Douglas’ historical costs. The 33 spare parts were ordered under contract-2109 and manufactured by McDonnell Douglas’ St. Louis Division. The Long Beach Division had previously purchased them from subcontractors for production aircraft at much lower costs. The St. Louis Division’s estimated costs were from 4 to 56 times greater than the prices that Long Beach had paid outside vendors several years earlier. The parts were in sections of the C-17 assembled by the Long Beach Division for the first four aircraft, but assembled by the St. Louis Division for subsequent aircraft. For 10 parts, McDonnell Douglas had previously purchased the complete part from a subcontractor. For the other 23 parts, it had furnished material to a subcontractor that manufactured the part. While our examination of price increases was limited to 33 spare parts, an Air Force-sponsored should-cost review identified potential savings of $94 million for the C-17 program if work is moved from McDonnell Douglas’ St. Louis Division to outside vendors or other McDonnell Douglas facilities. Air Force officials said that the $94 million savings related only to components for production aircraft. They said that the savings would be higher if spare parts were included. 
We identified 10 parts—7 hinges on the air inlet door to the C-17’s air conditioning system, 2 cargo door hooks, and a door handle on the C-17’s vertical stabilizer access door—that McDonnell Douglas had previously purchased complete from a subcontractor at much lower costs. Information on previous purchase costs, McDonnell Douglas’ manufacturing costs, and the price that the Air Force paid for each of these spare parts are included in appendix I. Details on one of the hinges follow. The Air Force paid $2,187 for one hinge on the air inlet door to the C-17’s air conditioning system. The hinge (see fig. 1) is aluminum, about 4 inches long, 2 inches wide, and ranges from about 1/16 of an inch to 1-3/8 inches thick. The Long Beach Division, which assembled the air conditioning inlet door for initial production, purchased 14 of these hinges from a subcontractor in 1988 for use on production aircraft at $30.60 each. It had also paid the vendor $541 for first article inspection and $2,730 for reusable special tooling. These costs, however, would not have been incurred on future orders. In 1992, McDonnell Douglas transferred the air conditioning inlet door assembly work to its St. Louis Division and that division made the hinge for production aircraft and for the spare part order. The estimated cost for the spare hinge was $1,745, and, with overhead, profit, and warranty factors, the Air Force paid $2,187 for it. The fact that the subcontractor had made the hinge from a special casting while the St. Louis Division machined the hinge from bar stock could be one cause of the higher price. We identified 23 parts—21 different cargo door hooks and 2 different hinge assemblies—where McDonnell Douglas had previously furnished material to a subcontractor who produced the parts at much lower costs. Information on previous purchase costs and McDonnell Douglas manufacturing costs are included in appendix II. Details on one of the door hooks follow. 
The Air Force paid $12,280 for one of the hooks. The hook (see fig. 2) is made of steel and is about 7 inches high, 3-1/2 inches wide, and about 4-1/2 inches thick. For the early production aircraft, the Long Beach Division had, in 1992, furnished material valued at $715 to an outside vendor, which manufactured this hook for $389 (exclusive of the material value). After initially using hooks for production aircraft provided from the Long Beach Division’s inventory, the St. Louis Division made them starting with production aircraft number 12. For the spares order under contract-2109, the St. Louis Division estimated “in-house” manufacturing costs (exclusive of material costs) at about $8,842. McDonnell Douglas officials said that the primary reason for moving various work from the Long Beach Division to the St. Louis Division was to recover from being behind schedule and that sufficient time was not available to procure parts from vendors. McDonnell Douglas officials also said that now that production deliveries are on schedule, they will be reviewing parts to identify the most affordable and effective manufacturing source and that 17 of the 33 parts have been identified as candidates to move out of St. Louis to achieve lower C-17 costs. DOD advised us that DPRO officials at McDonnell Douglas had estimated the cost difference between production by McDonnell Douglas versus subcontractors for the 33 parts to be $141,000 and, after further analysis, had determined that $65,000 was excessive. McDonnell Douglas refunded that amount in December 1995. Our review of the data submitted to support the pricing of selected spare parts orders showed that McDonnell Douglas’ St. Louis Division used outdated pricing information when proposing costs under intercompany work orders with the Long Beach Division for the C-17 spares. The St. Louis Division used labor variance factors based on the second quarter of 1992 for proposing labor hours required for items produced in 1994. 
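A minimal sketch of how an outdated variance factor inflates a labor-hour estimate, and hence a price, follows. The "target" hours below are hypothetical (the report does not give them); the variance factors and prices are those cited for the hinge assembly discussed next.

```python
# Sketch of the labor variance mechanics: proposed hours = target hours times
# a variance factor, so an outdated (higher) factor inflates the estimate.
# Target hours are hypothetical; the factors are those cited for the hinge
# assembly (second quarter 1992 vs. first quarter 1994).

machine_shop_target = 100.0   # hypothetical target hours
sheet_metal_target = 40.0     # hypothetical target hours

factors_used_1992 = {"machine shop": 2.33, "sheet metal": 2.50}
factors_current_1994 = {"machine shop": 1.26, "sheet metal": 1.60}

def proposed_hours(factors):
    return (machine_shop_target * factors["machine shop"]
            + sheet_metal_target * factors["sheet metal"])

print(f"Hours with 1992 factors: {proposed_hours(factors_used_1992):.0f}")
print(f"Hours with 1994 factors: {proposed_hours(factors_current_1994):.0f}")

# The report's figures for this hinge assembly show the same effect on price:
negotiated = 42_587   # priced with the 1992 factors
repriced = 26_458     # what the 1994 factors would have supported
print(f"Overstatement: ${negotiated - repriced:,} "
      f"({(negotiated - repriced) / negotiated:.0%} of the negotiated price)")
```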
Most of these orders were negotiated with DCMC in mid-1994. As of May 31, 1995, DCMC had negotiated prices for 95 contract items made by the St. Louis Division with a total negotiated value of about $966,000. We reviewed data for 37 of these items with a negotiated total value of $347,000. We reviewed only labor variance factors and did not address other rates and factors such as the miscellaneous production factor. We found that the selected items were overpriced by $117,000, or about 34 percent of the negotiated value of the items reviewed. For example, McDonnell Douglas, in developing the basic production labor hours estimate for a hinge assembly, multiplied machine shop “target” hours by a variance factor of 2.33 and sheet metal target hours by a variance factor of 2.5. Data for the first quarter of 1994 showed a conventional machine shop variance of 1.26 and a sheet metal variance of 1.60. Because most work for this item took place in the first half of 1994 and the prices were negotiated in June 1994, the 1994 variance rates should have been used for pricing the item. Instead, McDonnell Douglas used rates based on the second quarter of 1992, which were higher. A price of $42,587 was negotiated based on the 1992 data. Using the data for the first quarter of 1994, the price would have been $26,458, a difference of $16,129, or about 38 percent lower than the negotiated price. After we brought these issues to the attention of DOD officials, they acknowledged that more current labor variance data should have been used and sought a refund. McDonnell Douglas made a refund of $117,000 in December 1995. Our review indicated that the profits awarded for some orders under contract-2109 appear higher than warranted. DFARs requires the use of a structured approach for developing a government profit objective for negotiating a profit rate with a contractor. 
The weighted guidelines approach involves three components of profit: contract type risk, performance risk, and facilities capital employed. The contracting officer is required to assess the risk to the contractor under each of the components and, based on DFARs guidelines, calculate a profit objective for each one and, thus, an overall profit objective. As a general matter, the greater the degree of risk to the contractor, the higher the profit objective. For example, the profit objective for a fixed-price contract normally would be higher than that for a cost-type contract because the cost risk to the contractor is greater under the former. Consequently, in its subsequent price negotiations, the government normally will accept a higher profit rate when a contractor is accepting higher risks. Prices for spare orders under contract-2109 were to be negotiated individually. However, rather than calculate separate profit objectives and negotiate profit rates for individual orders, DPRO and McDonnell Douglas negotiated two predetermined profit rates, documented in a memorandum of agreement, that would apply to subsequent pricing actions. The profit rates were 10 percent for parts that McDonnell Douglas purchased from subcontractors, and 15 percent for spare parts that McDonnell Douglas manufactured. Our review indicates that the use of these rates for many later-priced spares resulted in higher profits for the contractor than would have been awarded had objectives been calculated and rates negotiated when the orders actually were priced. Based on profit rates of 6 percent for purchased parts and 13 percent for parts made in-house, both of which could have been justified according to our calculations, McDonnell Douglas would have received less profit. For example, applying these lower profit rates to the $29 million of negotiated spare part orders as of May 31, 1995, would have reduced the company’s profit by $860,000. 
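The $860,000 figure can be roughly reconciled from the numbers above. The sketch below makes a simplifying assumption of our own (applying the rate differences directly to negotiated order value rather than to the underlying cost base, which is how profit rates actually apply) and solves for the buy/make split consistent with the stated reduction:

```python
# Back-of-envelope reconciliation of the $860,000 profit-reduction figure.
# Assumption (ours): the profit-rate differences are applied directly to the
# $29 million of negotiated order value rather than to the cost base.
# Solving for the buy/make split that reproduces the stated figure:
#   860,000 = buy * (0.10 - 0.06) + (29,000,000 - buy) * (0.15 - 0.13)

total_negotiated = 29_000_000
buy_rate_drop = 0.10 - 0.06    # purchased parts: 10% negotiated vs. 6% justified
make_rate_drop = 0.15 - 0.13   # in-house parts: 15% negotiated vs. 13% justified
stated_reduction = 860_000

buy_value = (stated_reduction - total_negotiated * make_rate_drop) / (
    buy_rate_drop - make_rate_drop)
make_value = total_negotiated - buy_value

print(f"Implied buy-order value:  ${buy_value:,.0f}")
print(f"Implied make-order value: ${make_value:,.0f}")
```

Under that assumed split, the lower rates would have yielded roughly $560,000 less profit on buy orders and $300,000 less on make orders.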
After we presented our information in October 1995, DCMC directed that the memorandum of agreement, which was scheduled to either expire or be extended on November 1, 1995, be allowed to expire and that future profit objectives be established on an order-by-order basis. DOD officials agreed that a single profit analysis should not be used for C-17 spare parts. In developing a profit objective for contract-2109, the contracting officer assigned a value for contract type risk based on firm, fixed-price contracts. However, negotiations of prices for spare part orders were conducted, in many cases, after the vendor or McDonnell Douglas had incurred all costs and delivered the spares. These conditions lowered the contractor’s risk for those parts far below what normally would be expected for a firm, fixed-price contract. The risks were more like those that exist for cost-type contracts, for which the weighted guidelines provide lower profit objective values. Of the 40 parts made in-house that we reviewed, McDonnell Douglas had delivered 25 (63 percent) of the parts at the time of price negotiations with the government. Five of the remaining 15 items were delivered during the month of price negotiations, and all were delivered within 3-1/2 months of price negotiations. Of the 55 “buy” spare parts we reviewed, McDonnell Douglas had established prices with its vendor for 45 (82 percent) of the parts. Using one order as an example, McDonnell Douglas (1) negotiated spare parts prices with its subcontractor on January 25, 1993; (2) negotiated prices with the government on April 11, 1994; and (3) scheduled the parts for delivery on May 27, 1994. Thus, for both make and buy items, a substantial portion of the contractor’s costs had been known at the time of the price negotiations. Section 217.7404-6 of DFARs requires that profit allowed under unpriced contracts reflect the reduced risk associated with contract performance prior to negotiations. 
Consistent with this requirement, the weighted guidelines section (215.971-3) requires the contracting officer to assess the extent to which costs have been incurred prior to definitization of a contract action and ensure that profit is consistent with contractor risk. In fact, the guidelines provide that if a substantial portion of the costs has been incurred prior to definitization, the contracting officer may assign a contract type risk value as low as zero, regardless of contract type. A DPRO representative said that, in negotiating the memorandum of agreement, DPRO knew that the two profit rates for later application would not be perfect in every case. He said, however, that they were expected to be off in one direction as often as in the other, creating an overall fair agreement. The representative noted, for example, that while deliveries for the orders we reviewed were near the negotiation dates, the memorandum’s rates also would apply to orders with deliveries more than 2 years in the future, where minimal costs have been incurred. In addition, the representative stated that a significant number of parts would be undergoing design changes because a baseline configuration for the C-17 did not exist. The representative explained that McDonnell Douglas is responsible for replacing spares affected by design changes until 90 days after reliability, maintainability, and availability testing, which was completed on August 5, 1995, and that any additional cost for such replacements would have to be absorbed by McDonnell Douglas. Finally, the representative noted that the minimal cost history on C-17 spares would indicate a higher than normal contract type risk. We have no evidence to support the DPRO official’s view that profits based on the rates in the memorandum of agreement would balance out over time. In fact, DCMC let the agreement lapse and will calculate profit objectives and negotiate profit rates on an order-by-order basis. 
In addition, we noted that McDonnell Douglas initially received a 2-percent warranty fee on contract-2109 orders both to cover the risk of design changes and to provide a standard 180-day commercial warranty. Furthermore, the profit agreement stated that McDonnell Douglas could submit additional warranty substantiation at any time and, if the data supported a different warranty percentage, the government would consider adjusting the percentage. Thus, the warranty fee is the contract mechanism the parties agreed to use to address the risks of replacement parts because of design changes. The contracting officer, in developing a profit objective for buy orders (complete spare parts purchased from an outside vendor) under contract-2109, used a higher rate for performance risks than was warranted. The DFARs’ weighted guidelines provide both standard and alternate ranges for the contracting officer to use in calculating performance risk, which is the component of profit objective that addresses the contractor’s degree of risk in fulfilling the contract requirements. The standard range applies to most contracts, whereas the higher alternate range is for research and development and service contracts that involve low capital investment in buildings and equipment. The guidelines provide that if the alternate range is used, the contracting officer should not give any profit for the remaining component, facilities capital employed, which focuses on encouraging and rewarding aggressive capital investment in facilities that benefit DOD. DCMC officials said that the alternate range was used in calculating the performance risk component on contract-2109 because McDonnell Douglas’ system could not provide an estimate to be used for purposes of calculating the facilities capital component. DPRO officials said that since the negotiation, McDonnell Douglas has developed the means to estimate facilities capital employed on its spares proposals. 
They said that using the standard range for performance risk and including facilities capital employed for spares orders yields a profit objective that is substantially the same as the profit objective calculated using the alternate range for performance risk. DOD concurred that DPRO should not have utilized the alternate range for performance risk, but repeated the DPRO’s assertion that using the standard range and including facilities capital employed yields essentially the same results. We reviewed DCMC’s data and found that using the alternate range for the performance risk component does not result in a substantially similar profit objective to that calculated by applying a factor for facilities capital employed. The contracting officer’s use of the alternate range for performance risk, combined with the use of a fixed-price value for contract type, led to the negotiation of a profit rate of 10 percent for the buy orders; in contrast, we calculated that using a cost-type contract risk factor, the standard range for performance risk, and McDonnell Douglas’ estimate of facilities capital employed would have resulted in an overall profit objective of 6 percent for the buy orders. In commenting on a draft of this report, DOD said that it had taken appropriate action to address our finding of overpricing. In addition to recovering $182,000, DOD indicated that DPRO at McDonnell Douglas will now screen all spares orders containing items to be made in-house to (1) look for possible conversion to buy items and (2) ensure that labor data is correct for all items made in the St. Louis Division. Moreover, DOD stated that DPRO no longer relies on a single profit analysis and, by completing a separate analysis for each order, DPRO will address the contract risk associated with each order. 
DOD acknowledged that it is possible to take issue with the contracting officer’s selection of risk factors and that DPRO should not have used the alternative range for performance risk in its profit analysis. However, DOD asserts that it would be misleading to infer that unjustified profits were paid to the contractor. We do not infer that the contractor received $860,000 in unjustified profits. Determining the appropriate amount of profits is a matter to be negotiated between DPRO and the contractor. However, we noted that (1) lower rates were justified under the weighted guidelines and (2) rates of 6 percent for purchased parts and 13 percent for parts made in-house could be justified. While the results of our review cannot be projected to all C-17 spare parts, using the lower profit rates for the $29 million of negotiated spare parts orders as of May 31, 1995, would have reduced the company’s profit by $860,000. Our subsequent analysis raises some questions about the DOD statement that DPRO, by making a separate profit analysis for each order, will address the contract type risk associated with each order. Our review of an order negotiated in January 1996 based on a separate profit analysis indicated that the DPRO’s profit analysis still does not reflect the reduced risk when most costs have been incurred prior to price negotiations. While the negotiated profit rate was 8.6 percent, or 1.4 percent lower than the previously negotiated rate, the amount of profit allowed for contract type risk continues to appear higher than justified by the weighted guidelines and DFARs. In this regard, DPRO noted that McDonnell Douglas’ cost “amounts to only 46 hundredths of one percent” and “you are being paid all your costs and the parts have already been shipped, thereby reducing your risk to a very low degree.” However, the contract risk factors were at the midpoint range and higher for a firm, fixed-price contract. 
The stated reason for this was that the design could change, necessitating a recall. While DPRO discontinued using the memorandum of agreement profit rates, we remain concerned that the negotiated profit rates may not reflect the reduced contract type risk when essentially all costs have been incurred. DOD’s comments are reprinted in their entirety in appendix III. To select spare parts for our review, we analyzed reports developed by McDonnell Douglas’ data system that included historical and current information on spare parts orders—for example, the negotiation date, negotiation amount, and delivery date on current/previous orders. For our review, we only considered spare parts orders for which prices had been negotiated as of May 31, 1995. As of that date, prices for orders involving 696 spare parts had been negotiated, with a value of about $29 million. We selected spare parts for a more detailed review based on current/previous cost, intrinsic value, and nomenclature. Our selection of parts was judgmental and our results cannot be projected to the universe of C-17 parts. We reviewed the contractor’s and the DPRO’s contract and pricing files, and discussed the pricing issues with selected contractor and DCMC officials. Because of significant cost increases for a number of spare parts whose manufacturing/assembly effort had been transferred to the contractor’s plant in St. Louis, we obtained additional documentation from that plant and from DPRO. We reviewed the DFARs guidance relating to the use of weighted guidelines in establishing a profit objective. We also reviewed the memorandum of agreement that was negotiated by DPRO for contract-2109 and discussed the basis for the negotiated profits with DOD and DPRO officials. 
In assessing the value assigned to contract type risk, we reviewed data on 95 spare parts with a total negotiated price of about $3 million out of 696 spare parts with a total negotiated price of about $29 million, or about 14 percent of the parts. Our review of selected spare parts cannot be projected to all C-17 spare parts. However, to illustrate the potential effect of lower profit rates, we calculated a potential reduction using spare parts orders negotiated as of May 31, 1995. We conducted our review between November 1994 and September 1995 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretaries of Defense and the Air Force; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-4841. The major contributors were David Childress, Larry Aldrich, Kenneth Roberts, and Larry Thomas. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the pricing of certain spare parts for the C-17 aircraft, focusing on those spare parts that experienced significant price increases when McDonnell Douglas decided to produce them in-house rather than purchase them from outside vendors. GAO found that: (1) GAO's review indicates that the Air Force paid higher prices for spare parts than is justified; (2) for 33 selected spare parts formerly procured under subcontracts, costs are from 4 to 56 times higher after McDonnell Douglas moved the work in-house; (3) for example, McDonnell Douglas paid an outside vendor $389 to machine a door hook that it subsequently machined in-house at its St. Louis Division at an estimated cost of $8,842; (4) costs for some spare parts are higher than justified because McDonnell Douglas used outdated pricing data that overstated its proposed prices; (5) in developing the proposed costs of selected spare parts, McDonnell Douglas used outdated labor variance factors, which resulted in prices being overstated by 34 percent ($117,000) for 37 parts; (6) the profits awarded on some orders under contract-2109 appear higher than warranted; (7) the contracting officer used Defense Federal Acquisition Regulation Supplement guidelines to calculate profit objectives and negotiate profit rates with the contractor that are documented in a memorandum of agreement; (8) the contracting officer developed the government's objectives based on the risks of a fixed-price contract; (9) however, most costs were known when the order prices were negotiated; therefore, the contractor's risks were lower than in a fixed-price environment; (10) also, the contracting officer used a higher performance risk factor than appears appropriate when McDonnell Douglas is buying spare parts from subcontractors; and (11) based on profit rates that GAO's calculations suggest could have been justified, McDonnell Douglas would have received less profit. 
GAO also found that: (1) as GAO discussed its findings with Department of Defense (DOD) officials during GAO's review, they began taking actions to address those findings; (2) for example, the Defense Contract Management Command's Defense Plant Representative Office at McDonnell Douglas calculated that the overpricing of spare parts was $182,000 and recovered that amount from McDonnell Douglas in December 1995; and (3) DOD stated that other actions are being taken to prevent these overpricing problems on other spare parts.
The National Defense Authorization Act for Fiscal Year 1997 directed the Secretary of Defense, in consultation with the Chairman, Joint Chiefs of Staff, to conduct a review of the defense program. The legislation required DOD to report on a number of topics, including the defense strategy, the force structure best suited to meet the strategy, and the appropriate ratio of combat to support forces. During the QDR process, DOD considered three alternatives for implementing the national defense strategy to shape and respond to current needs and prepare the force for the future within an expected budget of about $250 billion annually (constant 1997 dollars). One alternative focused on current dangers and called for maintaining the current force structure and investment levels. Another alternative focused on future dangers and allocated more resources to modernizing for the future but significantly reduced the current force. The final alternative, selected by DOD, targeted infrastructure activities, called for modest force structure cuts, and increased modernization funding to $60 billion per year. According to the QDR, this option retains sufficient force structure to meet current requirements and reallocates resources to invest in force modernization. A principal objective of the QDR was to understand the financial risk in DOD’s program plans and devise ways to manage that risk. The QDR noted that past years’ procurement funds were used for unplanned operating expenses. The QDR concluded that as much as $10 billion to $12 billion per year in future procurement funding could be diverted for unplanned operating expenses. The QDR also noted that the migration of procurement funding is caused by unprogrammed operating expenses from underestimating day-to-day operating costs, unrealized savings from initiatives such as outsourcing or business process reengineering, and new program demands. 
To address this financial instability, the QDR directed cutting some force structure and personnel, shedding additional excess facilities through more base closures and realignments, streamlining infrastructure, and reducing quantities of some new weapon systems. Congress establishes minimum active duty personnel levels for each service as part of the annual national defense authorization process. Thus, congressional approval for the QDR active duty personnel reductions will be needed because they would reduce the number of personnel below the current approved levels. The QDR directed the services to cut 61,700 active, 54,000 reserve, and 60,800 civilian personnel by fiscal year 2003, except for 7,700 of the civilian cuts that DOD expected to achieve by fiscal year 2005. DOD expected to save about $3.7 billion annually by fiscal year 2003 as a result of these cuts. The QDR personnel cuts are in addition to those cuts the services had planned in the fiscal year 1998 FYDP through fiscal year 2003, which was prepared before the QDR. Appendix I shows the total projected personnel reductions by service through fiscal year 2003. The level of personnel cuts called for in the QDR was based on DOD’s plan to achieve dollar savings that would (1) reduce the possibility that procurement funds would be used for unplanned expenses and (2) enable DOD to increase and maintain procurement funding at $60 billion annually. In March and April 1997, DOD officials concluded that a 10-percent force structure cut would result in an unacceptable risk in implementing the national military strategy and that the potential savings from infrastructure initiatives identified during the QDR process would not be sufficient to ensure that procurement funding would not be used for unplanned expenses. 
As a result, senior civilian officials and the service chiefs agreed that the services needed to eliminate the equivalent of about 150,000 active military personnel, which Office of the Secretary of Defense (OSD) officials estimated would save between $4 billion and $6 billion annually by fiscal year 2003. The Secretary of Defense directed the service chiefs to develop initiatives to achieve personnel cuts and assess how to allocate the cuts among active, reserve, and civilian personnel. In May 1997, the Secretary of Defense approved the services’ proposals to eliminate about 175,000 active, reserve, and civilian personnel and save an estimated $3.7 billion by 2003, as shown in table 1. The savings estimates vary among the services because of the different levels of active, reserve, and civilian personnel cuts and the extent of outsourcing included in the services’ plans. For example, the Navy and the Air Force plan to cut about 30,000 and 46,000 active, reserve, and civilian personnel, respectively. Despite the larger personnel cut, the Air Force’s estimated savings are significantly lower than the Navy’s because the Air Force cuts will come primarily from replacing military and civilian personnel with contractors, which saves only a portion of current salaries. In contrast, the Navy plans to eliminate personnel primarily by reducing force structure, such as surface combatants, which will save all of the current and future salaries. The Army plan was based on the assumption that it had to eliminate the equivalent of 45,000 active personnel. The Army decided to cut its active, reserve, and civilian personnel each by the equivalent of 15,000 active personnel. The active cuts were based primarily on transferring some active combat service and combat service support missions to the reserves and allocating percentage cuts to most of the major command institutional forces. 
The Army decided to cut the reserves by 45,000, which it believed to be the equivalent of 15,000 active positions, based on the assumption that three reserve component positions equaled the cost of one active position. In allocating the cuts between the reserve components, the Army considered an analysis of forces that indicated about 6,300 Army Reserve and 62,000 Army National Guard forces were not included in current war plans. After considering this analysis and other factors, the Army decided to cut the Army Reserve by 7,000 personnel and the Army National Guard by 38,000 personnel. After the release of the QDR, Army National Guard officials stated that they were not included in the process used to determine the scope of the cuts and that they have yet to reach agreement with Army headquarters on all the personnel cuts. The Army reserve components have agreed to cut 20,000 reserve personnel by fiscal year 2000 and defer allocation of the remaining 25,000 cuts. The Army civilian cuts were based primarily on a plan to compete 48,000 civilian positions, with the assumption that private contractors would win one-half of the competitions. However, the Army’s plan was not based on a study of missions and functions by location. The Army assumed that all eligible positions in commercial activities would be competed and that it could reclassify some positions that cannot currently be competed. The remainder of the civilian cuts were based on efforts to reengineer the Army Materiel Command and reduce the number of military technicians in the reserve component. The Navy proposed reducing its active, reserve, and civilian personnel by about 4.5 percent each. The majority of the active military cuts were based on planned force structure cuts, such as reducing the number of surface combatants and attack submarines, and transferring some active support ships to the Military Sealift Command. 
The Navy Reserve cuts were based primarily on plans to decommission frigates, deactivate some aircraft and helicopters, and eliminate positions that had been funded but had not been filled. The Navy expects to reduce civilian personnel primarily by workload reductions and reengineering; however, it had not initiated any studies, as of May 1997, to achieve these cuts. Unlike the Army and the Air Force plans, the Navy plan assumes very few reductions from outsourcing because the Navy, in its fiscal year 1998 budget, had programmed savings of $2.5 billion from outsourcing by fiscal year 2003. The Air Force planned to achieve the majority of its personnel cuts from outsourcing and the remainder through consolidating fighter and bomber squadrons and streamlining headquarters. The Air Force relied on an ongoing study, known as Jump Start, to determine the potential for reducing active military and civilian positions by outsourcing. This study examined the potential for outsourcing at wing level rather than relying exclusively on a broad, headquarters-only assessment of all personnel that could potentially be outsourced. After the QDR, the Air Force identified some problems with the data used to determine the potential number of cuts; therefore, it programmed a smaller personnel reduction than that identified in the QDR report. The Marine Corps plan to reduce active personnel was based primarily on reducing and reorganizing the Marine Corps Security Battalion, which provides security for Navy installations. The Marine Corps also proposed to cut some administrative support in headquarters activities, but it had not identified any specific actions as of May 1997. The Marine Corps had also not developed specific plans to reduce reserve and civilian personnel. Not all of the QDR personnel cuts were included in the fiscal year 1999 FYDP. 
In addition, there is considerable risk that some of the cuts included in the FYDP may not be achieved because (1) the Army has not agreed on the allocation of 25,000 of the 45,000 reserve component cuts, (2) significant reductions in the Air Force and the Army are based on implementing aggressive outsourcing plans, and (3) some of the Army and the Navy civilian reductions are contingent on the outcome of reengineering studies. On the other hand, all of the services, except the Air Force, have plans to achieve the majority of their active military cuts by the end of fiscal year 1999. For example, the Navy plans to achieve about 14,200, or 79 percent, of its active military cuts in fiscal year 1999 through force structure reductions, such as decommissioning surface combatants. Moreover, the Navy has plans to achieve its reserve cuts, and the Army has specific plans to achieve 20,000 reserve component cuts. Also, the Marine Corps has plans to achieve the majority of its active and reserve cuts. Although outsourcing is only a small part of the Navy’s QDR cuts, the Navy has an aggressive outsourcing program that involves risk because the Navy has not identified the majority of the specific functions that will be studied to achieve the expected savings. Details of the services’ plans are included in appendixes II through V. The Air Force did not program about 5,600, or 20 percent, of its active military and 2,300, or 13 percent, of its civilian QDR reductions in the fiscal year 1999 FYDP. The Air Force double counted some potential outsourcing savings, and OSD deferred most of the Air Force’s plans to restructure fighter squadrons and consolidate bomber squadrons because it determined that the plans were not executable at this time. According to an OSD official, OSD was concerned that the restructuring plan could be construed by Congress as violating its guidance to refrain from any planning for future base closures. 
Likewise, the Air Force reserves will not be reduced by 700 personnel because, after the QDR was released, the Air Force decided to increase the reserve end strength to cover an existing wartime shortage, according to Air Force officials. These actions will reduce the Air Force’s planned recurring savings to about $600 million compared with the $790 million it had planned to achieve by fiscal year 2003. The QDR directed that the Army reduce its reserve components by 45,000 personnel. In June 1997, at a meeting convened to reach agreement on how the reductions should be allocated, the reserve component agreed to reduce end strength by 20,000 by fiscal year 2000—17,000 in the Army National Guard and 3,000 in the Army Reserve. However, officials within the Army do not agree on how the remaining 25,000 personnel will be cut. For budgeting purposes, the Army allocated 21,000 personnel to the National Guard and 4,000 to the Army Reserve in the fiscal year 1999 FYDP. However, National Guard officials stated that they did not agree to the additional cuts. A significant portion of the active military and civilian cuts in the Air Force, and the civilian cuts in the Army, is based on plans to conduct public-private competitions to determine whether functions could be done more economically by contractors or an in-house workforce consisting of civilian employees. In developing their plans, the services made different assumptions about the personnel cuts that could be achieved through these competitions. The Air Force identified the specific functions that will be studied by base; however, it made some assumptions that could overstate the number of civilian cuts. On the other hand, the Army had not identified the majority of the specific functions by location to be competed, and its plan assumes that all eligible civilian positions in commercial activities can be competed. 
Although the fiscal year 1999 FYDP reflects a lower number of reductions through outsourcing than the Air Force’s May 1997 plan, the Air Force made some assumptions that could make it difficult to achieve about 6,900 civilian cuts. According to Air Force officials, the fiscal year 1999 FYDP reflects that about 22,000 military and 16,000 civilians will be eliminated through outsourcing by fiscal year 2003. To estimate the potential personnel cuts from outsourcing, the Air Force assumed that all of the military positions included in its Jump Start study, and all of the military and civilian positions included in a separate outsourcing study of targeted functions at four bases, would be contracted out. This assumption differs from past Air Force experience, which shows that a civilian workforce wins 40 percent of all competitions and that 60 percent of the work is contracted out. Air Force officials noted that a standard 12-percent overhead factor must now be included in the government cost estimate, which they believe will result in more functions being contracted out. Our recent review of the 12-percent overhead rate suggests the potential, though not the certainty, for more competitions to be won by the private sector. If more functions are contracted out, then more civilian positions will be eliminated. However, the Air Force commercial activities manager stated that the Air Force has not had sufficient experience with the 12-percent overhead factor to determine if it will change the mix of functions that remain in house or are contracted out. If the A-76 change does not result in contractors winning more competitions and outcomes are similar to past experience, we estimate that the Air Force may not be able to eliminate as many as 6,900 civilian positions. The Army plans to compete 48,000 positions to achieve the majority of its civilian reductions; however, the Army made some assumptions that could make it difficult to achieve all of the planned cuts. 
For example, the Army assumed that it could compete all 34,000 civilian positions in commercial activities except those exempted by legislation, such as firefighters and security guards. However, unlike the Air Force, the Army has not done a study to determine if all positions can be competed. The Air Force found that it is not practical to compete many positions in commercial activities because the positions are spread across many units and locations. The Army announced studies covering about 14,000 of the 34,000 positions; however, it has not identified the specific functions or locations of the remaining positions to be studied. Army officials stated that the major commands would identify the functions to be studied as part of their future annual budgets. Finally, the study universe also included some positions that involve the performance of inherently governmental functions and therefore cannot be competed. Army officials stated that, as part of the Defense Reform Initiative, a study is underway to determine if all positions are consistently and properly classified throughout DOD. Army officials believe that this review will reclassify about 14,000 positions to commercial activities, which will then enable the positions to be competed. However, the Army has no analysis to support this figure. The Navy planned to achieve about 1,300, or 7 percent, of its active military and about 1,200, or 14 percent, of its civilian QDR personnel reductions through outsourcing. However, the Navy now plans to achieve about 660 of its active military QDR personnel cuts through outsourcing because its initial plan did not adequately consider the impact that outsourcing would have on sea-to-shore rotation. In addition to the QDR reductions, the Navy has programmed savings of $2.5 billion in its fiscal year 1999 budget based on plans to study 80,500 positions—10,000 military and 70,500 civilian—by fiscal year 2003. 
OSD has identified Navy outsourcing as an area in which planned savings may not be fully achieved. However, the Navy has not identified the majority of the specific functions that will be studied to achieve the projected savings and has not adjusted its personnel levels to reflect the effects of this outsourcing initiative. Navy officials stated that each year the major commands will identify the functions to study as part of their annual budgets. The Army’s plan to eliminate about 5,300 civilian personnel in the Army Materiel Command through reengineering efforts involves risk because the Command does not have specific plans to achieve these reductions. The Army plans to use the results of reengineering studies to identify ways to cut these positions; however, the studies are not scheduled to start until after fiscal year 2000. Moreover, in February 1998, we reported that Army efforts to reengineer the institutional forces have not been successful. For example, the Army initially identified 4,000 active military institutional positions that it planned to transfer to operational forces. However, our work showed that the underlying basis for most of these personnel savings is questionable. The Navy plan assumes that it will be able to eliminate about 1,100 civilian personnel through reengineering and reducing the workload at the Navy Facilities Engineering Command field structure. The Navy started a study in January 1998 and plans to complete it by July 1998. Navy officials stated that, to achieve the savings target, the study must identify ways to reduce the workforce by 30 percent. Service officials believe the majority of the planned personnel cuts will not affect their ability to implement the national military strategy because they will reduce infrastructure or, to the extent that the cuts involve combat forces, implement missions more cost-effectively without any significant loss in capability. 
Almost one-half of the active military cuts will involve replacing military personnel with reserve forces, civilian employees, or contractors rather than eliminating functions outright. For example, the Air Force plans to eliminate about 22,000 active military personnel through outsourcing. Air Force officials noted that these personnel are not military essential because they do not deploy, are not required to support overseas rotation needs, and primarily involve infrastructure functions such as logistics and base operating support. Moreover, on the basis of past experience, the Air Force expects that 75 percent of these personnel will be replaced by either civilian employees or contractors. The other 25 percent will be eliminated because A-76 studies should result in more efficient organizations requiring fewer personnel. The Navy plans to eliminate about 5,400 active military personnel by reducing the number of surface combatants from the current level of 128 to 116 and decommissioning 2 attack submarines. According to Navy officials, the surface combatant reductions are possible because the newer ships entering the fleet provide greater combat capability. Similarly, the Marine Corps plans to eliminate 1,200 active military positions by restructuring its security battalion, which provides support to the Navy. Navy officials agreed with the Marine Corps’ proposed restructuring, which will eliminate personnel associated with missions that are no longer valid and reorganize personnel to provide the same level of support more efficiently. The Army believes that the plan to reduce active personnel can be accomplished without significantly increasing the risk associated with implementing the national military strategy. The Army plans to achieve almost one-half of its QDR-directed active cut by transferring 7,100 active military combat support and combat service support positions to the reserve component. 
Army officials believe that this plan will enable it to execute the national military strategy with an acceptable level of risk, assuming adequate resourcing for active and reserve components, availability of increased sea- and airlift, funding for equipment modernization, and improvements to existing intelligence and communication systems. However, the Army had not finalized its plan on which combat support and combat service support missions would be transferred to the reserve component. Our February 1997 report on Army support forces highlighted shortages in active support forces. We reported that a smaller active Army support force did not appear feasible because it could increase the Army’s risk of carrying out current defense policy. Specifically, our report stated that about 79,000 support forces needed in the first 30 days of the first major theater war would arrive late because the Army lacks sufficient numbers of active support forces and must rely on reserve forces, which generally require more than 30 days to mobilize and deploy. The May 1997 QDR report recognizes that one of the primary sources of instability in DOD’s current plans is the possibility that planned procurement funding may need to be used for other activities and that unrealized savings is one of the key components of this problem. The report discusses several factors that contribute to funding migration, stating that “migration also occurs when the savings planned to accrue from initiatives like competitive outsourcing or business process reengineering fail to achieve their expectations fully.” OSD has established two principal mechanisms for monitoring the services’ progress in achieving personnel cuts, according to OSD officials. First, it expects to review the services’ plans for achieving personnel cuts during annual reviews of the services’ budgets. 
Second, the Defense Management Council, which was established in November 1997 by the Secretary of Defense to oversee progress in achieving defense reform initiatives, will monitor the services’ progress in meeting outsourcing goals. The Council is chaired by the Deputy Secretary of Defense and includes representatives from OSD, the Joint Staff, and the services. In preparing their fiscal year 1999 budgets, the services used different methods to reflect the personnel and dollar savings associated with outsourcing, which could make it more difficult for DOD officials to understand the services’ assumptions and plans for outsourcing and monitor their progress. For example, the Navy’s projected personnel levels included in its budget for fiscal years 1999 through 2003 reflect the force structure cuts planned to meet QDR-mandated personnel levels but do not reflect further cuts that could result from outsourcing. The Navy plans to compete 10,000 military and 70,500 civilian positions by fiscal year 2003, and its budget assumes that these competitions will achieve $2.5 billion in savings through fiscal year 2003. However, because the Navy does not know how many civilians and military positions will be reduced as a result of these competitions, it did not adjust the personnel figures in its budget to reflect the projected effects of outsourcing. In contrast, the Air Force’s projected personnel levels in the fiscal year 1999 budget reflect large cuts in military and civilian personnel from outsourcing. In preparing their budgets, each service assumed a different rate of savings as a result of public-private competitions. For example, the Army assumed it would save 20 percent, the Air Force 25 percent, and the Navy 30 percent of its current personnel expenses. OSD officials stated that they are aware that the services used different methods for reflecting the personnel and dollar impacts of outsourcing and that the fiscal year 1999 FYDP reflects these different approaches. 
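The practical effect of these differing rate assumptions can be illustrated with a simple sketch. The savings rates below are the ones cited above (Army 20 percent, Air Force 25 percent, Navy 30 percent); the competed-payroll base is a hypothetical placeholder, not a DOD figure:

```python
# Illustrative sketch only: the rates come from the services' stated
# assumptions; the $1 billion competed-payroll base is hypothetical.
ASSUMED_SAVINGS_RATES = {"Army": 0.20, "Air Force": 0.25, "Navy": 0.30}

def projected_savings(competed_payroll: float, rate: float) -> float:
    """Annual savings projected if competitions cut personnel costs by `rate`."""
    return competed_payroll * rate

payroll = 1_000_000_000  # hypothetical competed payroll, in dollars
for service, rate in ASSUMED_SAVINGS_RATES.items():
    print(f"{service}: ${projected_savings(payroll, rate):,.0f} projected")
```

Because the same payroll base yields projections ranging from $200 million to $300 million depending on the assumed rate, savings figures in the FYDP are not directly comparable across services unless the underlying assumptions are made explicit.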
The Acting Director of OSD’s Program Analysis and Evaluation Office has established a task force to ensure that consistent and comparable approaches are used for personnel and dollar savings. The personnel cuts directed by the QDR were driven primarily by the need to identify dollar savings that could be used to increase modernization funding. However, DOD may not achieve all the personnel cuts and associated savings. With the exception of the Air Force, the services have plans that should enable them to achieve the majority of the active military cuts by the end of fiscal year 1999. However, these cuts depend on Congress reducing the current minimum active duty personnel levels. There is considerable risk that the active military cuts in the Air Force, the reserve component cuts in the Army, and the civilian cuts in all the services may not be achieved by fiscal year 2003 because the services’ plans are not complete and depend on outsourcing and reengineering initiatives that are based on optimistic assumptions or largely undefined to date. OSD recognizes that the planned savings from these initiatives have not always been achieved, which contributes to the migration of procurement funding. Therefore, it is critical that DOD monitor the services’ progress in achieving the personnel cuts and savings. DOD provided written comments on a draft of this report, which are reprinted in appendix VI. In its comments, DOD wanted to clarify several key issues to avoid overemphasizing negative aspects of the QDR personnel reductions. For example, DOD noted that the QDR process began by developing an overarching defense strategy followed by assessments of the force structure, readiness, and modernization to implement the strategy. It believed that the report’s emphasis on dollar savings ignores the Department’s strategy assessment and noted that the resulting balanced program recommended by the QDR is based on modest reductions and restructuring of U.S. 
military forces to meet present threats. DOD also noted that the QDR was a blueprint to revolutionize business affairs and promote more efficient infrastructure, with many of the details to be fully developed through the programming and budget cycles. Moreover, DOD stated that our report is apparently based on information available as of May 1997 and does not reflect the implementation details that were developed during the fiscal year 1999 budget cycle. Finally, DOD noted that we were critical of QDR decisions to downsize the Army’s active, reserve, and civilian components. It stated these decisions were based on a careful analysis of the risks, the potential impact on readiness, and the ability to execute the cuts. Our report specifically recognizes that the QDR included more than personnel reductions and notes that we will be reporting separately on other aspects of the QDR, such as the process for determining the force structure and modernization requirements. We believe that the risk associated with the services’ plans to implement the personnel cuts is directly linked to whether the expected savings will be achieved. Specifically, the personnel cuts account for the majority of the savings DOD expects from the QDR to increase modernization funding. Although this report reflects our analysis of the services’ initial plans when the QDR was released in May 1997, it also assesses OSD and service actions to implement the QDR personnel reductions as of February 1998. For example, the report reflects OSD’s decision in December 1997 to defer much of the Air Force tactical fighter and bomber consolidation plans and the Marine Corps’ decision, made after the fiscal year 1999 budget was finalized, to reduce fewer reserve personnel than directed in the QDR. The report also includes our analysis of the services’ outsourcing and reengineering plans as of February 1998. 
With regard to the Army, we found that some details of the Army’s plan to reduce personnel, such as the number of active support forces to be cut, had not been finalized as of May 1997. Moreover, our report shows that the Army faces certain risks to execute some of the reserve and civilian cuts, such as the lack of agreement within the Army on how the majority of the reserve component cuts will be allocated. Likewise, the majority of the Army’s civilian cuts are based on outsourcing and reengineering efforts. However, the Army, unlike the Air Force, has not identified the majority of the specific functions by location to be studied but plans to rely on the major commands to identify functions over the next several years. In addition, the Army is counting on some functions being reclassified so that they can be competed. DOD also provided technical comments, which were incorporated as appropriate. To determine the basis for DOD’s decision to reduce personnel, we interviewed senior DOD civilian and military officials to obtain information on the decision-making process that led to the personnel cuts and obtained documentation on the services’ proposals to cut active, reserve, and civilian personnel. To obtain information on how the services plan to achieve the cuts and how these cuts will affect the services’ ability to execute the national military strategy, we interviewed officials who were involved in developing and refining the individual service plans and reviewed service studies and analyses that supported the proposed cuts. We also obtained documentation from the services on data included in the fiscal year 1999 FYDP and compared this data with the services’ May 1997 plans. 
Finally, we interviewed the Acting Director, OSD Program Analysis and Evaluation Office, regarding DOD’s plans to monitor the services’ progress in implementing the cuts and obtained and analyzed information concerning the services’ methods for reflecting in their budgets the potential impact of outsourcing. We conducted our work from May 1997 to February 1998 in accordance with generally accepted government auditing standards. We are providing copies of this report to other appropriate congressional committees; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Director, Office of Management and Budget. We also will provide copies to other interested parties on request. Please call me at (202) 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VII. Table I.1 shows the projected active military, reserve, and civilian end strength for fiscal year 2003 if all personnel cuts programmed in the fiscal year 1998 Future Years Defense Program (FYDP) and directed in the Quadrennial Defense Review (QDR) are implemented. The QDR directed that the Army cut 15,000 active military, 45,000 reserve component, and 33,700 civilian personnel. These cuts represent a reduction of 3 percent for the active military, 5 percent for the reserve component, and 15 percent for the civilian personnel end strengths projected for fiscal year 2003 before the QDR. The Office of the Secretary of Defense (OSD) estimated these cuts would save $1.5 billion by fiscal year 2003. The Army has refined its plan since May 1997, but some elements still remain undefined. For example, the Army has allocated all of the active military cuts among the major commands, but the commands have not identified some of the specific functions to cut. 
Likewise, over one-half of the reserve component cuts have not been specifically identified, and the majority of the civilian cuts are based on aggressive outsourcing efforts that are largely undefined. The Army plans to implement various initiatives to achieve the QDR cut. These initiatives are shown in table II.1. The Army plans to transfer some combat support and combat service support missions to the reserve component, which will eliminate about 7,000 active military positions. The reserve component has already taken over about one-half of these missions. According to Army officials, these transfers were based on an Army study that concluded that about 3,400 late-deploying combat support and combat service support positions could be transferred to the reserves. The remaining 3,600 positions were to be identified in the Total Army Analysis, expected to be released in March 1998. The major commands have not specifically identified about one-half of the 2,900 cuts to their institutional forces. The Army allocated percentage cuts to most of the major commands based on the judgment of senior Army leaders. For example, commands that were considered low priority received a higher percentage cut than commands that were considered a higher priority. In February 1997, we reported that allocating positions based on available budgets, without defining workload requirements, leads to across-the-board cuts that reduce funds available to all commands regardless of relative need. The Army has identified all of the cuts associated with active training support to the reserve component. These cuts were based on an Army study that concluded, among other things, that three existing active headquarters components that support reserve training could be merged into one. According to National Guard officials, the Guard supports the concept of centralized training support to the reserve component but is concerned that these reductions could result in less training support for some units. 
These cuts comply with section 414(a) of Public Law 102-190, as amended, which requires the Army to provide a minimum of 5,000 active personnel to provide training support to the reserve component. The Army Materiel Command has identified all of its active military cuts. The Command was reduced by 1,900 active military personnel, but only 1,300 of these cuts will be used to meet the QDR reduction. The remaining 600 cuts will be used to offset other force structure adjustments within the Army. In developing its plans to reach the QDR cut, the Command first identified positions that it wanted to remain with the military, such as commanders and chaplains, and then decided to cut about 1,000 positions by ceasing to perform some missions. For example, the Command eliminated the new equipment training and developmental equipment testing missions. According to Command officials, project managers for new weapon systems will have to fund any new equipment training in the future. On the other hand, the Army National Guard may assume some of the developmental testing mission. The Command allocated the remaining cuts across its subordinate commands. The Army plans to cut 1,000 positions as part of a plan to reduce and relocate some of the positions that are currently in Panama. For example, the Army plans to eliminate about 400 positions in an infantry battalion that is no longer needed. The Army plans to cut its Medical Command by about 3 percent, or 800 positions, which is proportionate with the overall reduction to the active Army. According to Office of the Surgeon General officials, these cuts are based on changes in workload and populations served. Although the Medical Command has tentatively identified about 400 of its cuts, these cuts will not be finalized until the Total Army Analysis is complete. At the time that we completed our work, the Command had not identified how to allocate the remaining cuts. 
The Army has identified about 300 of the cuts associated with its plans to restructure military intelligence. According to Army officials, an ongoing study, expected to be completed in May 1998, will identify the remaining intelligence positions to be cut. Finally, the Army plans to cut 300 positions from the joint staff and defense agencies and 200 positions from the 82nd Airborne Division. Army officials noted that the Joint Staff and defense agencies positions were identified as part of the Secretary of Defense’s recommendation, in the November 1997 Defense Reform Initiative, to reduce headquarters. The Army plans to cut 200 positions that are no longer required in the 82nd Airborne Division. The QDR directed that the Army reserve components be reduced by 45,000 personnel. At a meeting convened in June 1997 to reach agreement on how the reductions should be allocated, the reserve components agreed to reduce end strength by 20,000 (17,000 in the National Guard and 3,000 in the Reserve) by fiscal year 2001. Officials have not agreed on how the remaining 25,000 personnel reduction would be allocated. For budgeting purposes, the Army allocated 21,000 of the remaining cuts to the Army National Guard and 4,000 to the Army Reserve. The Army National Guard plans to achieve the initial 17,000 reduction—5,000 in fiscal year 1998, 5,000 in fiscal year 1999, and 7,000 in fiscal year 2000—through attrition. It does not plan to reduce force structure along with the personnel cuts. The Army National Guard plans to distribute the reductions among the states, based on their historical ability to recruit and maintain Guard members, and reallocate personnel as necessary to ensure that units with priority missions maintain readiness. An Army National Guard readiness official stated that the reductions will probably result in understaffing some institutional force units and the combat divisions. 
As we reported in March 1996, DOD and Army studies noted that many Army National Guard combat units are not needed to meet the national security strategy. Although the Army has programmed a reduction of an additional 21,000 personnel between fiscal years 2001 and 2003, the National Guard opposes these cuts and therefore has no specific plans to implement them. To achieve its cuts, the Army Reserve plans to eliminate 3,000 individual mobilization augmentees in fiscal year 2000. Army officials stated that about 1,500 personnel are in medical positions that, according to a recently completed medical reengineering initiative, are excess to requirements. The remaining 1,500 cuts will be based on ongoing reviews of all Army Reserve Individual Mobilization Augmentee positions. The Army has programmed an additional reduction of 4,000 personnel between fiscal years 2001 and 2003; however, the Army Reserve is waiting for the Total Army Analysis results to determine how these cuts will be made. Although the QDR directed that the Army cut 33,700 civilian personnel, the Army’s plans are not completely defined and are based on assumptions that could make it difficult to achieve all of the cuts. For example, the Army plans to achieve the majority of these cuts through outsourcing; however, a significant portion of the plan has not been clearly defined. Moreover, the Army assumed it could cut about 5,300 positions by reengineering the Army Materiel Command, but it does not have specific plans to achieve the majority of these cuts. The Army also assumed that it could cut 2,400 military technicians, but these cuts may be delayed because they are directly tied to force structure reductions that are not currently programmed. In fact, the Army Reserve plans to reduce 200 military technicians and 200 civilians instead of 400 military technicians. 
The Army’s outsourcing plan may be difficult to achieve because it assumes that all eligible positions in commercial activities can be competed and that the Army can increase the study population by reclassifying some positions that cannot currently be competed. To achieve the QDR cuts, the Army plans to compete 48,000 civilian positions—34,000 positions currently in commercial activities and 14,000 positions that must be reclassified before they can be competed. The Army assumes that it can compete all of the 34,000 civilian positions in commercial activities. However, unlike the Air Force, the Army has not done any study to determine if all positions can be competed. The Air Force found that it is not practical to compete many positions in commercial activities because they are spread across many units and functions. The Army has initiated studies covering about 15,000 of the 34,000 positions in commercial activities. However, some of these positions are being used to satisfy personnel cuts that were identified before the QDR. For example, a Test and Evaluation Command official noted that almost all the positions eliminated through the ongoing studies will be used to satisfy personnel reduction targets that existed before the QDR and not to meet QDR cuts. Moreover, the Army has not identified the specific functions by command or installation for the remaining 19,000 positions to be studied. Army officials stated that the major commands would identify the specific functions to be studied as part of each year’s budget. The Army’s potential study universe includes 14,000 positions that are currently considered inherently governmental and therefore cannot be competed. According to Army officials, OSD is currently examining whether relevant civilian positions are consistently and properly classified as either a commercial activity or an inherently governmental function. 
The Army expects that this effort will result in 14,000 positions being reclassified as commercial activities and therefore becoming eligible to be competed. However, there is currently no data available to support this assumption. The majority of the civilian cuts allocated to the Army Materiel Command have not been specifically identified. The Command has identified specific plans for about 3,200 of its 8,500 civilian cuts. For example, the Command plans to eliminate the School of Engineering and Logistics at Red River Army Depot, Texas, and reduce staff oversight of installation management commandwide. The Command does not presently have specific plans for the remaining 5,300 reductions. According to Command officials, the remaining reductions are not scheduled to occur until after fiscal year 2000, which should allow the Command time to develop plans to achieve these cuts. As part of the civilian cuts, the Army plans to cut 2,200 military technicians—2,000 in the National Guard and 200 in the Army Reserve. However, the cuts in the Army National Guard may not occur. Congress sets annual end strengths for dual-status military technicians, which would have to be reduced from current levels to accommodate the planned cuts. Alternatively, 10 U.S.C. 10216 states that DOD must document reductions in force structure if it budgets for a lower number of dual-status military technicians than the authorized level. Army National Guard officials stated that they do not plan to reduce force structure as part of the QDR, so they do not intend to reduce the number of military technicians. The Army is currently working to resolve this issue. On the other hand, the Army Reserve plans to reduce force structure to eliminate its military technicians. The QDR directed that the Air Force cut 26,900 active, 700 reserve, and 18,300 civilian personnel. 
These cuts represent a reduction of 7 percent for the active military, 0.4 percent for the reserves, and 11 percent for the civilian personnel end strengths projected for fiscal year 2003 before the QDR. OSD estimated that these cuts would reduce personnel costs by about $790 million by fiscal year 2003. However, the Air Force did not program all of the QDR cuts in the fiscal year 1999 FYDP because (1) it decreased its estimate of the number of military and civilian positions that can be eliminated through outsourcing, (2) OSD deferred most of the Air Force’s plan to restructure its combat forces because it was not executable by fiscal year 2003, and (3) the Air Force decided not to cut reserve personnel. Therefore, the Air Force’s fiscal year 1999-2003 budget accounts for only about $600 million, or 76 percent, of the savings it planned to achieve through the personnel cuts. Furthermore, the actual personnel cuts could be significantly lower than the amount programmed because of optimistic assumptions that the Air Force made regarding the potential for outsourcing. The Air Force has not included about 5,600 of the 26,900 active military cuts directed by the QDR in the fiscal year 1999 FYDP because (1) it found problems with the outsourcing estimates it had used in May 1997 as input for the QDR cuts and (2) OSD deferred the majority of Air Force plans to restructure some fighter and bomber squadrons. These actions lowered the planned savings by about $156 million. Table III.1 shows the specific initiatives the Air Force plans to use to implement the QDR cuts, the differences between the May 1997 QDR plan and the fiscal year 1999 FYDP, and the estimated impact of the differences on savings. 
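The savings shortfall described above follows directly from the dollar figures in the text. The sketch below is illustrative only, restating that arithmetic; all figures come from the report.

```python
# Illustrative check of the Air Force savings figures discussed above.
osd_savings_estimate = 790  # $ millions: OSD estimate of savings by FY 2003
programmed_savings = 600    # $ millions: savings in the FY 1999-2003 budget

share_programmed = programmed_savings / osd_savings_estimate
shortfall = osd_savings_estimate - programmed_savings

print(f"share of planned savings programmed: {share_programmed:.0%}")  # 76%
print(f"savings not programmed: ${shortfall} million")                 # $190 million
```

The $190 million shortfall matches the report's component figures: about $156 million from the outsourcing and force structure changes, $5 million from the reserve decision, and $29 million from the civilian double count.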
[Table III.1, comparing the May 1997 QDR plan with the fiscal year 1999 FYDP by initiative, is not reproduced here; its totals show 5,600 active military cuts, and about $156 million in associated savings, dropped from the fiscal year 1999 FYDP relative to the May 1997 QDR plan.] Before the QDR, the Air Force began a study, known as Jump Start, to examine the potential to outsource military and civilian positions in commercial activities. In March 1997, we reported that this initiative should enable the Air Force to reduce the size of the active force. Although Jump Start was not complete in May 1997 when the QDR report was issued, the Air Force had developed preliminary estimates of the number of active and civilian personnel that could be reduced through outsourcing and used these estimates, along with other ongoing Air Force outsourcing initiatives, to develop its QDR proposal to reduce active personnel. The Air Force’s Jump Start analysis was fairly detailed. Specifically, the Air Force analyzed its commercial activities by major function and unit and obtained input from both functional specialists and major commands that would be affected. However, after the QDR was released, the Air Force found that it had double counted—or overstated the potential to reduce—about 3,300 military positions in Jump Start. The Air Force corrected this error in its fiscal year 1999 budget. As a result, the fiscal year 1999 FYDP assumes that 3,300 fewer active military positions will be eliminated through outsourcing by 2003 than the amount assumed in the Air Force’s May 1997 QDR plan. The Air Force’s May 1997 QDR plan also included several initiatives to restructure some combat forces that would have eliminated about 4,800 active military positions. In May 1996, we reported that the Air Force could consolidate its fighter squadrons and maintain the same number of aircraft but carry out its missions with fewer active duty personnel. We developed options that could eliminate between two and seven squadrons. 
However, the fiscal year 1999 FYDP assumes that only about 1,400 of these positions will be eliminated. During the QDR, the Air Force developed a plan to increase the size of some active fighter squadrons from 18 to 24 aircraft and transfer 1 active fighter wing to the reserves. The Air Force also proposed to increase the size of some bomber squadrons from 12 to 18 aircraft. However, OSD deferred much of the fighter restructuring and the entire bomber consolidation plan because it determined they were not executable by fiscal year 2003. OSD was concerned that the restructuring plan could be construed by Congress as violating its guidance to refrain from any planning for future base closures. In preparing the fiscal year 1999 budget, OSD cut the active force by about 2,300 through other initiatives, which offset some of the planned force structure reductions. The QDR report stated the Air Force reserve component would be reduced by 700 personnel. This decision was based on an Air Force plan to cut 700 civil engineering positions in the reserves. According to Air Force officials, these positions will be eliminated because they have no wartime mission. However, the Air Force has subsequently decided to increase the reserve end strength to cover an existing shortage in security police units. Thus, the Air Force will not realize an estimated $5 million in savings. The Air Force planned to achieve all but 100 of its 18,300 civilian cuts through outsourcing. However, the fiscal year 1999 FYDP does not include 2,300 of the civilian cuts mandated by the QDR because the Air Force has revised its estimate of the number of civilians that can be outsourced. After the QDR was released, the Air Force found that it had double counted—or overstated the potential to reduce—about 2,300 civilian positions in Jump Start. Thus, the Air Force’s planned savings were reduced by an estimated $29 million. 
The Air Force’s difficulty in implementing QDR cuts may be compounded because of optimistic assumptions it made in calculating the active military and civilian outsourcing cuts included in the fiscal year 1999 FYDP. Our analysis of the Air Force’s outsourcing estimates shows that the Air Force may have a difficult time achieving as many as 3,000 military and 8,600 civilian cuts included in the FYDP. Examples of the problems we identified are as follows:

The Air Force found an additional 700 Jump Start positions that are included in the fiscal year 1999 FYDP but had already been included in the fiscal year 1998 FYDP and therefore cannot be used to meet QDR cuts.

The fiscal year 1999 FYDP includes 1,200 administrative positions from the Jump Start study that Air Force leaders determined are not good candidates for outsourcing because they are split between many different units and locations, thereby making it difficult to identify more economical ways of accomplishing the mission.

The Air Force may have overstated 5,700 civilian cuts in Jump Start because it assumed that all the military positions being eliminated would be contracted out. However, Air Force historical experience with public-private competitions under the A-76 process shows that 40 percent of these positions would remain in house with civilian employees.

An Air Force outsourcing initiative that was separate from Jump Start but was included in the fiscal year 1999 FYDP overstated civilian cuts by about 1,200 positions. Specifically, the Air Force has targeted selected functions at four bases with about 3,000 positions—2,000 military and 1,000 civilian—to outsource. However, the Air Force did not base its personnel cuts on its historical experience but assumed that all of these positions would be replaced by contractor personnel. On the basis of the Air Force’s past experience with cost comparison studies, we estimate that about 1,200 of these positions would remain in house with civilian employees. 
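The 40-percent in-house win rate drives the adjustment applied to the four-base initiative above. A minimal sketch of that arithmetic, using only the figures stated in the text:

```python
# Four-base outsourcing initiative: 3,000 positions targeted
# (2,000 military and 1,000 civilian), per the text above.
targeted_positions = 2_000 + 1_000

# Historical A-76 experience: the in-house civilian workforce wins
# about 40 percent of public-private competitions.
IN_HOUSE_WIN_RATE = 0.40

# Positions expected to remain in house with civilian employees,
# rather than being eliminated through contracting.
remain_in_house = int(targeted_positions * IN_HOUSE_WIN_RATE)
print(remain_in_house)  # 1200, matching the estimate in the text
```

The same rate underlies the 5,700-position Jump Start concern: assuming a 100-percent contractor win rate overstates the cuts by whatever share of competitions the civilian workforce historically wins.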
Air Force officials agree that these errors and assumptions will make it more challenging to achieve outsourcing estimates included in the fiscal year 1999 FYDP. The officials stated that they are continuing to work closely with the major commands to refine their estimates. The QDR report recommends that the Navy reduce 18,000 active military, 4,100 reserve, and 8,400 civilian personnel. This reduction represents about a 4.5-percent cut of the active, reserve, and civilian personnel end strengths projected for fiscal year 2003 before the QDR. The majority of the active and reserve cuts result from force structure reductions. The Navy plans to achieve the majority of the active military cuts in fiscal year 1999. The civilian cuts depend primarily on the results of reengineering studies and projected decreases in workload. The QDR cuts do not represent all the potential personnel reductions in the Navy by fiscal year 2003 because the Navy’s fiscal year 1999 budget projects savings of $2.5 billion by fiscal year 2003 from competing 10,000 military and 70,500 civilian positions. The Navy’s plan for achieving the QDR personnel cuts is summarized in table IV.1. Approximately 11,000, or 61 percent, of the active cuts are associated with force structure cuts called for in the QDR, as shown in table IV.2. The Navy plans to decrease the number of surface combatants from the current level of 128 to 116 by the end of fiscal year 2003. Navy officials said that newer ships entering the fleet will be more capable than those being deactivated. This increased capability will enable the Navy to provide forward presence and comply with other aspects of the national military strategy using fewer ships. The Navy also plans to cut about 1,500 positions by deactivating five auxiliary oiler ships staffed by active personnel and replacing them with four ships that will be reactivated and staffed by civilian personnel assigned to the Military Sealift Command. 
According to Navy officials, the four ships that will be reactivated will be more economical primarily because they require smaller crews. The Military Sealift Command operates its ships with smaller crews because it hires skilled mariners, whereas Navy ships often rely heavily on recruits that must be trained to replace more skilled sailors. In addition, the Navy plans to cut about 500 positions by reducing the crew size on eight multiproduct auxiliary ships. We had previously reported that the Navy could transfer these ships to the Military Sealift Command and save about $122.5 million. The Navy determined that these ships should continue to be staffed with smaller military crews because the ships can maintain battle group speeds and operate within battle group formations. The Navy plans to eliminate about 300 positions by decommissioning two attack submarines. The QDR states that this force reduction was based on changing post-Cold War requirements. In addition, the Navy plans to eliminate about 1,300 billets by decommissioning one submarine tender. According to a Navy official, this decision was made because submarine tenders are more expensive to operate than shore-based maintenance activities and the Navy is willing to service its submarine fleet with two tenders—one each in the Atlantic and Pacific Fleets. The Navy plans to deactivate two amphibious ships and cut about 600 positions. According to Navy officials, this cut will leave the Navy with the 36 amphibious ships required to satisfy the Navy’s long-term goal for 12 amphibious readiness groups. According to the Navy, this goal will allow it to meet wartime requirements and sustain peacetime operations. The QDR assumes the Navy will cut about 400 positions by deactivating two helicopter supply squadrons and relying on private contractors to provide this service. However, the Navy is reconsidering whether this option is viable for the helicopter detachment based in Guam. 
The QDR concluded that the Navy would maintain 10 active carrier-based airwings, but the Navy plans to reduce the size of F-14 squadrons from 14 to 10 aircraft, which will eliminate 40 aircraft and about 400 positions. A Navy official stated this action was taken because F-14s are reaching their fatigue life expectancy and are expensive to maintain. The Navy plans to fund these squadrons as if they had 12 aircraft each, which will allow the squadrons to satisfy mission requirements as well as maintain qualified pilots to transition to the F/A-18F. However, the Navy plans to return the fighter squadrons to 14 aircraft when the F-14s are replaced by F/A-18E/Fs during fiscal years 2001-08. The Navy also plans to cut about 6,500 active military positions in infrastructure-related activities. Some of the infrastructure cuts are based on the projected force structure cuts. For example, the Navy plans to cut about 1,400 intermediate maintenance positions because a smaller force will decrease workload. The Navy also plans to reduce Atlantic and Pacific Fleet headquarters by 20 percent, or about 950 positions—490 in the Atlantic Fleet and 460 in the Pacific Fleet. The Navy has identified the specific positions within each fleet to eliminate. Finally, the Navy planned to eliminate about 1,300 positions by outsourcing selected functions in the Atlantic and Pacific Fleets. Fleet officials stated that they now plan to eliminate only about 660 military positions through outsourcing because the original plan included positions that are still needed due to sea-to-shore rotation requirements. 
The Navy plans to eliminate nearly one-half of the positions in 1999 and the remainder between 2000 and 2003. The largest reserve force structure reduction results from the Navy’s decision to reduce the number of reserve P-3 squadrons from eight to seven and the number of aircraft per squadron from eight to six. The Navy’s plan will eliminate 22 aircraft and about 840 positions. According to a Navy official, it is difficult to meet overseas deployment requirements with reserve personnel, so the Navy decided it needed more active personnel to satisfy this requirement. The Navy also plans to eliminate about 240 positions by decommissioning four Naval Reserve frigates, two in fiscal year 2002 and two in fiscal year 2003. An additional 460 positions will be eliminated by deactivating the helicopter squadrons that support these frigates. However, a Navy official stated that the reserves may keep some of these frigates for an increased role in drug interdiction missions, which could reduce the number of positions originally scheduled to be cut. The Navy plans to revisit this decision during the fiscal year 2000 budget process. The Navy plans to reduce the size of helicopter minesweeper squadrons from 12 to 8 aircraft, which will eliminate about 115 positions. The minesweeping squadrons comprise active and reserve personnel and aircraft. Navy officials stated that the Navy decided to reduce the number of reserve aircraft because the reserve component could not adequately staff and maintain the squadrons to satisfy the 72-hour deployment requirement. The Navy also plans to deactivate one coastal minesweeper that supports the U.S. Southern Command, which will eliminate about 115 positions. The final reserve force structure change involves a decision to replace an F-14 squadron of 14 aircraft with an F-18A squadron of 12 aircraft. 
Navy officials noted that the F-18As are less expensive to operate and maintain than the F-14 aircraft and that the normal reserve squadron size is 12 aircraft. This action will cut about 115 positions. The Navy also plans to eliminate about 1,900 positions in various reserve support activities. Some of these cuts result from the force structure changes. For example, the Navy plans to cut about 250 maintenance positions because the reserves will have fewer aircraft to maintain. Another 185 positions will be eliminated based on the Navy’s decision to deactivate a submarine tender from the active fleet. Additional positions will be eliminated because the Navy has not been able to fill them. For example, the Seabees have not been able to fill underwater construction positions with qualified personnel primarily because they do not have the money to pay for the 6-month training course that is required for sailors to qualify for the program. The Seabees have also not been able to recruit sailors coming off active duty who possess the required skills and training. In addition, although the battalions have a wartime role, the Navy decided that it was not feasible to fund battalion positions that were not being filled. Together with reductions from Construction Battalion headquarters, the Navy plans to cut about 600 battalion positions. Likewise, the Navy plans to cut about 250 medical and 190 intelligence positions because they remained unfilled. The Navy assumed that it could cut about 3,600 positions from the Naval Facilities Engineering Command through various management efficiencies. The Navy plans to achieve one-third of the 2,500 Public Work Center cuts through productivity improvements, one-third through workload reductions, and one-third through outsourcing. The Navy plans to privatize utilities and streamline internal processes, such as acquisition reform, to achieve the productivity cuts. 
For the workload reductions, the Navy assumed that a 20-percent decline in the military construction program (including base closures) between fiscal year 1998 and 2003 would equate to a 20-percent reduction in personnel. Finally, the outsourcing cuts are based on Navy plans to study about 4,200 positions. The Navy has studies underway for approximately 1,100 of these positions. The Navy plans to study the remaining 3,100 positions in fiscal years 1999 through 2001; however, it has not yet identified the specific functions to be studied. The Navy plans to eliminate about 1,100 positions by reengineering the Naval Facilities Engineering Command’s field divisions. The Navy has started a study to determine how it can reduce the workforce by 30 percent through restructuring and streamlining operations and improve services to Navy customers. The study is scheduled to be completed in July 1998, and the reductions are scheduled to be implemented by fiscal year 2000. The Navy also assumes that it will be able to cut about 3,000 positions funded through the working capital fund based on projected decreases in workload. However, the Navy was not able to provide any documentation to support these reductions. The Navy’s fiscal year 1999 budget projects savings of $2.5 billion by fiscal year 2003 from competing 80,500 positions—10,000 military and 70,500 civilian positions—over the next 5 years. However, the Navy did not program any potential military or civilian personnel cuts based on its outsourcing program. If the Navy studies all 80,500 positions, about 6,500 military and 35,000 civilian positions could be cut. Alternatively, if the Navy does not succeed in outsourcing these positions, it will have to reduce the amount of its planned savings in subsequent budgets. To date, the Navy has announced studies covering about 18,500 of these positions, but it does not have the remaining 62,000 positions identified by function or location. 
The QDR directed that the Marine Corps cut 1,800 active, 4,200 reserve, and 400 civilian personnel. These cuts represent a reduction of 1 percent for the active military, 10 percent for the reserves, and 2 percent for the civilian personnel end strengths projected for fiscal year 2003 before the QDR. The Marine Corps did not have a specific plan for making the cuts at the time the QDR was announced. However, during the summer and fall of 1997, the Marine Corps reviewed its activities to identify ways to reduce personnel to QDR-directed levels as well as shift more resources to its highest priority war-fighting units. The Marine Corps plans to eliminate 3,000 reserve positions, 1,200 fewer than directed in the QDR. Marine Corps officials stated the revised plan will achieve approximately the same level of savings implied in the QDR-proposed reduction because more positions for reservists on full-time active duty are being eliminated. The Marine Corps plans to reduce the size of the Marine Security Force Battalion and eliminate headquarters administrative and support positions to achieve its active military cuts. The Marine Security Force Battalion has historically provided security for some Navy installations and some deployed Navy ships. These requirements have been reduced because the number of nuclear weapons storage and transfer sites is decreasing, nuclear weapons are no longer deployed on ships, and most Navy bases are open to the public. The Navy has agreed that the Marine Corps could reduce the size of the security battalion, eliminating about 1,200, or 40 percent, of its 3,000 positions. About 70 percent of these positions were associated with missions that were no longer valid. The remaining 30 percent involved proposals to meet the same level of support through more efficient use of personnel. 
The Marine Corps plans to satisfy the Navy’s security needs by increasing the number of special anti-terrorist platoons from 6 to 11, providing anti-terrorist training to the Navy personnel on Navy bases and stations as needed, and continuing to provide security guards for selected Navy activities. The Marine Corps also plans to eliminate about 600 administrative positions by improving efficiency and using new technology, although the positions have not been specifically identified. The Marine Corps developed its plan to meet active duty cuts outlined in the QDR as a part of a broader effort to shift resources to its highest priority operational requirements. In addition to developing a plan to achieve its QDR cuts, the Marine Corps also identified about 4,000 positions that could be eliminated and transferred to operational units to achieve a 90-percent staffing level in these units. For example, the Marine Corps plans to eliminate or reduce administrative and support positions, such as audio-visual and disbursing. The Marine Corps also plans to reduce or eliminate some weapon systems. For example, the Marine Corps plans to reduce about 850 positions related to cuts in certain infantry and missile systems, including some anti-tank missile positions and all 383 HAWK missile firing positions. A senior Marine Corps official said that, although these systems are valuable, they are not the most critical and that the resources can be better used in other ways. For example, despite the capability of the HAWK system, its lift requirements make movement into theater difficult. The Marine Corps plans to rely on Navy aircraft and Army Patriot missiles to replace HAWK capabilities until a future anti-ballistic missile defense is deployed. The QDR directed that the Marine Corps reduce its reserve component by 4,200. According to Marine Corps officials, the Commandant, after an extensive review of the reserve force structure, decided to cut only 3,000 positions. 
Marine Corps officials noted that, although fewer positions are being cut than the QDR directed, approximately the same level of savings will be achieved. The Marine Corps’ original estimates were based on average costs for drilling reservists, who are generally paid for weekend duty, whereas its revised plan cuts more full-time reservists, whose salaries are comparable to those of active duty personnel. The Commandant chartered a reserve force structure review group to make recommendations on restructuring the Marine Corps Reserve, within QDR guidelines, to complement the active component in meeting the requirements of the war-fighting commanders in chief. The group focused on identifying forces not critical to meeting war-fighting requirements and units at sites that are underutilized, not cost-effective, or in poor condition and eliminating redundant or unnecessary headquarters overhead. An additional consideration was to minimize the impact on individual personnel whose positions will be eliminated. This consideration meant closing units in areas where there are other units within the geographical area so that personnel displaced by closings can fill positions in other local units. On the basis of the work of the force structure review group, the Commandant decided to eliminate 3,000 reserve positions: 1,434 positions from drilling units, 695 individual mobilization augmentees, and 553 positions for reservists on active duty. The Marine Corps also plans to reduce the number of new recruits by 318 positions. Full-time reserve personnel represent the highest cost category. By eliminating more of these positions than it originally estimated, the Marine Corps expects to achieve approximately the same level of savings and lose fewer personnel than directed in the QDR. To achieve the reductions in drilling units, the Marine Corps plans to deactivate units and realign personnel and organizations at various sites throughout the United States. 
The Marine Corps plans to deactivate one theater missile defense (HAWK) unit and reorganize another, which will eliminate about 475 positions. This action corresponds to the Marine Corps’ decision to eliminate HAWK units in the active force. The Marine Corps also plans to deactivate 10 other units, which will eliminate about 740 positions. Some of these units had difficulty in recruiting and sustaining their occupational specialties and were already understaffed, whereas others had no wartime mission. Finally, other cuts are based on plans to restructure some units so that the Marine Corps can accomplish its missions more efficiently without undermining operational effectiveness. For example, the Marine Corps plans to cut about 230 positions by deactivating one Marine Wing Support Squadron and transferring some of its functions to other squadrons. The Marine Corps plans to reduce 695 individual mobilization augmentee positions in fiscal years 1998 and 1999 through attrition. The individual mobilization augmentees currently serve in many capacities, including headquarters and support activities. The Marine Corps allocated the cuts to each functional area, such as communications and information. However, it is currently conducting a review to identify the specific positions within each area to eliminate. Full-time reserve reductions will be taken by units spread throughout the Marine Corps Reserve, including small cuts in some operating units. The largest number of reductions—175—will come from the reserve air wing, which had 1,080 full-time reserve positions. The Marine Corps plans to achieve most of these reductions by reducing air wing headquarters staff (77 positions) and eliminating HAWK units (34 positions). Headquarters and support units will be cut by higher percentages than other units. 
For example, Marine Corps headquarters will eliminate 49, or 37 percent, of its 133 active reserve positions, and the Marine Corps Combat Development Command will be reduced by 15, or 75 percent, of its 20 active reserve positions. In addition, the Marine Corps plans to consolidate reserve administrative activities in one location, which will eliminate about 155 active reserve positions. To achieve the civilian cuts, the Marine Corps plans to eliminate 63 positions, or 10 percent, of the civilian staff at its headquarters and other activities in the Washington, D.C., area. The remaining 237 positions to be eliminated will be prorated among other installations worldwide. The Marine Corps has not identified the actual positions to be cut. Over the next 5 years, the Marine Corps will provide funding for fewer positions, and the local commanders will have to decide which positions to cut. Because of the long lead time, Marine Corps officials believe the cuts can be achieved through normal attrition and targeted incentives and without reductions in force. Janet St. Laurent, Mike Kennedy, Ron Leporati, Margaret Morgan, Lisa Quinn 
Pursuant to a congressional request, GAO reviewed the 1997 Report of the Quadrennial Defense Review (QDR), focusing on the: (1) basis for the personnel cuts; (2) services' plans to implement personnel cuts; (3) extent that the services believe cuts will impact their ability to execute the national military strategy; and (4) Department of Defense's (DOD) plans to monitor the services' progress in implementing the cuts. GAO noted that: (1) DOD's decision to reduce personnel as part of the QDR was driven largely by the objective of identifying dollar savings that could be used to increase modernization funding; (2) DOD officials concluded that a 10-percent force structure cut would result in unacceptable risk in implementing the national military strategy and determined that the review process had not identified sufficient infrastructure savings to meet DOD's $60-billion modernization goal; (3) thus, the Secretary of Defense directed the services to develop plans to cut the equivalent of 150,000 active military personnel to save between $4 billion and $6 billion in recurring savings by fiscal year (FY) 2003; (4) the services proposed initiatives to eliminate about 175,000 personnel and save an estimated $3.7 billion; (5) although the services relied on some ongoing studies to develop proposals to achieve the cuts, some of the analyses were limited; (6) moreover, variations existed in the services' plans; (7) considerable risk remains in some of the services' plans to cut 175,000 personnel and save $3.7 billion annually by FY 2003; (8) with the exception of the Air Force, the services have plans that should enable them to achieve the majority of the active military cuts by the end of FY 1999; (9) however, the FY 1999 future years defense program, which is the first to incorporate the QDR decisions, does not include all of the personnel cuts because the Office of the Secretary of Defense determined that some of the Air Force's active military cuts announced in May 1997 
are not politically executable at this time, according to service officials; (10) moreover, plans for some cuts are still incomplete or based on optimistic assumptions about the potential to achieve savings through outsourcing and reengineering and may not be implemented by FY 2003 as originally anticipated; (11) the Air Force made an assumption that all military positions planned to be competed would be replaced by contractors rather than relying on historical experience that the civilian workforce wins 40 percent of all competitions; (12) the Air Force military personnel cuts will focus primarily on personnel assigned to infrastructure activities rather than mission forces and will involve replacing personnel with less costly civilians or contractors rather than eliminating functions; and (13) because some aspects of DOD's plan to reduce personnel will not occur or will be delayed, it is critical that the Office of the Secretary of Defense monitor the services' progress in achieving the personnel cuts and associated savings.
Before 2006, companies choosing to participate in the Medicare Advantage program were required to annually submit an adjusted community rate proposal (ACRP) to CMS for review and approval for each plan they intended to offer. The ACRP consisted of two parts—a plan benefit package and the adjusted community rate (ACR). The plan benefit package contained a detailed description of the benefits offered, and the ACR contained a detailed description of the estimated costs to provide the package of benefits to an enrolled Medicare beneficiary. These costs were to be calculated based on how much a plan would charge a commercial customer to provide the same benefit package if its members had the same expected use of services as Medicare beneficiaries. CMS made payments to the companies monthly in advance of rendering services. In 2003, Congress enacted the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA). MMA included provisions that established a bid submission process to replace the ACRP submission process, as well as a new prescription drug benefit, both effective for 2006. Under the bid process, an organization choosing to participate in Medicare Advantage is required to annually submit a bid for review and approval for each plan it intends to offer. The bid submission includes the organization’s estimate of the cost of delivering services (submitted on a bid form) to an enrolled Medicare beneficiary and a plan benefit package that provides a detailed description of the benefits offered. In addition, each MA organization and prescription drug plan that offers prescription drug benefits under Part D is required to submit a separate prescription drug bid form, a formulary, and a plan benefit package to CMS for its review and approval. On the bid forms, MA organizations include an estimate of the per-person cost of providing Medicare-covered services. The Balanced Budget Act of 1997 (BBA) requires CMS to annually audit the submissions of one-third of MA organizations. 
In defining what constituted an organization for the purpose of selecting one-third for audit, CMS officials explained that they determined the number of participating organizations based on the number of contracts they awarded. Under each contract, an organization can offer multiple plans. Further, an organization like Humana Inc. can have multiple contracts. CMS contracts with accounting and actuarial firms to perform these audits. For audits of the contract year 2006 bid forms, CMS contracted in September 2005 with six firms. CMS provided the auditors with audit guidance. It is important to note that this guidance includes procedures to verify information used in the projection or estimation of costs submitted in the bids, not actual results or costs each year, as the bids do not report actual costs. According to our analysis of available CMS data, CMS did not meet the statutory requirement to audit the financial records of at least one-third of the participating MA organizations for contract years 2001 through 2005, nor has it done so yet for the 2006 bid submissions. We performed an analysis to determine whether CMS had met the requirement because CMS could not provide documentation to support the method it used to select the ACRs and bids for audit, nor did CMS document whether or how it met the one-third requirement for contract years 2001 through 2006. Our analysis shows that between 18.6 and 23.6 percent, or fewer than one-third, of the MA organizations (as defined by the number of contracts each year) for contract years 2001 through 2005 were audited each year. Similarly, we determined that only 13.9 percent of the MA organizations and prescription drug plans with approved bids for 2006 were audited, as of the end of our review. Table 1 summarizes our results. As stated earlier, CMS selects organizations to meet the one-third audit requirement based on the number of contracts awarded and not the total number of plans offered under each contract. 
However, to present additional perspective, we also analyzed the percentage of plans audited of the total number of plans offered by each audited organization. Our analysis shows that, with the exception of contract year 2002, the level of audit coverage achieved by CMS has progressively decreased in terms of the percentage of plans audited for those organizations that were audited. Audit coverage has also decreased in terms of the percentage of plans audited of all plans offered by participating organizations each contract year. In contract year 2006, a large increase in the number of bid submissions meant that the 159 plans audited reflected only 3.2 percent of all the plans offered. Table 2 summarizes our analysis. Regarding contract years 2001 through 2004, CMS officials told us that they did not know how the MA organizations were selected for audit, and the documentation supporting the selections was either not created or not retained. For the contract year 2005 audits, CMS officials told us that the selection criteria included several factors, such as whether the MA organization had been audited previously and whether it had significant issues. With respect to contract year 2006, CMS officials acknowledged the one-third requirement, but they stated that they did not intend for the audits of the 2006 bid submissions to meet the one-third audit requirement. They explained that they plan to conduct other reviews of the financial records of MA organizations and prescription drug plans to meet the requirement for 2006. In September 2006, CMS hired a contractor to develop the agency’s overall approach to conducting reviews to meet the one-third requirement. 
Draft audit procedures prepared by the contractor in May 2007 indicate that CMS plans to review solvency, risk scores, related parties, direct medical and administrative costs, and, where relevant, regional preferred provider organizations’ (RPPO) cost reconciliation reports for MA bids. For Part D bids, CMS indicated it also plans to review other areas, including beneficiaries’ true out-of-pocket costs. However, when our review ended, CMS had not yet clearly laid out how these reviews will be conducted to meet the one-third requirement. Further, CMS is not likely to complete these other financial reviews until almost 3 years after the bid submission date (see figure 1) for each contract year, in part because it must first reconcile payment data that prescription drug plans are not required to submit to CMS until 6 months after the contract year is over. Such an extended cycle for conducting these reviews greatly limits their usefulness to CMS and hinders CMS’ ability to recommend and implement timely actions to address identified deficiencies in the MA organizations’ and prescription drug plans’ bid processes. In its audits for contract years 2001-2005, CMS did not consistently ensure that the audit process provided information needed for assessing the potential impact of errors on beneficiaries’ benefits or payments to the MA organizations. The auditors reported findings ranging from lack of supporting documentation to overstating or understating certain costs, but did not identify how the errors affected beneficiary benefits, copayments, or premiums. In addition, although the auditors categorized their results as findings and observations, with findings being more significant depending on their materiality to the average payment rate reported in the ACR, the distinction between findings and observations was based on judgment and therefore varied among the different auditors. 
In our 2001 report, we reported that CMS planned to require auditors, where applicable, to quantify in their audit reports the overall impact of errors. Further, during the work for the 2001 report, CMS officials stated that they were in the process of determining the impact on beneficiaries and crafting a strategy for audit follow-up and resolution. However, CMS did not initiate any actions to determine such impact until after the contract year 2003 audits were completed. CMS then took steps to determine the impact and identified about $35 million, net, from the contract year 2003 audits that beneficiaries could have received in additional benefits. The only audit follow-up action that CMS has taken regarding the ACR audits was to provide copies of the audit reports to the MA organizations and instruct them to take action in subsequent ACR filings. In CMS’ audits of the 2006 bid submissions, 18 (or about 23 percent) of the 80 organizations audited had material findings that affected beneficiaries or plan payments approved in bids. CMS defined material findings as those that would result in changes in the total bid amount of 1 percent or more or in the estimate for the costs per member per month of 10 percent or more for any bid element. CMS officials told us that they will use the results of the bid audits to help organizations improve their methods in preparing bids in subsequent years and to help improve the overall bid process. Specifically, they told us they could improve the bid forms, bid instructions, training, and bid review process. CMS’ audit follow-up process has not involved pursuing financial recoveries from Medicare Advantage organizations based on audit results, even when information was available on deficiencies or errors that could affect beneficiaries. 
CMS officials told us they do not plan to pursue financial recoveries from MA organizations based on the results of ACR or bid audits because the agency does not have the legal authority to do so. According to our assessment of the statutes, CMS has the authority to pursue financial recoveries, but its rights under contracts for 2001 through 2005 are limited because its implementing regulations did not require that each contract include provisions to inform organizations about the audits and about the steps that CMS would take to address identified deficiencies, including pursuit of financial recoveries. Regarding the bid process that began in 2006, our assessment of the statutes is that CMS has the authority to include terms in bid contracts that would allow it to pursue financial recoveries based on bid audit results. CMS also has the authority to sanction organizations, but it has not done so. CMS officials believe the bid audits provide a “sentinel or deterrent effect” that encourages organizations to properly prepare their bids, because they do not know when their bids may be selected for a detailed audit. Given the current audit coverage, however, CMS is unlikely to achieve a significant deterrent effect because only 13.9 percent of participating organizations for contract year 2006 have been audited. Appropriate oversight and accountability mechanisms are key to protecting the federal government’s interests in using taxpayer resources prudently. When CMS falls short in meeting the statutory audit requirements and in resolving the findings arising from those audits in a timely manner, the intended oversight is not achieved and opportunities are lost to determine whether organizations have reasonably estimated the costs to provide benefits to Medicare enrollees. Inaction or untimely audit resolution also undermines the presumed deterrent effect of audit efforts. 
While the statutory audit requirement does not expressly state the objective of the audits or how CMS should address the results of the audits, the statute does not preclude CMS from including terms in its contracts that allow it to pursue financial recoveries based on audit results. If CMS maintains the view that the statute does not allow it to take certain actions, the utility of CMS’ audit efforts will remain limited. In our recent report, we made several recommendations to the CMS Administrator to improve processes and procedures related to meeting the one-third audit requirement and conducting audit follow-up. We also recommended that CMS amend its implementing regulations for the Medicare Advantage Program and Prescription Drug Program to provide that all contracts CMS enters into with MA organizations and prescription drug plan sponsors include terms that inform these organizations of the audits and give CMS authority to address identified deficiencies, including pursuit of financial recoveries. We further recommended that if CMS does not believe it has the authority to amend its implementing regulations for these purposes, it should ask Congress for express authority to do so. In response to our report, CMS concurred with our recommendations and stated it is in the process of implementing some of them. For information about this statement, please contact Jeanette Franzel, Director, Financial Management and Assurance, at (202) 512-9471 or franzelj@gao.gov, or James Cosgrove, Acting Director, Health Care, at (202) 512-7029 or cosgrovej@gao.gov. Individuals who made key contributions to this testimony include Kimberly Brooks (Assistant Director), Christine Brudevold, Paul Caban, Abe Dymond, Jason Kirwan, and Diane Morris. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2006, the Centers for Medicare & Medicaid Services (CMS) estimated it spent over $51 billion on the Medicare Advantage program, which serves as an alternative to the traditional fee-for-service program. Under the Medicare Advantage program, CMS approves private companies to offer health plan options to Medicare enrollees that include all Medicare-covered services. Many plans also provide supplemental benefits. The Balanced Budget Act (BBA) of 1997 requires CMS to annually audit the financial records supporting the submissions (i.e., adjusted community rate proposals (ACRP) or bids) of at least one-third of participating organizations. BBA also requires that GAO monitor the audits. This testimony provides information on (1) the ACRP and bid process and related audit requirement, (2) CMS' efforts related to complying with the audit requirement, and (3) factors that cause CMS' audit process to be of limited value. Before 2006, companies choosing to participate in the Medicare Advantage program were annually required to submit an ACRP to CMS for review and approval. In 2006, a bid submission process replaced the ACRP process. The ACRPs and bids identify the health services the company will provide to Medicare members and the estimated cost for providing those services. CMS contracted with accounting and actuarial firms to perform the required audits. According to our analysis, CMS did not meet the requirement for auditing the financial records of at least one-third of the participating Medicare Advantage organizations for contract years 2001-2005. CMS is planning to conduct other financial reviews of organizations to meet the audit requirement for contract year 2006. However, CMS does not plan to complete the financial reviews until almost 3 years after the bid submission date each contract year, which will affect its ability to address any identified deficiencies in a timely manner. 
CMS did not consistently ensure that the audit process for contract years 2001-2005 provided information to assess the impact on beneficiaries. After contract year 2003 audits were completed, CMS took steps to determine such impact and identified an impact on beneficiaries of about $35 million. CMS audited contract year 2006 bids for 80 organizations, and 18 had a material finding that affected amounts in approved bids. CMS officials took limited action to follow up on contract year 2006 findings. CMS officials told us they do not plan to sanction or pursue financial recoveries based on these audits because the agency does not have the legal authority to do so. According to our assessment of the statutes, CMS had the authority to pursue financial recoveries, but its rights under contracts for 2001-2005 were limited because its implementing regulations did not require that each contract include provisions to inform organizations about the audits and about the steps that CMS would take to address identified deficiencies. Further, our assessment of the statute is that CMS has the authority to include terms in bid contracts that would allow it to pursue financial recoveries. Without changes in its procedures, CMS will continue to invest resources in audits that will likely provide limited value.
As the primary mail carrier in the United States, USPS’ mission is to provide the nation with affordable and universal mail service. USPS’ authority was revised on December 20, 2006, with the enactment of the Postal Accountability and Enhancement Act. Through this act, Congress provided USPS with tools and mechanisms to help ensure that USPS is efficient, flexible, and financially sound. The act also introduced a rate cap for many postal services. While Congress oversees USPS and provides direction to the agency on its operations and other matters, USPS receives only a small portion of its funding from federal appropriations. According to a 2001 Mailing Industry Task Force study, the mailing industry includes businesses, organizations, and other parties (mailers) that send and rely on mail to maintain contact with their customers. The mailing industry also encompasses mail preparers, including printers and businesses that send or receive mail on behalf of a third party. Vendors and suppliers of the hardware, software, and labor related to mail processing, such as companies that help mailers improve the accuracy of their mailing lists, also are included in the mailing industry, according to this study. USPS offers several classes of mail, including First-Class, Standard, and Periodical Mail. The price for each class of mail varies, as does the level of service that USPS provides. Mailers, including both household and business customers, use First-Class Mail when sending personal mail and personalized business correspondence, such as letters, greeting cards, bills, and account statements. Mailers also may use First-Class Mail to send advertisements and merchandise. Standard Mail is the primary mail class for advertisements sent in bulk quantities and cannot be used for sending personal correspondence, such as handwritten letters, bills, or account statements. Periodical Mail primarily comprises newspapers and magazines. 
Standard Mail rates are generally lower than First-Class Mail rates, in part, because USPS typically does not provide services such as return-to-sender and forwarding for UAA Standard Mail. USPS and the mailing industry view Standard Mail as an important advertising medium for businesses, non-profit organizations, and other parties who seek to inform mail recipients about their products and services or to solicit contributions. While USPS currently receives about half of its revenue from First-Class Mail, Standard Mail became the largest class of mail (by volume) in fiscal year 2005. During 2005, mailers spent about $56.6 billion on direct mail advertising—comprising about 21 percent of all U.S. expenditures for advertising. USPS expects the volume of Standard Mail to continue to grow. UAA mail is mail that USPS cannot deliver to a specified address due to an incomplete, illegible, or incorrect address or insufficient postage, among other reasons. USPS’ treatment of UAA mail depends on the mail class. USPS forwards UAA First-Class Mail to the addressee, returns it to the sender, or, if the return address is missing, sends it to a USPS Mail Recovery Center. In general, USPS retains UAA Standard Mail and treats it as waste. Because of the large volume of UAA Standard Mail that USPS discards annually (about 317,000 tons in 2006), USPS focuses most of its mail-related recycling efforts on this material. USPS also treats the mail discarded by recipients in postal facility lobbies (discarded lobby mail) as waste. While USPS does not know how much mail post office box holders and other recipients discard in its lobbies, a USPS official stated that the amount is “trivial” relative to its total volume of mail-related waste. USPS has reported that mixed paper (i.e., UAA mailpieces, including newsprint, and discarded lobby mail) accounts for up to 70 percent of its waste stream. 
According to USPS, these materials can be used to make everything from low-grade paper products, such as hand towels and tablet backings, to wallboard and stock for fuel pellets that can be burned with coal to reduce harmful air emissions. Furthermore, according to the agency, the large volume of its UAA mail is an attractive and reliable source of the clean mixed paper needed for manufacturing these and other products. A USPS-sponsored study reported that in fiscal year 2004, UAA mail cost the agency more than $1.8 billion, which represented about 2.6 percent of USPS’ total expenses (approximately $69 billion). About two-thirds of UAA mail costs resulted from forwarding mail to the intended recipients ($422 million—23 percent) or returning it to the sender ($822 million—44 percent). The remaining one-third of UAA mail costs resulted from processing waste ($270 million—15 percent), correcting addresses ($197 million—10 percent), processing address change requests ($132 million—7 percent), and general administration and support ($24 million—1 percent). Furthermore, according to USPS, creating and sending mail that cannot be delivered costs businesses more than $2 billion annually. With the exception of UAA Standard Mail, the responsibility for recycling discarded mail primarily lies with mail recipients. While the majority of mail is recyclable, according to the Environmental Protection Agency, mail recipients and others recycled only about 39 percent of the Standard Mail they received and subsequently discarded in 2006. Studies find that the volume of products recycled depends on, among other matters, whether a recipient knows that a product is recyclable and whether the recipient has access to a recycling program or facility. According to one study, over 40 percent of the public is unaware that mail can be recycled. Numerous stakeholders we interviewed confirmed this lack of recycling awareness. 
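The UAA mail cost breakdown above can be tallied as a quick consistency check. The component costs and percentage shares below are the figures reported by the USPS-sponsored study; the Python sketch itself is only a back-of-envelope verification, not part of the study.

```python
# Back-of-envelope check of the fiscal year 2004 UAA mail cost breakdown
# reported by the USPS-sponsored study (costs in millions of dollars,
# shares in percent, both as reported).
components = {
    "forwarding": (422, 23),
    "return to sender": (822, 44),
    "waste processing": (270, 15),
    "address correction": (197, 10),
    "address change requests": (132, 7),
    "administration and support": (24, 1),
}

total_cost = sum(cost for cost, _ in components.values())
total_share = sum(share for _, share in components.values())

print(total_cost)   # 1867, consistent with "more than $1.8 billion"
print(total_share)  # 100

# Forwarding and return-to-sender together account for about two-thirds.
print(round((422 + 822) / total_cost, 2))  # 0.67
```

The components sum to $1,867 million, so the study's "more than $1.8 billion" total and the "about two-thirds" forwarding-and-return share both check out.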
Even if individuals are aware that mail is recyclable, according to a 2005 survey conducted by the American Forest and Paper Association, residents in about 31 percent of U.S. communities (14 percent of the population) do not have access to paper recycling programs. In addition, while a 2006 survey by the Federal Trade Commission found that only a small number of victims (2 percent) reported that the theft of their identity was connected to the mail, several stakeholders told us that identity theft concerns prevent some recipients from recycling their mail. In December 2006, we reported on efforts to increase the volume of materials recycled and found that, to increase recycling, U.S. municipalities need to conduct public education campaigns and ensure that access to recycling is both convenient and easy. Further, we identified federal policy options that would help municipalities increase the volume of materials recycled, including the establishment of (1) a nationwide education campaign to inform the public about recycling and (2) programs that enable consumers to recycle products by returning them to the manufacturer or some other party for recycling. These programs are known as “take-back” programs. USPS and the mailing industry formed a Greening the Mail Task Force in 1996 to identify cost-effective ways to integrate environmental considerations into mailing practices and business processes. The task force issued a final report of its activities in 1999, including its efforts to identify “green” mail attributes (environmentally preferable attributes). According to the task force, environmentally preferable mail includes, among other attributes: Mail that contains recycled paper. Mail that uses certified paper. Mail that is designed to use materials efficiently (such as “two-way” envelopes). Mail that is accurately addressed for delivery. Mail that is targeted to recipients who may wish to receive it. 
USPS’ Environmental Policy and Programs organization is principally responsible for increasing the agency’s recycling of mail-related materials and is the focal point for executing its environmental policy throughout the agency. USPS also has organizations that, among their other responsibilities, attempt to increase the amount of mail with environmentally preferable attributes. For example, Address Management’s goal is to decrease the amount of UAA mail. To accomplish this, the organization provides mailers with tools to better manage the quality of their mailing lists while, according to USPS, striving to maximize its ability to efficiently deliver mail as addressed. The Product Development organization within Marketing helps manufacturers develop mail-related products that contain recycled materials. Finally, USPS’ Sales organization—also within Marketing—promotes the use of environmentally preferable attributes in direct mail advertising and in USPS shipping materials. USPS and the mailing industry have undertaken numerous initiatives to increase the recycling of mail-related materials and increase the amount of mail with environmentally preferable attributes. For example, USPS has undertaken five key mail-related recycling initiatives, including the establishment of annual goals to increase its recycling revenue from $7.5 million in fiscal year 2007 to $40 million in fiscal year 2010 and a pilot recycling program in New York City. Representatives of the mailing industry and other stakeholders also have undertaken a wide range of initiatives to, among other actions, increase the amount of mail that is recycled. For example, three mailing industry associations recently introduced separate recycling awareness campaigns to encourage mail recipients to recycle their catalogs, envelopes, and magazines. 
In addition, the Direct Marketing Association—whose members collectively send about 80 percent of all Standard Mail—is undertaking several initiatives, including an effort to encourage mailers to use environmentally preferable mail attributes. USPS has undertaken five key initiatives to increase its recycling of mail-related materials. Specifically, USPS recently (1) established goals for increasing its recycling revenue; (2) refocused its attention on environmental matters, including mail-related recycling, and intends to require recycling where cost-effective and feasible; (3) consolidated waste management contracts to generate increased recycling revenues and reduce its waste disposal costs; (4) launched a pilot recycling program in New York City; and (5) implemented tools to track the environmental performance of its areas and districts. While USPS recently established goals for increasing its recycling revenues, inconsistencies in the way USPS collects data, if not resolved, will hamper efforts to measure its progress in meeting these goals. Furthermore, at the conclusion of our review, it was not clear whether, or to what extent, USPS would require its managers at other facilities to adopt—where applicable, feasible, mission compatible, and appropriate in view of cost and other considerations— lessons learned from its New York City pilot. In March 2008, USPS established annual goals for increasing the $7.5 million it generated from recycling mail-related materials in fiscal year 2007. Specifically, USPS intends to generate $15 million in mail-related recycling revenue in fiscal year 2008, $30 million in fiscal year 2009, and $40 million in fiscal year 2010. According to USPS, reaching its fiscal year 2010 goal could also reduce its solid waste disposal costs by $10 million annually. Thus, in fiscal year 2010, USPS could realize a full financial benefit of $50 million. 
To help reach its initial fiscal year 2008 goal, according to USPS officials, each of the agency’s nine geographic areas developed a plan to generate $2 million from recycling in fiscal year 2008. Longer term, according to these officials, the $40 million goal for fiscal year 2010 is based on the expectation that each of its 82 districts will generate an average of about $500,000 in recycling revenues. Such goals are a step in the right direction and address the need for USPS to generate additional revenue, which is one of the agency’s four strategic goals. However, by excluding savings that result from lower waste disposal costs, the goals do not reflect the full financial benefit attributable to mail-related recycling. This is because when USPS facility managers implement mail-related recycling programs, their facilities generate less waste, thereby reducing the facilities’ waste disposal costs (in addition to generating recycling revenue). Revising the agency’s goals to include the savings from lower waste disposal costs or adopting additional goals to reflect the full financial benefit of recycling would help focus USPS employees on the need to achieve greater cost reductions—consistent with a second USPS strategic goal. According to USPS officials, USPS is developing the capacity to track solid waste disposal volumes and intends to develop a plan for achieving its recycling goals. However, at the conclusion of our review, it had not agreed to revise its goals or to adopt additional goals for measuring its savings from lower waste disposal costs to reflect the full financial benefit attributable to mail-related recycling. 
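The arithmetic behind these recycling goals can be checked directly. The dollar figures below come from USPS as described above; the comparisons are only a quick sanity check, not a USPS planning method.

```python
# Sanity check of USPS' mail-related recycling revenue goals
# (all dollar amounts in millions, per USPS figures).
fy2008_goal, fy2009_goal, fy2010_goal = 15, 30, 40

# FY 2008: each of the nine geographic areas planned to generate $2 million,
# which more than covers the $15 million goal.
area_plans = 9 * 2
print(area_plans >= fy2008_goal)  # True (18 >= 15)

# FY 2010: 82 districts averaging about $500,000 each is roughly $41 million,
# in line with the $40 million goal.
district_total = 82 * 0.5
print(district_total)  # 41.0

# Full financial benefit in FY 2010: the $40 million revenue goal plus the
# estimated $10 million in avoided solid waste disposal costs.
full_benefit = fy2010_goal + 10
print(full_benefit)  # 50
```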
Regardless of whether USPS finds it beneficial to revise its goals to reflect the full financial benefit attributable to mail-related recycling, in order to measure its progress, USPS will need to (1) specify how it will measure its progress toward its goals and (2) ensure that its organizations collect and report accurate, reliable, and consistent data, which, according to agency officials, does not presently occur. For example, according to both USPS officials and our analysis of USPS documentation, some district facilities combine recycling revenues with waste disposal costs. Additionally, our analysis of USPS documentation indicates that in fiscal year 2006 at least one large district facility combined recycling revenues with waste savings attributable to recycling—which, together, comprise the full financial benefit of recycling. Inconsistent reporting practices hamper efforts to accurately measure the agency’s progress in meeting its recycling revenue goals. To partially address this problem, in March 2008, the agency’s accounting organization sent an e-mail to USPS area managers requesting that they report recycling revenues separately from waste disposal costs. While this request addresses the need to report recycling revenue separately, it does not constitute a requirement for the managers at the area-, district-, or facility-level to do so. Furthermore, the e-mail does not address matters related to the reporting of USPS’ savings from lower waste disposal costs, or require these managers to report data on their savings using a consistent method. Finally, at the conclusion of our review, USPS had neither (1) specified how it will measure progress toward its goals nor (2) required its organizations to collect and report accurate, reliable, and consistent data. Without taking further action, USPS may not be able to accurately assess its progress toward meeting its goals—regardless of which goals it eventually adopts. 
While USPS has had a mail-related recycling program in place since the 1990s, security concerns arising from the introduction of anthrax in the mail stream in 2001 caused USPS to deemphasize recycling until recently, according to USPS officials. In December 2007, however, USPS announced its intention to refocus its attention on environmental matters, including the recycling of UAA mail and mail-related materials. According to USPS, recycling will help protect the value of Standard Mail as a form of advertising, generate additional revenues, and reduce USPS’ waste disposal costs. Recycling UAA mail and other mail-related materials also provides USPS with a means to enhance its long-standing commitment to environmental leadership. Furthermore, recycling these materials appears to be consistent with the Postmaster General’s recent commitments to minimize the agency’s impact on every aspect of the environment and to act as a positive environmental influence in U.S. communities. As part of its refocused attention on environmental matters, in July 2007, USPS issued a revised policy—termed a Management Instruction—that addresses its waste management issues. With respect to recycling, the policy encourages district managers and installation heads to establish recycling programs to collect UAA mail and discarded lobby mail in central locations. While the policy indicates that employees at USPS’ plants and post offices “should recycle” these mail-related materials, they are not required to do so if it is not cost-effective or logistically feasible. For example, according to USPS officials, recycling may not be cost-effective or logistically feasible at facilities that lack storage space or generate a limited quantity of recyclable mail-related materials. 
The July 2007 policy superseded USPS’ previous policy and guidelines, issued in September 1995, which (1) were specific to recycling mail-related materials and (2) provided significantly more guidance on recycling UAA mail, discarded lobby mail, and facility paper waste. In addition, while not explicitly stated, the 1995 policy “technically required” facility managers to implement mail-related recycling programs at all USPS facilities that generate these types of waste, according to agency officials. According to the prior policy, effective targeting of UAA mail can achieve many objectives. For example, it “can help meet postal waste reduction goals and implement more efficient and environmentally sound alternatives to solid waste disposal practices.” In addition, the 1995 policy noted that such an “effort saves money in solid waste disposal and reduces criticism that third-class mail volumes contribute to municipal solid waste problems.” Finally, according to the prior 1995 policy, recycling UAA mail also enhances the viability of Standard Mail as an environmentally friendly advertising medium. To help USPS accomplish these objectives, the 1995 policy required facility managers to (1) keep records of revenues generated by recycling, as well as the costs and quantities of solid waste generated at their facilities; (2) conduct an annual evaluation of their practices related to discarded mixed paper, disposal methods, and recycling alternatives; and (3) supply information on their annual evaluations, including the costs, volumes, disposal methods, recycling alternatives, and barriers associated with implementing a mail-related recycling program at their facilities, to their district managers. The prior policy also established other responsibilities. For example, area managers were responsible for ensuring that facilities that generate UAA mail conducted the annual evaluations and for assisting district managers in finding markets for the material. 
USPS’ latest policy, issued in 2007, does not address these and other matters. During the course of our work, we discussed differences between the two policies with USPS officials, including the requirement for an annual evaluation of facility practices related to discarded mixed paper. According to USPS officials, the omission of this requirement was unintended. To address this omission as well as others, the officials indicated that USPS would develop a new policy that will, among other things, (1) provide employees with specific information on how to implement recycling programs at their facilities, (2) require USPS facility managers to implement mail-related recycling initiatives unless doing so is not cost-effective or logistically feasible, and (3) specify requirements for reporting data on USPS’ mail-related recycling activities. USPS expects to release its revised policy, as well as guidance for implementing its recycling program, later this year. To increase its recycling revenues and reduce its waste disposal costs, USPS began a multi-phased process to consolidate its waste disposal and recycling contracts at USPS facilities nationwide. In the first phase, completed in January 2006, USPS centralized its negotiation and management of all waste disposal and recycling contracts at the Memphis, Tennessee, Category Management Center (Memphis Center). In the second phase, which recently began, the Memphis employees are working with managers at facilities with existing waste disposal and recycling contracts and are attempting to convince these managers to incorporate their facilities within larger, regionally-based USPS waste disposal and recycling contracts. Such contract consolidations are consistent with our prior findings. Specifically, in 2004, we reported that consolidating contracts allows private-sector companies to leverage their buying power and identify more efficient ways to procure goods and services. 
The Memphis Center offers facility managers four types of contracts: (1) waste disposal only; (2) removal of recyclables only; (3) removal of waste and recyclables; and (4) Total Solid Waste Management contracts, which cover both waste disposal and recycling. In fiscal year 2007, the four types of contracts managed by the Memphis Center resulted in approximately $6.6 million in combined recycling revenues (about $6 million) and waste disposal cost savings (about $600,000). Total Solid Waste Management contracts attempt to both (1) maximize recycling revenues and (2) minimize waste disposal costs by using two methods to collect and transport mail-related recyclable materials. The first method—“backhauling”—uses USPS’ labor and existing transportation network to collect and transport mail-related recyclable materials from local USPS facilities (e.g., post offices) to a single USPS location, such as a mail processing and distribution center, where the materials are consolidated prior to USPS’ subsequent delivery to a paper mill or other vendor interested in purchasing the materials. When consolidation at a single USPS facility is not feasible, USPS uses a second method—“milk runs”—to collect its recyclable materials. Milk runs use contractors, such as paper brokers and other vendors, to collect recyclable materials stored at local USPS facilities and transport them to their destination. USPS officials stated that because the contractor uses its own resources, including its labor and transportation, to collect and transport the materials, USPS must pay for these services—a factor that reduces both USPS’ recycling revenues and its savings from lower disposal costs. Total Solid Waste Management contracts may include a shared savings component with the contractor, whereby the contractor receives a portion of USPS’ recycling revenues and savings from lower waste disposal costs.
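The cost tradeoff between backhauls and milk runs described above can be sketched as a simple net-benefit comparison. This is an illustrative sketch only; the function and all dollar figures are hypothetical assumptions, not figures drawn from USPS contracts.

```python
def net_benefit(recycling_revenue, disposal_cost_avoided, collection_cost):
    """Net financial benefit of one collection method: revenue earned from
    selling recyclables, plus waste disposal costs avoided, minus the cost
    of collecting and transporting the material."""
    return recycling_revenue + disposal_cost_avoided - collection_cost

# Hypothetical figures for the same tonnage collected two ways.
# Backhauls use USPS' existing trucks and labor, so the marginal
# collection cost is low; milk runs pay a contractor fee instead.
backhaul = net_benefit(10_000, 4_000, 1_500)  # 12,500
milk_run = net_benefit(10_000, 4_000, 5_000)  # 9,000
print(backhaul, milk_run)
```

Under these assumed figures, backhauling yields the larger net benefit, which is consistent with USPS officials’ statement that contractor fees reduce both recycling revenues and disposal savings.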
Cost-sharing arrangements are intended to encourage the contractor to implement initiatives that maximize USPS’ recycling revenues while minimizing its waste disposal costs. One example of such a contract is USPS’ contract with Rand-Whitney, which covers 457 facilities throughout Pennsylvania. Rand-Whitney developed a recycling program for each facility that, according to documentation supplied by USPS, generated $177,000 in recycling revenues and reduced USPS’ waste disposal costs by $98,000—for a total financial benefit of $275,000 from July 2006 to June 2007. Because this contract allows Rand-Whitney to share USPS’ revenues and savings, Rand-Whitney received 25 percent of this total, or approximately $69,000, during the 12-month period. In an attempt to demonstrate the value of mail-related recycling programs, USPS began a pilot program in New York City in May 2007. The goal of the pilot—termed “New York City SOARs!” (Saving of America’s Resources)— is to identify opportunities to establish and expand recycling programs in USPS facilities throughout New York City and, based on lessons learned, identify recycling practices that can be used in other USPS facilities. USPS is implementing the pilot in stages. The first stage assessed postal recycling activities underway in each of New York City’s five boroughs, using a variety of factors, such as costs to USPS, feasibility (including logistical considerations), and mission compatibility. The report on the assessment, issued in September 2007, concluded that recycling in New York City (1) can generate recycling revenues, (2) will substantially reduce USPS’ waste disposal costs, (3) will not interfere with postal operations, and (4) will require only a “modest incremental” effort to accomplish. 
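The Rand-Whitney shared-savings split reported above can be reproduced with a short calculation. The function name and structure are illustrative assumptions; only the dollar amounts and the 25 percent contractor share come from the documentation USPS supplied.

```python
def shared_savings(recycling_revenue, disposal_savings, contractor_share):
    """Split the total financial benefit of a shared-savings contract
    between USPS and the contractor."""
    total_benefit = recycling_revenue + disposal_savings
    contractor_portion = total_benefit * contractor_share
    usps_portion = total_benefit - contractor_portion
    return total_benefit, contractor_portion, usps_portion

# Rand-Whitney figures for July 2006 to June 2007.
total, contractor, usps = shared_savings(177_000, 98_000, 0.25)
print(total)       # 275000 total financial benefit
print(contractor)  # 68750.0, reported as approximately $69,000
print(usps)        # 206250.0 retained by USPS
```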
Based on the results of the first stage, the report suggested specific recycling activities in each borough, including (1) the initiation of backhauling recycling programs for UAA Standard Mail and mail-related materials in Manhattan, Brooklyn, and the Bronx, and the expansion of the existing backhauling program in Queens; (2) the implementation of milk runs for UAA Standard Mail and mail-related materials in Staten Island; (3) the designation of sufficient loading dock space at USPS processing and distribution centers in Manhattan to accommodate trailers for storing and transporting mail-related recycling materials; and (4) the establishment of recycling programs for discarded lobby mail in all five boroughs. USPS is optimistic that, by carefully implementing and enhancing mail-related recycling programs throughout New York City, it could generate approximately $1.3 million per year in recycling revenues and save an additional $800,000 in waste disposal costs. The pilot’s second stage began in January 2008 and, in February 2008, according to USPS officials, the agency issued a solicitation for a contract to provide waste disposal and recycling services to USPS facilities in four of New York City’s five boroughs—Brooklyn, the Bronx, Manhattan, and Queens. USPS expects the contract will begin in October 2008. This pilot is in its early stages and, at the conclusion of our review, USPS did not have a plan or timeline for, among other actions, ending the pilot program or issuing a final report on the pilot. Furthermore, it was not clear whether, or to what extent, USPS would require its managers at other facilities to adopt—where applicable, feasible, mission compatible, and appropriate in view of cost and other considerations—lessons learned from the pilot. USPS also has implemented tools for tracking the environmental performance of its areas and districts.
One such tool, called an “environmental scorecard,” tracks and ranks the environmental performance of USPS’ nine geographic areas. In fiscal year 2007, USPS used the tool to collect information needed to rank each area’s environmental performance in 12 general areas, such as pollution prevention, which includes recycling. To measure environmental performance at the district level, USPS also created a budgetary line-item for tracking each district’s recycling revenues. USPS plans to share the results of both its environmental evaluation tool and its analyses of the districts’ recycling revenues (from the budgetary line-item) with its postal managers. Because USPS officials believe that the agency’s employees are highly competitive, relative differences between the areas and districts are expected to foster competition and increase recycling revenues throughout the postal network. While USPS currently does not use the results of the environmental evaluation tool or its analyses of the districts’ recycling revenues for recognizing significant mail-related recycling achievements, according to USPS officials, USPS could choose to do so in the future. In the interim, according to these officials, USPS (1) is considering establishing a program to nominate facilities, teams, and individuals for environmental excellence in seven environmental categories—one of which includes recycling—and (2) has changed its accounting policy to allow districts to receive credit for the revenue each district generates from recycling. To increase the amount of mail with environmentally preferable attributes, USPS has undertaken two multi-faceted initiatives. Specifically, USPS (1) initiated a second Greening the Mail Task Force to, among other activities, promote the use of environmentally preferable attributes in mail and (2) established a UAA mail cost reduction goal.
It also has numerous actions underway that may help the agency meet its UAA mail cost-reduction goal. Such actions include USPS’ implementation of a new mail processing method that identifies and redirects incorrectly addressed mail to the intended addressee before delivery is attempted. USPS also has taken other actions to increase the amount of mail with environmentally preferable attributes. We discuss these actions in appendix II of this report. In September 2007, USPS initiated a second Greening the Mail Task Force to, among other goals, increase the amount of mail with environmentally preferable attributes. As discussed earlier, USPS disbanded the first task force in 1999 after it issued a final report that, among other matters, identified environmentally preferable attributes associated with mail. The most recent task force—formed to address mail-related issues on a long-term basis—includes USPS officials, mailing industry representatives, and other stakeholders. The task force has five subcommittees, each with a different goal. Table 1 identifies each of the subcommittees’ relevant goals. USPS established a UAA mail cost-reduction goal in 2006 and has developed numerous tools that mailers can use to improve the accuracy of their mailing lists and reduce the amount of UAA mail they send. More recently, USPS introduced two new requirements that are expected to help USPS meet its UAA mail cost-reduction goal. In addition, USPS has implemented a new mail processing method that identifies and redirects incorrectly addressed mail to the intended addressee before delivery is attempted.

USPS Established a UAA Mail Cost-Reduction Goal

In addition to its recent establishment of goals for increasing the revenue USPS generates from recycling mail-related materials, in 2006, USPS set a goal of reducing UAA mail by 50 percent by fiscal year 2010. In the summer of 2007, USPS clarified this goal, specifying that it applied to the cost—not the volume—of UAA mail.
USPS is developing measures to accurately assess its progress in meeting the UAA mail cost-reduction goal. According to USPS, its interim measures are not sufficient for this purpose; however, USPS officials believe that its May 2009 deployment of Intelligent Mail, which we discuss later in this report, will provide the data needed to accurately measure its progress in meeting this goal.

USPS’ Tools for Improving the Accuracy of Mailing Lists Reduce UAA Mail

USPS has developed numerous tools that mailers can use to increase the amount of mail that is accurately addressed for delivery. Mail that is accurately addressed decreases UAA mail volume, which, in turn, decreases USPS’ operational and waste disposal costs. Since these address accuracy tools decrease USPS’ operational costs, USPS provides lower postage rates (worksharing rates) to mailers who use them. Brief descriptions of some of USPS’ address accuracy tools follow:

“Address Element Correction” identifies mailpieces that are potentially UAA and corrects small errors in the addresses (e.g., the omission of a directional indicator such as “NW,” or errors that refer to an avenue as a street).

“Delivery Point Validation” verifies that the address on a mailpiece exists in USPS’ database of addresses to which it delivers.

“National Change-of-Address LINK” allows mailers to check their mailing lists against USPS’ National Change-of-Address database, which contains updated address information for mail recipients who have filed change-of-address notices with USPS.

“Address Change Service” allows mailers to receive, for a fee, electronic notices that inform them when USPS cannot deliver their mailpieces. For First-Class Mail, these electronic notices reduce the amount of mail USPS must return to the sender, thereby decreasing UAA mail and USPS’ operating costs.
For Standard Mail—which USPS generally is not obligated to return to the sender—the electronic notices (1) inform mailers when their Standard mailpieces are UAA and (2) provide mailers with the correct addressing information. This information enables Standard mailers to update their mailing lists with corrected addresses prior to their next mailing, thereby reducing future UAA mail-related costs.

USPS Introduced Two New Requirements That Are Expected to Reduce UAA Mail

USPS also introduced two recent changes that, according to USPS officials, will reduce UAA mail and, consequently, help USPS meet its UAA mail cost-reduction goal (i.e., a 50 percent reduction by fiscal year 2010). First, USPS has revised its “Move Update” requirement, which currently obligates First-Class mailers to use at least one approved address accuracy tool (such as National Change-of-Address LINK or the Address Change Service) to qualify for worksharing rates. In September 2007, USPS expanded this requirement to include mailers who send Standard Mail, effective November 23, 2008. Furthermore, USPS will begin requiring First-Class and Standard mailers to update their mailing lists—using an approved address accuracy tool—95 days prior to each of their mailings. Second, beginning in May 2009, USPS intends to require mailers to use a new barcode—called the “Intelligent Mail Barcode”—on their mailpieces to qualify for worksharing rates. According to USPS officials, the new barcode will allow USPS and mailers to track individual mailpieces as they move through the mail stream. USPS officials believe that the capability to track mailpieces will reduce UAA mail volumes—and, potentially, USPS operating costs—because mailers will be able to use the barcode to determine which of their mailpieces cannot be delivered and correct their mailing lists accordingly.
USPS has initiatives underway, including an agreement with the Bank of America, to test the effectiveness of the Intelligent Mail Barcode before it is fully implemented.

USPS Has Implemented a Method to Identify and Redirect Improperly Addressed Mail before Delivery Is Attempted

In September 2007, USPS also implemented a new, nationwide mail processing method—called the “Postal Automated Redirection System”—that identifies and redirects mailpieces to individuals who have moved. According to USPS, if a mail recipient has moved and filed a change-of-address request with USPS, the automated redirection system identifies mailpieces addressed to his or her prior address when these mailpieces first enter the mail stream and initiates one of three possible actions. The first possible action is for USPS to immediately redirect the mailpiece to the new, correct destination. Second, if the mailpiece is not eligible for forwarding or if the mailer has authorized its disposal, USPS would remove the mailpiece from the mail stream and discard it. Finally, if requested by the mailer, USPS would return the mailpiece to the mailer (i.e., the sender). In the past, USPS redirected mailpieces only after delivery had been attempted. In such cases, Standard Mail was returned to the postal facility and discarded as waste (and disposed of through recycling or some other means), while First-Class Mail was processed and forwarded, returned to the sender, or sent to a mail recovery center. Because USPS generally does not forward or return UAA Standard Mail—regardless of when it is first detected—the automated redirection system will not reduce the amount of UAA Standard Mail that USPS must eventually discard. However, USPS officials believe the automated redirection system will reduce the cost of processing UAA mail, thereby contributing to the agency’s UAA mail cost-reduction goal.
In addition to USPS’ efforts, the mailing industry and other stakeholders have undertaken several key initiatives to increase the volume of mail-related materials that are recycled. Some of these initiatives were developed by mailing industry associations, while others are the result of efforts by individual mailers and organizations in the paper and environmental advocacy industries. For example, in 2007, the Direct Marketing Association (DMA), the Envelope Manufacturers Association, and the Magazine Publishers of America developed nationwide mail recycling awareness campaigns. While similar in nature, the three programs use different logos to increase recycling awareness and are intended for different types of mailpieces (e.g., catalogs, envelopes, and magazines). DMA members who participate in the association’s “Recycle Please” program are expected to include a logo in their catalogs and other mailpieces to encourage mail recipients to recycle their mailpieces after reading them. The envelope association’s program—called “Please Recycle”—promotes mail recycling by encouraging manufacturers to place a recycling logo on the front of envelopes and other packaging materials. Finally, the magazine association—which also calls its program “Please Recycle”—developed recycling logos and a full-page recycling advertisement that the association encourages its members to include in their magazines in order to increase the volume of magazines recycled. Participation in these recycling awareness campaigns varies. Specifically, based on our calculations of data provided by officials from DMA and the envelope and magazine associations, as of mid-March 2008, about 2 percent, 30 percent, and 10 percent of their members participated in these programs, respectively. (The logos used for these recycling awareness campaigns are depicted in fig. 1 of app. III.)
The second key recycling initiative involves the National Recycling Coalition, which intends to develop new recycling logos to replace the familiar “chasing arrows” logo currently displayed on many products. According to coalition representatives, one of the new logos will be specific to mail. The coalition believes that the existing chasing arrows logo—which has been used for many years—is confusing to the public. According to coalition officials, the logo has been repeatedly altered by product manufacturers and others since it was first introduced and, as a result, multiple versions of the logo currently exist, each of which signifies a different meaning depending on its use. By updating the existing logo, the coalition hopes to, among other intentions, enhance consumer awareness that mail can be recycled. (Examples of selected “chasing arrows” recycling logos are depicted in fig. 2 of app. III.) Finally, in 2004, Time, Inc.; Verso Paper; and the National Recycling Coalition, among other parties, initiated the “Recycling Magazines is Excellent” project to inform consumers—primarily via advertisements in magazines—that catalogs and magazines are recyclable. These parties have piloted the project in five areas: Boston, Massachusetts; Prince George’s County, Maryland; Portland, Oregon; Milwaukee, Wisconsin; and New York City, New York. According to a Time, Inc., official, in Boston, Portland, and Prince George’s County, the program increased magazine recycling by 18 percent, 6 percent, and 19 percent, respectively. Data for the Milwaukee and New York City pilots were not available at the conclusion of our review. In addition to their efforts to increase mail-related recycling, the mailing industry and other stakeholders have undertaken a variety of key initiatives to increase the amount of mail with environmentally preferable attributes. As described below, DMA is responsible for several of these initiatives.
Other stakeholder initiatives, including those of individual mailers, environmental organizations, and other non-profit organizations are discussed in appendix IV of this report. DMA has undertaken several key initiatives to increase the amount of mail with environmentally preferable attributes, particularly with respect to improving its members’ mail targeting practices and the accuracy of their mailing lists. The first such effort, the Mail Preference Service, was introduced in 1971 and is a list of consumers who have requested not to receive (i.e., opt-out of) “prospecting mail” sent by DMA members. DMA requires its members to honor such requests and, consequently, forbids its members from selling or exchanging this list for any purpose other than removing prospective customers from their mailing lists. According to DMA officials, by eliminating prospecting mail, the Service (1) reduces the amount of mail a consumer receives by approximately 80 percent and (2) prevented 930 million pieces of unwanted mail from entering the mail stream in 2007. While DMA officials stated that the association works with consumer advocacy groups and other parties to inform consumers about the Mail Preference Service, a recent study conducted by Pitney Bowes indicates that two-thirds of Americans are not aware of the Service’s existence. In part to address this lack of awareness, several parties within the environmental advocacy industry recently implemented other opt-out programs. As noted previously, these programs are discussed in appendix IV of this report. More recently, DMA formed the Committee on Environment and Social Responsibility, which is comprised of 16 executives from DMA’s member organizations. Formed in 2005, the committee’s goals are to identify challenges that direct marketers face with respect to “social responsibility” issues, such as environmental sustainability and corporate citizenship issues, and to develop guidance to address these challenges. 
The committee designed and executed a survey to benchmark the environmental practices of its members and developed a Web-based tool to help members evaluate their environmental practices in five areas: (1) paper procurement and use, (2) address quality (accuracy) and data management, (3) design, (4) packaging and printing, and (5) recycling and pollution reduction. The tool also enables mailers to create an environmental vision statement or policy statement for, among other purposes, displaying on their Web sites. DMA officials could not supply data on the extent to which its members use this tool. In 2007, DMA also passed the “Resolution Asserting Environmental Leadership in the Direct Marketing Community.” The resolution calls on DMA members—by June 2008—to voluntarily establish internal measurements and benchmarks for assessing their business practices with respect to a list of 15 environmentally preferable practices. This list— called the “Green 15”—aligns with mailer business activities, such as paper procurement and use, mailing list accuracy, and mailpiece design. While the adoption of the 15 environmentally preferable practices is generally voluntary, in June 2008, DMA intends to establish goals and timetables for measuring its members’ success in implementing these practices, which, according to DMA officials, could lead to future DMA requirements. DMA officials stated that, thus far, members generally have reacted positively to the list of 15 preferable practices, although some members have expressed concerns about purchasing recycled and certified paper. Specifically, members expressed concerns that (1) the supply of these products may not be sufficient to meet demand if DMA were to require its members to use them; and (2) due to the number of forest certification programs and the controversy over the programs’ various merits, it is not clear which program they should use. (For more information on the Green 15, see app. V of this report.) 
Finally, in October 2007, DMA launched its “Commitment to Consumer Choice” program. Under this program, DMA members must, among other actions, include—on every direct mail solicitation they send—an option for consumers to opt-out of receiving future direct mail solicitations from that member, regardless of whether the member has previously established a business relationship with those customers. The new requirement, effective in October 2009, will strengthen DMA members’ current obligation to provide mail recipients with one opt-out notice per year. The Commitment to Consumer Choice program also includes several other requirements related to consumer choice. Some of these requirements are new or modified, while others are long-standing. For example, DMA members must (1) disclose, upon consumer request, the source from which they obtained data about the consumer; (2) eliminate, upon consumer request, the transfer or rental of the consumer’s personal information to other marketers; (3) increase the frequency with which they update their mailing lists against information in DMA’s Mail Preference Service opt-out database (from a quarterly to a monthly basis); and (4) act on all customer opt-out requests within 30 days and for a period of at least 3 years. According to DMA officials, DMA has an internal process for ensuring that members comply with its requirements. The process begins with DMA’s Corporate Responsibility group, which receives all customer complaints regarding the receipt of unwanted mail. If a pattern of complaints about a company emerges, DMA officials stated that the group would file a formal case before DMA’s Committee on Ethical Business Practices. If the offending mailer still refuses to comply with DMA requirements, DMA’s Board of Directors can, among other actions, expel the mailer from the association. According to DMA officials, however, member expulsions are rare. 
The officials explained that the association’s goal is self-correction, not punishment, and that mailers normally alter their practices to avoid expulsion from DMA. USPS, mailing industry, and other stakeholders we spoke to identified five opportunities that USPS could choose to undertake to increase its recycling of mail-related materials and to encourage mailers to increase the amount of mail with environmentally preferable attributes. The five opportunities cited most frequently were for USPS to (1) implement a program for recognizing mail-related recycling achievements; (2) increase awareness among mail recipients that mail is recyclable and encourage them to recycle their mail through, among other actions, collaboration with mailing industry and other stakeholder initiatives; (3) collaborate with parties interested in increasing the supply of paper fiber available for recycling; (4) establish a special, discounted postal rate—or Green Rate—as a means of inducing mailers to adopt one or more environmentally preferable attributes in their mailpieces; and (5) initiate a mail take-back program in locations that do not have access to municipal paper recycling. Each of these opportunities appears to be consistent with (1) the agency’s long-standing commitment to environmental leadership and (2) the Postmaster General’s recent commitments to both minimize the agency’s impact on every aspect of the environment and to act as a positive environmental influence in U.S. communities. Based on our analysis, however, USPS would need to assess factors such as costs to USPS; feasibility, including logistical considerations; and mission compatibility in deciding whether to adopt the opportunities. The stakeholders we interviewed identified five opportunities that USPS could choose to undertake to increase its recycling of mail-related materials or to encourage mailers to increase the amount of mail with environmentally preferable attributes.
First, several stakeholders stated that USPS could increase its mail-related recycling activities by offering recognition, financial awards, promotional opportunities, and other incentives to reward exemplary USPS recycling achievements. Three USPS officials stated that such incentives could target facility-level managers and employees, who are likely to be critical to the successful implementation of mail-related recycling programs. As discussed, USPS is considering establishing a program to nominate facilities, teams, and individuals for environmental excellence. As currently envisioned, the program would honor excellence in seven environmental categories, including “pollution prevention.” While USPS is contemplating recognition for “improvements in recycling processes or programs” as part of its achievements related to pollution prevention, the program under consideration does not specifically recognize mail-related recycling achievements. An incentives program targeted specifically toward such achievements could foster greater competition throughout USPS, resulting in substantial increases in the agency’s recycling revenue and significant savings in its waste disposal costs. Such a program could be based solely on recycling revenues, or include other metrics—such as the amount (tonnage) of materials recycled or its savings in waste disposal costs. Second, because mail recipients often are unaware that mail can be recycled, stakeholders suggested that USPS conduct a campaign to increase awareness among mail recipients that mail is recyclable and to encourage them to recycle their mail. Such a campaign could be collaborative in nature, unilateral, or undertaken through some combination of outreach efforts. For example, several stakeholders stated that USPS could collaborate with one or more of the ongoing mailing industry and other stakeholder initiatives to increase recycling awareness among mail recipients and to encourage them to recycle. 
Such an effort, among other matters, could (1) address common misconceptions related to the recyclability of various types of mail and (2) raise awareness about the primary causes of identity theft—two reasons why recipients may not recycle their mail. In addition, USPS could collaborate with members of the Greening the Mail Task Force to design and implement a plan to increase the public’s awareness in these and other areas. If USPS desired to do so, stakeholders suggested that the agency also could take unilateral action to promote mail recycling by, for example, delivering an informational post card or some other form of communication to each address in America. Such an approach would be similar to USPS’ actions to promote its various products and services nationwide, which, according to USPS officials, typically increase consumer awareness by nearly 30 percent. Stakeholders also suggested that USPS develop postmarks and stamps and install signage in postal lobbies to promote mail recycling. Third, numerous stakeholders suggested that USPS collaborate with parties, such as the American Forest and Paper Association and U.S. paper recycling companies, to increase the supply of fiber needed for manufacturing recycled paper products. According to these stakeholders, such fiber is typically in short supply domestically because it is usually exported to countries, such as China and India, which pay premium prices for the fiber. The stakeholders added that the constant supply of UAA mail available through USPS could be used to increase the domestic supply of recycled fiber. Such a supply increase could potentially decrease the cost of using recycled paper products which, in turn, could encourage their increased use. One way for USPS to undertake this opportunity is to collaborate with the American Forest and Paper Association, which, according to association representatives, is eager to increase mail-related recycling. 
Such a collaboration, they said, would contribute to the association’s goal of recovering (i.e., preventing landfill disposal or incineration) 55 percent of paper consumed in the United States by 2012. In addition, USPS could collaborate with members of its Greening the Mail Task Force to design and implement a plan to increase the supply of paper fiber available for recycling. USPS also could choose to explore, or expand, partnerships with local recyclers. One paper recycling company in New Jersey, for example, purchases and transports UAA mail from USPS facilities in Maryland, Pennsylvania, New Jersey, and elsewhere to manufacture recycled paper products, such as paper towels, toilet paper, facial tissue, and napkins. According to a company representative, because UAA mail is a critical feedstock for the company’s production methods, the company would like to increase its supply of UAA mail as long as the cost of transporting UAA mail to the company does not become prohibitive. Fourth, numerous stakeholders in the environmental advocacy industry suggested that USPS establish a special, discounted postal rate—or Green Rate—as a means of inducing mailers to adopt one or more environmentally preferable attributes in their mailpieces. According to these parties, for example, USPS could establish a special discount for mailers that use recycled and/or certified paper. Such a discount, they said, would help mailers offset the increased costs associated with using recycled paper and would provide an incentive for mailers to use certified paper. A Green Rate also could reward mailers who, among other practices, (1) use certain targeted marketing strategies, (2) can demonstrate measurable reductions in the amount of UAA mail they send, and (3) use mail materials efficiently. 
With respect to targeted marketing strategies, for example, a Green Rate could reward mailers who voluntarily participate in mail opt-out programs—such as the program offered by Catalog Choice—and can demonstrate that they honor mail recipients’ requests to be removed from their mailing lists. A Green Rate also could reward mailers who, over time, reduce the amount of UAA mail they send. Beginning in May 2009, USPS intends to use its Intelligent Mail barcodes to establish large mailers’ UAA mail rates (the baseline) and, over time, measure changes in the frequency of the mailers’ UAA mail. While large mailers will be required to use the barcodes to receive worksharing rates for their mailings, USPS also could choose to use these data to reward mailers who meet a specified target for UAA mail reductions. In addition, if USPS chose to do so, it could reward mailers according to a “sliding scale,” whereby mailers would receive larger discounts for greater UAA mail reductions. A final example of practices that could be considered for a Green Rate is the use of two-way reusable envelopes and other mailpieces that use materials efficiently. Finally, numerous stakeholders suggested that USPS, using its existing transportation network, initiate a mail take-back program to facilitate the recycling of discarded and unwanted mail in rural, sparsely populated areas that do not have access to municipal paper recycling. While the details of such a program would need to be developed, stakeholders suggested that USPS—possibly, in collaboration with others—could supply mail recipients in these locations with pre-addressed packages to send their discarded mail either directly to a plant for recycling or, indirectly, to other facilities—including, possibly, USPS facilities—where the packages could be held for subsequent pickup and recycling. 
Conceptually, such a program resembles several existing take-back programs for used products—such as inkjet cartridges, digital cameras, and cellular phones—whereby the program sponsor (e.g., a manufacturer) supplies the consumer with a pre-paid and pre-addressed envelope for returning used products through the U.S. mail. Stakeholders noted that USPS receives revenue for returning the products under the existing take-back programs and, depending on how such a program is funded, also could receive revenue under a take-back program for mail. Each of the five stakeholder-identified opportunities appears to be consistent with (1) the agency’s long-standing commitment to environmental leadership and (2) the Postmaster General’s recently expressed commitment to minimize USPS’ impact on every aspect of the environment and to act as a positive environmental influence in U.S. communities. However, based on our analysis, USPS would need to assess factors such as cost; feasibility, including logistical considerations; and mission compatibility in deciding whether to adopt the opportunities. Each of the five opportunities has overall cost considerations given the likely impact on staff and other resources that would be needed to, among other actions, develop plans, procedures, and agreements for implementing them. USPS also would need to identify the staff and offices responsible for successfully initiating and carrying out the opportunities, and provide training as appropriate. In addition, while the costs associated with implementing a program for recognizing mail-related recycling achievements are likely to be minimal (and more than offset by increases in USPS’ revenues), the remaining four opportunities necessitate additional cost consideration. For example, two of the opportunities—increasing awareness about mail recycling and initiating a Green Rate—appear to have little likelihood of increasing the agency’s revenue. 
Furthermore, the remaining two opportunities—collaborating with parties interested in increasing the supply of paper fiber available for recycling and initiating a mail take-back program—may not generate sufficient revenues to cover their costs. Depending on the magnitude of the difference between the expected costs and revenues, USPS may find implementing one or more of the opportunities unacceptable. This is, in part, because, as we recently testified, USPS faces multiple short- and long-term pressures in improving its operational efficiency, increasing its revenues, and controlling its costs—some of which are increasing faster than the overall inflation rate. In addition, unlike in the past, USPS is now subject to an inflation-based cap on the prices it can charge for its goods and services. Specifically, the 2006 Postal Accountability and Enhancement Act includes an annual limitation on the average percentage changes in rates for each market-dominant mail class—such as First-Class Mail and Standard Mail—which is linked to the change in the Consumer Price Index for All Urban Consumers. In addition to these overall cost considerations, three of the five opportunities have additional cost-related factors that USPS would need to assess prior to deciding whether to adopt them. First, while the cost of collaborating with other entities to increase recycling awareness among mail recipients and to encourage mail recipients to recycle their mail could be minimal, according to a USPS official, each of its recent nationwide promotional campaigns (which do not involve collaboration) cost USPS approximately $2.1 million. Such costs, however, may be overstated with respect to a recycling campaign because USPS could choose, in collaboration with others, to target only mail recipients in zip codes that do not have access to municipal paper recycling (about 14 percent of the U.S. population). 
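The cost figures cited above can be combined into a rough upper-bound estimate for a targeted campaign. The short sketch below simply scales the reported nationwide campaign cost by the share of the population without access to recycling; the proportional scaling is our simplifying assumption, not a USPS estimate:

```python
# Back-of-the-envelope estimate using the figures cited above.
# Assumption (ours, not USPS'): campaign cost scales roughly in
# proportion to the share of the population reached.
NATIONWIDE_CAMPAIGN_COST = 2_100_000  # ~$2.1 million per recent nationwide campaign
SHARE_WITHOUT_RECYCLING = 0.14        # ~14% of U.S. population lacks access

targeted_cost = NATIONWIDE_CAMPAIGN_COST * SHARE_WITHOUT_RECYCLING
print(f"Rough targeted-campaign cost: ${targeted_cost:,.0f}")
```

Under this assumption, a campaign limited to zip codes without municipal paper recycling would cost on the order of $294,000, roughly one-seventh of the nationwide figure.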
Furthermore, USPS could elect to “piggyback” a recycling awareness message on another of its promotional mailings, which, by itself, would result in little additional cost. However, according to USPS officials, such a promotional campaign would necessitate the use of recycled paper to remain consistent with the agency’s recycling awareness message. As previously stated, using recycled paper is more expensive than using virgin paper and, thus, would increase the cost of such a campaign. Second, because establishing a special discount to induce mailers to adopt more environmentally preferable business practices would—absent other actions—reduce USPS revenues, USPS would have to assess the overall effects of such a discount on its financial position. In addition, USPS would need to assess the specific cost implications associated with each environmentally preferable attribute it chooses to include in a Green Rate. For example, if USPS were to consider allowing mailers who, among other attributes, use certain mail targeting strategies to qualify for a Green Rate (e.g., participation in voluntary opt-out programs), USPS would need to assess whether, and to what extent, doing so would reduce its revenues. USPS also would need to assess the costs associated with, among its other activities, defining a Green Rate (i.e., determining which environmentally preferable attributes mailers must use to qualify for the discount) and, to avoid potential abuse, ensuring that the mailpieces presented by mailers as “green” actually qualify for the discount. Finally, initiating a mail take-back program in locations that do not have access to municipal paper recycling could greatly increase USPS’ costs and workload. 
The extent of these increases would depend on a variety of factors, including (1) the volume of additional mail generated by the program; (2) the characteristics, including the dimensions and weight, of the take-back packages that would require processing and delivery; (3) whether the packages would need to be manually processed; (4) the frequency with which each package needs to be handled; and (5) the distance the packages need to be transported. First, depending on the rate of program participation, the volume of mail requiring USPS processing and delivery could increase substantially. In addition, because the intent of such a program is for a mail recipient to combine all of the mail they discard during a given time frame into a single package, the package would greatly exceed the weight of typical mailpieces received by the recipient. Furthermore, because communities that do not have access to paper recycling are typically in rural, sparsely populated areas, the increased volume of larger and heavier mailpieces probably would travel long distances before reaching their final destination, thereby increasing USPS’ transportation costs throughout the journey. The volume, weight, and size of these packages also could overwhelm USPS’ service capacity in certain rural locations. Rural postal delivery service is typically carried out by USPS letter carriers using privately owned vehicles that may not be capable of accommodating the increased volume, weight, and size of the take-back packages. Thus, USPS may incur costs for additional vehicles or changes in its operational arrangements with its rural postal carriers. Finally, the packages mailed by recipients would not be presorted and, depending on how the program is implemented, may not be barcoded—two factors that would require more costly manual processing before delivery. 
Four of the five stakeholder-identified opportunities also have issues related to their feasibility, which USPS would need to assess prior to their adoption. For example, if USPS chose to implement a program for recognizing mail-related recycling achievements, such as an incentive program for facility-level managers and employees, it would first need to collect the data needed to do so. The two existing sources for USPS recycling data—the agency’s evaluation tool for its areas and the budgetary line-item for its districts—do not include facility-level data. Furthermore, collecting these data may not be feasible due to staffing constraints and the large number (about 37,000, according to USPS) of postal facilities nationwide. In light of this feasibility limitation, and given USPS’ goal of earning $40 million in recycling revenue in fiscal year 2010 from its districts’ efforts (approximately $500,000 per district), the agency could instead focus on recognizing the significant achievements of its district managers and employees. A district-level incentives program, however, has its own feasibility constraints. For example, to help ensure equity in such an incentives program, USPS would need to resolve several factors related to the program’s successful implementation. Specifically, USPS likely would need to make adjustments for large, regional variations in the price paid for recyclable mail-related materials. If USPS did not consider these variations, an incentives program based solely on revenue generated from mail-related recycling would seriously disadvantage certain districts. One possibility for resolving this issue may be to base an incentives program on other metrics, such as the total tonnage of material recycled or a district’s savings in waste disposal costs, either in lieu of, or in addition to, recycling revenues. 
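One way to picture such a multi-metric incentives program is as a simple district score in which revenue is normalized by a regional price index, so that districts facing low recyclable-paper prices are not penalized for their geography. The sketch below is purely illustrative; the weights, price indexes, and all dollar and tonnage figures are hypothetical, not USPS data:

```python
# Illustrative sketch of a price-adjusted, multi-metric district score.
# All weights and figures are hypothetical, not USPS data.

def district_score(revenue, tons_recycled, disposal_savings, price_index,
                   w_revenue=0.4, w_tons=0.3, w_savings=0.3):
    """Combine three metrics into one score. Dividing revenue by a regional
    price index rewards recycling effort rather than geography."""
    adjusted_revenue = revenue / price_index
    return (w_revenue * adjusted_revenue
            + w_tons * tons_recycled
            + w_savings * disposal_savings)

# Two districts recycling identical tonnage but facing different regional prices:
east = district_score(revenue=600_000, tons_recycled=5_000,
                      disposal_savings=120_000, price_index=1.2)
west = district_score(revenue=450_000, tons_recycled=5_000,
                      disposal_savings=120_000, price_index=0.9)
```

With these illustrative numbers the two districts earn essentially identical scores, even though the east district’s raw revenue is a third higher; an unadjusted, revenue-only ranking would have placed the west district well behind.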
However, such an action would introduce other issues related to the opportunity’s feasibility because, according to USPS officials, USPS does not currently require its organizations, including its districts, to (1) report their recycling tonnage or savings from waste disposal costs or (2) collect and report their recycling data using consistent methods. The Manager of USPS’ Environmental Policy and Programs organization told us that the agency intends to require its area managers to report information on their recycling tonnage, in addition to their recycling revenue and waste disposal costs, but at the conclusion of our review, USPS had not required these managers to do so. Second, if USPS chose to coordinate with parties, such as domestic recyclers, to increase the supply of fiber available for paper recycling, it would need to resolve a multitude of logistical considerations. For example, (1) Where will USPS store its mail-related materials for recycling? (2) Is sufficient storage available within USPS facilities? (3) Who will load the materials for delivery to the recycler? (4) Who will be responsible for transporting the materials and how will the deliveries be accomplished? and (5) Given space constraints, how often will the materials need to be transported, and to whom? Furthermore, to the extent that USPS facilities, vehicles, and other materials (e.g., crates and moveable carts) are used, the agency would need, among other actions, to develop a method for sharing its costs with recyclers and others who benefit directly from its efforts. If USPS were to undertake this opportunity with a goal of recycling domestically—as recommended by some stakeholders—USPS may also wish to explore arrangements to recoup a portion of any reduction in its revenues attributable to using domestic recyclers. 
If USPS did not use its staff or if sufficient and capable staff were not available to undertake these and other efforts, USPS could hire a third party to identify locations where this opportunity may be feasible to implement and, in such cases, act as USPS’ intermediary in addressing these and other logistical considerations. Similarly, if USPS chose to establish a Green Rate, the agency would need to resolve a wide range of issues related to the feasibility of doing so. For example, USPS would need to assess whether a discount for mailpieces with certain environmental attributes is the appropriate mechanism for promoting environmentally preferable business practices. If USPS were to proceed, it would need to define the parameters of the discount and, to avoid abuse, determine how it would ensure that mail presented as “green” actually qualifies for the discount. Several stakeholders expressed particular concern about the feasibility of enforcing a Green Rate, indicating that such a task would be administratively difficult and, possibly, impossible to accomplish. Finally, to determine whether a Green Rate is feasible, USPS would need to (1) assess the impact of such a discount on its net revenues and existing postal rates; (2) determine whether, and to what extent, a Green Rate would affect its market-dominant products, which are subject to an annual inflation-based price cap; and (3) if the establishment of the discount resulted in the need to raise postal rates, evaluate whether USPS would be able to raise rates in accordance with the requirements in the Postal Accountability and Enhancement Act. Finally, to initiate a mail take-back program, USPS would need to consider the logistics and overall feasibility of collecting and transporting the increased volumes of larger and heavier mailpieces through the mail stream. USPS, probably in collaboration with others, also would need to determine how the program would work—which likely would be a complex arrangement. 
For example, what classes of mail would the program cover? Who would supply the packages and postage needed to return the discarded mail? How would the appropriate postage be determined? Furthermore, where would the packages be sent, and to how many locations? In addition, to estimate its costs, USPS would need to develop, among other factors, a method for estimating (1) the number of mail recipients who would participate in the program and (2) the volume, size, and weight of their discarded mail take-back packages. While these considerations are numerous, the most serious question with respect to the program’s feasibility is “Who will pay the substantial costs associated with implementing the program?” USPS has three options to cover the costs of a mail take-back program. First, USPS could explore increases in its postage rates for the applicable classes of mail—an action that mailing industry stakeholders would likely strongly oppose. Furthermore, it may not be feasible to raise these rates because of a limitation specified in the Postal Accountability and Enhancement Act. Second, while USPS could require mail recipients to pay the postage needed to cover the program’s costs, such an action, in our view, would greatly diminish or eliminate public participation. As we reported in 2006, one key to increasing the volume of materials recycled is to offer financial incentives. Thus, a mail take-back program that departs from this premise is, in our view, unlikely to succeed. Finally, USPS could solicit funding from other parties, such as communities, federal and non-profit organizations, businesses, environmental organizations, Congress, and other interested parties. While USPS could explore this option, it is unclear whether other parties would find it in their best interest to fund such a program. 
While each of these opportunities appears to be consistent with the agency’s environmental commitments, including the Postmaster General’s recent commitments to both minimize the agency’s impact on every aspect of the environment and to act as a positive environmental influence in U.S. communities, it is unclear to what extent USPS views actions to fulfill these commitments as being compatible with its mission and strategic goals. Similarly, it is not clear whether, or to what extent, USPS would be willing to sacrifice revenue and/or incur additional costs to further its environmental commitments. In addition, two of the five stakeholder-identified opportunities—the establishment of a mail take-back program and a Green Rate for mailpieces that incorporate a variety of environmentally preferable attributes—would raise additional mission compatibility considerations. First, several mailing industry stakeholders told us that they strongly oppose a Green Rate, in part, because of mission compatibility concerns. According to these stakeholders, unlike worksharing rates that reward mailers for reducing USPS’ costs, a Green Rate discount does not align with USPS’ primary mission of delivering the mail. In addition, they said that market forces and mailer preferences—not the establishment of a Green Rate—should determine whether mailers choose to include environmentally preferable attributes in their mailpieces. Depending on how USPS chose to define a Green Rate, some specific aspects of the definition could cause additional mission compatibility concerns. For example, if the use of certified paper were included in a Green Rate, USPS would lack the expertise needed to evaluate the relative merits of the various—and controversial—certification programs. 
Likewise, if a Green Rate incorporated certain targeted marketing strategies, USPS could be drawn into a contentious debate about the relative merits of various opt-out programs, including voluntary programs administered by Catalog Choice and others. Finally, as discussed previously, establishing a mail take-back program likely would result in significant increases in the volume, size, and weight of packages moving through the mail stream. Such increases could overburden USPS’ delivery networks and create delivery delays. Related to this, several stakeholders from USPS and the mailing industry told us that USPS’ mission is to deliver the mail in a timely fashion—not to help mail recipients recycle their discarded and unwanted mail. USPS, the mailing industry, and others have developed a wide range of initiatives that, over time, could alleviate some concerns related to the perceived negative impact of mail on the environment. Several of the initiatives also have the potential to improve USPS’ financial position—while also enhancing its environmental reputation in U.S. communities. However, it is not clear what level of trade-offs, including decreased revenue and/or increased costs, USPS would find acceptable to incur to further its commitments and reputation on environmental matters. While much is being done, USPS has numerous opportunities to enhance its mail-related recycling efforts. For example, while establishing goals should assist USPS in generating substantial additional mail-related recycling revenues, the agency has not established similar goals for reducing its costs associated with waste disposal. Consequently, its recycling goals do not reflect the full financial benefit attributable to mail-related recycling. 
In our view, revising USPS’ recycling goals to include savings from lower waste disposal costs or adopting additional goals that reflect the full financial benefit attributable to mail-related recycling would help focus USPS employees on the need to achieve greater cost reductions—consistent with one of USPS’ strategic goals. Related to this, while USPS intends to develop a plan to help it achieve its recycling goals, it is not clear whether this plan will (1) specify how progress toward its goals will be measured or (2) ensure that the data USPS will use to measure its progress are accurate, reliable, and collected using a consistent method. Furthermore, while USPS launched a pilot recycling program in New York City to, among other objectives, apply lessons learned to other postal facilities, it is unclear whether, and to what extent, USPS will require its facility managers to adopt these lessons where applicable, feasible, mission compatible, and appropriate in view of cost and other considerations. Finally, while these considerations are also applicable to the adoption of the five stakeholder-identified opportunities discussed in this report, each opportunity, at a minimum, provides thoughts and insights on activities that USPS might find—after careful analysis—beneficial to adopt. To increase USPS’ recycling of mail-related materials and increase the amount of mail with environmentally preferable attributes, we recommend that the Postmaster General direct the Manager of Environmental Policy and Programs and other parties, as appropriate, to take the following four actions: Revise the agency’s recycling goals to include savings from lower waste disposal costs or adopt additional goals that would reflect the full financial benefit attributable to mail-related recycling. 
Ensure that the mail-related recycling plan it develops specifies, among other matters, how USPS will (1) measure progress toward its goals and (2) ensure that the data it uses for these measurements are accurate, reliable, and collected using a consistent method. After completion of the New York City pilot, require facility managers at other facilities to adopt lessons learned, where applicable, feasible, mission compatible, and appropriate in view of cost and other considerations. Assess the environmental benefits of the mail-related recycling opportunities identified by stakeholders in this report, and any others, and adopt those opportunities that are feasible, compatible with USPS’ mission, and appropriate in view of cost and other considerations. USPS provided its written comments on a draft of this report by letter dated May 2, 2008. These comments are summarized below and are included, in their entirety, as appendix VI to this report. USPS agreed with three of our four recommendations and stated that it had begun initiating actions to implement them. USPS also agreed, in principle, with our remaining recommendation to adopt applicable lessons learned from its New York City recycling pilot nationwide, where feasible, mission compatible, and appropriate in view of cost and other considerations. USPS stated, however, that it cannot require all of its facility managers to adopt these lessons since “not all lessons learned are applicable nationwide.” We recognize that the pilot’s lessons will not be applicable at every postal facility and, thus, clarified the recommendation to avoid confusion. We are sending this report to the congressional requestors and their staffs. We are also sending copies to the Postmaster General and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staffs have any questions regarding this report, please contact me at siggerudk@gao.gov or by telephone at (202) 512-2834. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix VII. Our objectives were to (1) describe the key, recent recycling accomplishments (initiatives) of USPS, the mailing industry, and other stakeholders and (2) identify additional recycling opportunities that USPS could choose to engage in (or influence mailers to undertake), including the factors that USPS would need to assess prior to adopting the opportunities. Such factors include costs to USPS; feasibility, such as logistical considerations; and compatibility with USPS’ mission. To address our overall reporting objectives, we interviewed a wide range of USPS officials as well as representatives from about 40 organizations (stakeholders) that have expertise in mail and paper recycling issues. For expertise in mail, we contacted numerous major mailing industry associations that encompass the three major classes of mail that contain advertisements—First-Class, Standard, and Periodical. For expertise on environmental matters, we contacted organizations with positions on a wide range of environmental matters, including the waste generated from mail. For expertise in paper recycling issues, we contacted organizations that, among other matters, have an interest in increasing the amount of paper fiber available for recycling. During our interviews, we requested contact information for other relevant stakeholders and, as appropriate, contacted representatives from those organizations who agreed to speak with us. The stakeholders we interviewed are listed in table 2. 
In addition, to describe recent USPS initiatives, we (1) reviewed and analyzed relevant documents related to the initiatives; (2) toured various facilities engaged in recycling activities, including USPS facilities in Baltimore and Philadelphia, a paper recycling facility, and a printing facility; and (3) attended meetings of the “Greening the Mail Task Force”—a committee of USPS, mailing industry, and other stakeholders whose mission is to identify and address environmental issues that relate to the mail. We also interviewed a wide range of officials in various USPS organizations, including Environmental Policy and Programs, Address Management, Product Development, Pricing and Classification, Marketing, Government Affairs, the Office of Inspector General, and the Memphis Category Management Center. In addition, we interviewed USPS staff involved in implementing the New York City pilot; facility managers and employees involved with recycling in Baltimore and Philadelphia facilities; and employees involved with the National Postal Forum and the Mailers’ Technical Advisory Committee, among others. To determine recent initiatives of the mailing industry and others, we interviewed each of the stakeholder organizations listed above and reviewed and analyzed relevant documents related to the initiatives they identified. We selected key, recent initiatives undertaken by USPS, the mailing industry, and others based on our professional judgment. To identify additional mail-related recycling opportunities that USPS could choose to undertake, we solicited the views of representatives from the aforementioned stakeholders. We specifically inquired about opportunities to increase the recycling of mail and the amount of mail with environmentally preferable attributes. We reported on those opportunities that stakeholders cited more than twice and that were not currently being addressed by an ongoing USPS initiative. 
Finally, using our professional judgment, we analyzed pertinent factors, such as cost; feasibility, including logistical considerations; and compatibility with USPS’ mission, that USPS would need to assess prior to adopting the opportunities. We discussed First-Class, Standard, and Periodical Mail with stakeholders; however, we primarily focused on Standard Mail because of (1) its increasing prominence in the mail stream; (2) its contribution to the municipal solid waste stream; (3) USPS’ responsibility for discarding large volumes of UAA Standard Mail; and (4) the issues critics cite related to Standard Mail, which are reflected in numerous “Do Not Mail” state legislative initiatives and a recent online petition for a national Do Not Mail Registry. While other studies measure the environmental impact of mail using different measurements (e.g., the carbon footprint attributable to mail), this report focuses on the role recycling plays in eliminating mail and mail-related materials from municipal solid waste, decreasing USPS’ waste disposal costs, increasing USPS’ revenue, and enhancing USPS’ commitment to environmental leadership. We conducted this performance audit from April 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the two multi-faceted initiatives discussed in the body of this report, USPS also has undertaken other actions to increase the amount of mail with environmentally preferable attributes. First, according to USPS officials, the agency uses 100 percent recycled paperboard in its Priority Mail and Express Mail packages and envelopes. 
In addition, these officials stated that the agency’s marketing materials, such as postcards and brochures, typically contain at least 10 percent recycled paper. Finally, USPS recently approved a change in its mailing standards that allows mailers to use reusable mailpieces, such as “two-way” envelopes. Such envelopes enable mail recipients to remove or cover the original delivery address in order to reveal a return address. Mailers that use two-way envelopes do not need to include a return envelope in their mailpieces, which reduces their paper use and costs. While several initiatives have been taken by the Direct Marketing Association (DMA), other stakeholders in the mailing industry and environmental advocacy organizations also have initiatives underway to increase the amount of mail with environmentally preferable attributes. For example, the National Postal Forum—a non-profit educational corporation—sponsors an annual postal event and trade show with the same name. This forum, among other goals, provides USPS and mailing industry attendees with training, education, and opportunities to communicate with USPS officials on matters related to the mail. In 2007, the forum included a series of workshops designed to educate mailers on USPS’ tools for improving the accuracy of their mailing lists and reducing UAA mail volumes. The May 2008 forum featured more workshops on improving the accuracy of mailing lists, including a series of “Xtremely Green” workshops to address the environmental implications of the mail and ways to effectively communicate to mail recipients about environmental issues. A second stakeholder, the Mailers’ Technical Advisory Committee, (1) shares technical information on matters of mutual interest related to mail-related products and services and (2) discusses ways to enhance the value of these products and services. The committee recently published two reports. 
The first report, issued in 2006, contained a list of best practices for accurately addressing mailpieces and recommended, among other matters, that mailers—prior to each mailing—update their mailing lists by using USPS’ tools for improving address accuracy. We were unable to ascertain the extent to which mailers have adopted the report’s recommendations. The second report, issued in 2007, (1) outlined a system by which USPS could certify the accuracy of mailing lists purchased by mailers and (2) described several scenarios in which such a system would reduce UAA mail. According to USPS, it intends to develop a list certification system by October 1, 2010. However, this time frame is contingent on the deployment of software upgrades related to its postal automated redirection system and the mailing industry’s implementation of Intelligent Mail barcodes, expected in May 2009. Third, several parties within the environmental advocacy industry have undertaken initiatives to decrease the amount of unwanted mail received by mail recipients. For example, “41 Pounds” and “GreenDimes”—a non-profit and for-profit organization, respectively—were established in 2006 to help mail recipients decline (i.e., opt-out of) many types of unwanted mail, including credit card and sweepstakes offers, insurance promotions, coupon booklets, and catalogs. These organizations accomplish this goal by, among other activities, (1) helping mail recipients register for DMA’s Mail Preference Service and (2) directly contacting non-DMA mailers—who are not required to use this service—to request that they not send mail solicitations to these mail recipients. 41 Pounds charges $41 for its services and, according to its Web site, donates one-third of this fee to community and environmental organizations.
GreenDimes charges $20 for its services and, as of mid-March 2008, planted five trees for each membership and an additional tree for every catalog that a member (mail recipient) declined (up to five additional trees). In addition, Catalog Choice—a non-profit program sponsored by the Ecology Center and endorsed by the National Wildlife Federation and the Natural Resources Defense Council—offers a free service that allows consumers to stop receiving unwanted catalogs. Consumers can search for catalogs on the Catalog Choice Web site and, after providing their address information, opt out of those they do not wish to receive. According to the organization’s Web site, nearly 700,000 people had registered for its service, opting out of over 9 million catalogs as of March 31, 2008. Fourth, some parties within the envelope industry also have undertaken initiatives to increase the amount of mail with environmentally preferable attributes. For example, one major envelope manufacturer enables its customers to customize their envelopes with an assortment of environmentally preferable attributes, including recycled and certified paper. A company official estimated that 80 percent of his company’s envelope sales include at least one environmentally preferable attribute. Another company designed reusable “two-way” envelopes that are made with at least 30 percent recycled paper. Fifth, some individual catalog mailers have undertaken their own efforts to incorporate environmentally preferable attributes in their mailpieces. In 2007, Forest Ethics—an environmental non-profit organization that, among other activities, encourages catalog companies to improve their environmental practices—surveyed the catalog industry and reported that nine major catalog mailers had attained its highest environmental rating by, among other actions, reducing the quantity of paper they use and using certified and recycled paper.
Another catalog mailer partnered with Environmental Defense in 2001 and began using paper with 10 percent recycled content. This company also offers mail recipients a “frequency opt-out” option that enables them to choose how often they wish to receive the company’s catalog. Finally, to increase the prevalence of environmentally preferable attributes in magazines, a non-profit organization called Co-op America created the “Magazine PAPER Project.” The project helps magazines change their business practices to better protect the environment by, among other actions, educating magazine publishers about the environmental consequences associated with the paper they use and working with publishers to help them adopt environmentally preferable practices, such as the use of recycled or certified paper. A representative of Co-op America estimated that the percentage of magazines using recycled paper is extremely low. According to the representative, convincing magazine officials to use recycled or certified paper is difficult, due, in part, to (1) confusion regarding the environmental benefits of using these products, (2) the higher cost of recycled paper, (3) the limited availability of recycled paper, and (4) the perception among many magazine companies that using recycled paper will adversely affect the appearance of their products. As discussed in the body of this report, DMA developed a list of 15 environmentally preferable business practices—the “Green 15.” While adoption of these 15 practices is mostly voluntary, in June 2008, DMA intends to establish goals and timetables for measuring its members’ success in implementing these practices, which, according to DMA officials, could lead to future DMA requirements. Overall, the Green 15 consists of five categories of mailer business practices: paper procurement and use, mailing list accuracy and data management, mail design and production, packaging, and recycling and pollution reduction.
A description of the specific practices related to each of the five overall business practices follows:

Paper procurement and use – Mailers should:
1. Encourage paper suppliers to increase their wood purchases from recognized forest certification programs;
2. Require paper suppliers to commit to implementing sustainable forestry practices that (a) protect forest ecosystems and biodiversity and (b) provide wood and paper products that meet industry needs;
3. Ask paper suppliers where their paper comes from before purchasing it, with the intent of avoiding paper from unsustainable or illegally managed forests;
4. Require paper suppliers to document that they do not produce or sell paper from illegally harvested or stolen wood; and
5. Evaluate the paper used for advertising, product packaging, and internal consumption in order to identify opportunities to increase their environmentally preferable attributes.

Mailing list accuracy and data management – Mailers should:
6. Comply with DMA guidelines for list management, such as:
a. maintaining lists of consumers who do not wish to receive their solicitations;
b. updating their mailing lists against the Mail Preference Service database; and
c. including—on every direct mail solicitation they send—an option for mail recipients to opt-out of receiving all direct mail solicitations from that member, regardless of whether a business relationship has been previously established;
7. Use tools developed by USPS or other parties to improve the accuracy of their mailing lists; and
8. Apply predictive targeted marketing models to reduce unwanted mail, where appropriate.

Mail design and production – Mailers should:
9. Review their direct mail solicitations and other printed marketing material to determine whether, for example, smaller or lighter designs (that use less paper) are appropriate; and
10. Test and use production methods that reduce waste.

Packaging – Mailers should:
11. Encourage their packaging suppliers to submit price quotes for environmentally preferable packaging alternatives, in addition to approved or existing packaging specifications.

Recycling and pollution reduction – Mailers should:
12. Purchase office paper and packaging materials that are made from recycled paper, where appropriate;
13. Integrate the use of electronic communications (e.g., e-mail, internet, and intranet) for both internal and external communications;
14. Ensure that all environmental labeling is clear, honest, and complete; and
15. Participate in DMA’s “Recycle Please” campaign and/or other recycling campaigns in order to demonstrate that their company or organization has a program to encourage recycling.

In addition to the individual named above, Kathleen Turner, Assistant Director; Samer Abbas; Kathleen Gilhooly; Jeff Jensen; Joshua Ormond; Daniel Paepke; Stephanie Purcell; and Erin Roosa made key contributions to this report.
In 2006, the U.S. Postal Service (USPS) discarded about 317,000 tons of undeliverable-as-addressed advertising mail. Such mail can be disposed of through incineration, landfills, or other methods. USPS recently committed to minimizing the agency's impact on every aspect of the environment. Recycling undeliverable advertising mail can help USPS achieve this commitment, while generating revenue and reducing its costs and financial pressures. In response to the 2006 Postal Accountability and Enhancement Act, this report addresses (1) recent mail-related recycling accomplishments (initiatives) undertaken by USPS, the mailing industry, and others and (2) additional recycling opportunities that USPS could choose to engage in, or influence mailers to undertake. To conduct this study, GAO analyzed relevant data and documents, visited USPS and other facilities, and interviewed about 40 stakeholders. USPS and the mailing industry have undertaken numerous initiatives to increase (1) the recycling of mail-related materials and (2) the amount of mail with environmentally preferable attributes, such as mail that uses recycled paper. USPS has five key recycling-related initiatives underway. For example, USPS recently established annual goals to increase its revenue from mail-related recycling from $7.5 million to $40 million from fiscal years 2007 to 2010. However, by excluding savings that result from lower waste disposal costs--which accompany increased recycling--the goals do not reflect the full financial benefit attributable to mail-related recycling. USPS also has launched a pilot recycling program in New York City, but it is not known whether USPS will require its managers elsewhere to adopt applicable "lessons learned" from the pilot. Representatives of the mailing industry and other stakeholders also have undertaken a wide range of initiatives to, among other actions, increase the amount of mail that is recycled.
For example, three mailing industry associations recently introduced separate awareness campaigns to encourage mail recipients to recycle their catalogs, envelopes, and magazines. In addition, the Direct Marketing Association--whose members collectively send about 80 percent of all Standard Mail--is undertaking several initiatives, including an effort to encourage mailers to use environmentally preferable mail attributes. USPS, mailing industry, and other stakeholders GAO interviewed identified five opportunities that USPS could choose to undertake to increase its recycling of mail-related materials and to encourage mailers to increase the amount of mail with environmentally preferable attributes. The five opportunities stakeholders cited most frequently were for USPS to: (1) implement a program for recognizing mail-related recycling achievements; (2) increase awareness among mail recipients that mail is recyclable and encourage them to recycle their mail; (3) collaborate with parties interested in increasing the supply of paper fiber available for recycling; (4) establish a special, discounted postal rate--or "Green Rate"--as a means of inducing mailers to adopt environmentally preferable attributes; and (5) initiate a "mail take-back" program in locations that do not have access to municipal paper recycling. Each of these opportunities appears to be consistent with the agency's long-standing commitment to environmental leadership and the Postmaster General's recent commitments to minimize the agency's impact on every aspect of the environment and to act as a positive environmental influence in U.S. communities. Based on GAO's analysis, however, USPS would need to assess several factors, including cost, feasibility (including logistical considerations), and mission compatibility, in deciding whether to adopt these opportunities.
For example, depending on the magnitude of variance between the expected costs and revenues, USPS may find implementing one or more of the opportunities unacceptable. This is, in part, because USPS faces multiple short- and long-term pressures in improving its operational efficiency, increasing its revenues, and controlling its costs--some of which are increasing faster than the overall inflation rate.
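The report's point that revenue-only goals understate the full financial benefit of recycling can be sketched with a short calculation. The $40 million figure below is the report's fiscal year 2010 revenue goal; the tonnage and per-ton disposal cost are hypothetical placeholders, not figures from the report.

```python
# Illustrative only: a revenue-only goal omits the waste disposal costs
# avoided by recycling. The $40 million revenue goal is from the report;
# tons_recycled and disposal_cost_per_ton are hypothetical assumptions.

def full_recycling_benefit(recycling_revenue, tons_recycled, disposal_cost_per_ton):
    """Full benefit = recycling revenue + waste disposal costs avoided."""
    avoided_disposal_costs = tons_recycled * disposal_cost_per_ton
    return recycling_revenue + avoided_disposal_costs

# Hypothetical scenario: 100,000 tons diverted from disposal at $50 per ton.
benefit = full_recycling_benefit(40_000_000, 100_000, 50)
print(f"${benefit:,}")  # $45,000,000: $5 million more than revenue alone
```

Under these assumed numbers, counting only revenue would understate the benefit by $5 million, which is the gap the report's finding describes.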
Our recent analyses of NRC programs identified several areas where NRC needs to take action to better fulfill its mission and made associated recommendations for improvement. With respect to NRC’s security mission, we found that the security of sealed radioactive sources and the physical security at nuclear power plants need to be strengthened. With respect to its public health and safety, and environmental missions, we found several shortcomings that need to be addressed. NRC’s analyses of plant owners’ contributions could be improved to better ensure that adequate funds are accumulating for the decommissioning of nuclear power plants. By contrast, we found that NRC is ensuring that requirements for liability insurance for nuclear power plants owned by limited liability companies are being met. Further, to ensure the safety of nuclear power plants, NRC must more aggressively and comprehensively resolve oversight issues related to the shutdown of the Davis-Besse plant. Finally, NRC’s methods of ensuring that power plants are effectively controlling spent nuclear fuel need to be improved. In August 2003, we reported on federal and state actions needed to improve security of sealed radioactive sources. Sealed radioactive sources, radioactive material encapsulated in stainless steel or other metal, are used worldwide in medicine, industry, and research. These sealed sources could be a threat to national security because terrorists could use them to make “dirty bombs.” We were asked, among other things, to determine the number of sealed sources in the United States. We found that the number of sealed sources in use today in the United States is unknown primarily because no state or federal agency tracks individual sealed sources. Instead, NRC and the agreement states track numbers of specific licensees.
NRC and the Department of Energy (DOE) have begun to examine options for developing a national tracking system, but to date, this effort has had limited involvement by the agreement states. NRC had difficulty locating owners of certain generally licensed devices it began tracking in April 2001, and has hired a private investigation firm to help locate them. Twenty-five of the 31 agreement states that responded to our survey indicated that they track some or all general licensees or generally licensed devices, and 17 were able to provide data on the number of generally licensed devices in their jurisdictions, totaling approximately 17,000 devices. GAO recommended that NRC (1) collaborate with states to determine the availability of the highest risk sealed sources, (2) determine if owners of certain devices should apply for licenses, (3) modify NRC’s licensing process so sealed sources cannot be purchased until NRC verifies their intended use, (4) ensure that NRC’s evaluation of federal and state programs assesses the security of sealed sources, and (5) determine how states can participate in implementing additional security measures. NRC disagreed with some of our findings. In September 2003, we reported that NRC’s oversight of security at commercial nuclear power plants needed to be strengthened. The September 11, 2001, terrorist attacks intensified the nation’s focus on national preparedness and homeland security. Among possible terrorist targets are the nation’s nuclear power plants which contain radioactive fuel and waste. NRC oversees plant security through an inspection program designed to verify the plants’ compliance with security requirements. As part of that program, NRC conducted annual security inspections of plants and force-on-force exercises to test plant security against a simulated terrorist attack. GAO was asked to review (1) the effectiveness of NRC’s security inspection program and (2) legal challenges affecting power plant security. 
At the time of our review, NRC was reevaluating its inspection program. We did not assess the adequacy of security at the individual plants; rather, our focus was on NRC’s oversight and regulation of plant security. We found that NRC had taken numerous actions to respond to the heightened risk of terrorist attack, including interacting with the Department of Homeland Security and issuing orders designed to increase security and improve defensive barriers at plants. However, three aspects of NRC’s security inspection program reduced the agency’s effectiveness in overseeing security at commercial nuclear power plants. First, NRC inspectors often used a process that minimized the significance of security problems found in annual inspections by classifying them as “non-cited violations” if the problem had not been identified frequently in the past or if the problem had no direct, immediate, adverse consequences at the time it was identified. Non-cited violations do not require a written response from the licensee and do not require NRC inspectors to verify that the problem has been corrected. For example, guards at one plant failed to physically search several individuals for metal objects after a walk-through detector and a hand-held scanner detected metal objects in their clothing. These individuals were then allowed unescorted access throughout the plant’s protected area. By extensively using non-cited violations for serious problems, NRC may overstate the level of security at a power plant and reduce the likelihood that needed improvements are made. Second, NRC did not have a routine, centralized process for collecting, analyzing, and disseminating security inspections data to identify problems that may be common to plants or to provide lessons learned in resolving security problems. Such a mechanism may help plants improve their security. 
Third, although NRC’s force-on-force exercises can demonstrate how well a nuclear plant might defend against a real-life threat, several weaknesses in how NRC conducted these exercises limited their usefulness. Weaknesses included (1) using more personnel to defend the plant during these exercises than during normal operations, (2) using attacking forces that are not trained in terrorist tactics, and (3) using unrealistic weapons (rubber guns) that do not simulate actual gunfire. Furthermore, at the time, NRC had made only limited use of some available improvements that would make force-on-force exercises more realistic and provide a more useful learning experience. Finally, we also found that even if NRC strengthens its inspection program, commercial nuclear power plants face legal challenges in ensuring plant security. First, federal law generally prohibits guards at these plants from using automatic weapons, although terrorists are likely to have them. As a result, guards at commercial nuclear power plants could be at a disadvantage in firepower if attacked. Second, state laws regarding the permissible use of deadly force and the authority to arrest and detain intruders vary, and guards were unsure about the extent of their authorities and might hesitate or fail to act if the plant is attacked. GAO made recommendations to promptly restore annual security inspections and revise force-on-force exercises. NRC disagreed with many of GAO’s findings, but did not comment on GAO’s recommendations. In September 2004, we testified on our preliminary observations regarding NRC’s efforts to improve security at nuclear power plants. The events of September 11, 2001, and the subsequent discovery of commercial nuclear power plants on a list of possible terrorist targets have focused considerable attention on plants’ capabilities to defend against a terrorist attack. NRC is responsible for regulating and overseeing security at commercial nuclear power plants.
We were asked to review (1) NRC’s efforts since September 11, 2001, to improve security at nuclear power plants, including actions NRC had taken to implement some of GAO’s September 2003 recommendations to improve security oversight, and (2) the extent to which NRC is in a position to assure itself and the public that the plants are protected against terrorist attacks. The testimony reflected the preliminary results of GAO’s review. We are currently performing a more comprehensive review in which we are examining (1) NRC’s development of its 2003 design basis threat (DBT), which establishes the maximum terrorist threat that commercial nuclear power plants must defend against, and (2) the security enhancements that plants have put in place in response to the design basis threat and related NRC requirements. We expect to issue a report on our findings later this year. In the earlier work, we found that NRC responded quickly and decisively to the September 11, 2001, terrorist attacks with multiple steps to enhance security at commercial nuclear power plants. NRC immediately advised plants to go to the highest level of security using the system in place at the time, and issued advisories and orders for plants to make certain enhancements, such as installing more physical barriers and augmenting security forces, which could be quickly completed to shore up security. According to NRC officials, their inspections found that plants complied with these advisories and orders. Later, in April 2003, NRC issued a new DBT and required the plants to develop and implement new security plans to address the new threat by October 2004. NRC is also improving its force-on-force exercises, as GAO recommended in its September 2003 report. While its efforts had enhanced security, NRC was not yet in a position to provide an independent determination that each plant had taken reasonable and appropriate steps to protect against the new DBT.
According to NRC officials, the facilities’ new security plans were on schedule to be implemented by October 2004. However, NRC’s review of the plans, which are not available to the general public for security reasons, had primarily been a paper review and was not detailed enough for NRC to determine if the plans would protect the facility against the threat presented in the DBT. In addition, NRC officials generally were not visiting the facilities to obtain site-specific information and assess the plans in terms of each facility’s design. NRC is largely relying on the force-on-force exercises it conducts to test the plans, but these exercises will not be conducted at all facilities for 3 years. We also found that NRC did not plan to make some improvements in its inspection program that GAO previously recommended. For example, NRC was not following up to verify that all violations of security requirements had been corrected, nor was the agency taking steps to make “lessons learned” from inspections available to other NRC regional offices and nuclear power plants. In October 2003, we reported that NRC needs to more effectively analyze whether nuclear power plant owners are adequately accumulating funds for decommissioning plants. Following the closure of a nuclear power plant, a significant radioactive waste hazard remains until the waste is removed and the plant site is decommissioned. In 1988, NRC began requiring owners to (1) certify that sufficient financial resources would be available when needed to decommission their nuclear power plants and (2) make specific financial provisions for decommissioning. In 1999, GAO reported that the combined value of the owners’ decommissioning funds was insufficient to ensure enough funds would be available for decommissioning. GAO was asked to update its 1999 report and to evaluate NRC’s analysis of the owners’ funds and the agency’s process for acting on reports that show insufficient funds.
We found that although the collective status of the owners’ decommissioning fund accounts had improved considerably since GAO’s last report, some individual owners were not on track to accumulate sufficient funds for decommissioning. Based on our analysis and using the most likely economic assumptions, we concluded that the combined value of nuclear power plant owners’ decommissioning fund accounts in 2000—about $26.9 billion—was about 47 percent greater than needed at that point to ensure that sufficient funds would be available to cover the approximately $33 billion in estimated decommissioning costs when the plants are permanently closed. This value contrasts with GAO’s prior finding that 1997 account balances were collectively 3 percent below what was needed. However, overall industry results can be misleading. Because funds are generally not transferable from funds that have more than sufficient reserves to those with insufficient reserves, each individual owner must ensure that enough funds are available for decommissioning their particular plants. We found that 33 owners with ownership interests in a total of 42 plants had accumulated fewer funds than needed through 2000 to be on track to pay for eventual decommissioning. In addition, 20 owners with ownership interests in a total of 31 plants recently contributed less to their trust funds than we estimated they needed in order to put them on track to meet their decommissioning obligations. NRC’s analysis of the owners’ 2001 biennial reports was not effective in identifying owners that might not be accumulating sufficient funds to cover their eventual decommissioning costs. In reviewing the 2001 reports, NRC reported that all owners appeared to be on track to have sufficient funds for decommissioning. In reaching this conclusion, NRC relied on the owners’ future plans for fully funding their decommissioning obligations.
However, based on the owners’ actual recent contributions, and using a different method, GAO found that several owners could be at risk of not meeting their financial obligations for decommissioning when these plants stop operating. In addition, for plants with more than one owner, NRC did not separately assess the status of each co-owner’s trust funds against each co-owner’s contractual obligation to fund decommissioning. Instead, NRC assessed whether the combined value of the trust funds for the plant as a whole was reasonable. Such an assessment for determining whether owners are accumulating sufficient funds can produce misleading results because owners with more than sufficient funds can appear to balance out owners with less than sufficient funds, even though funds are generally not transferable among owners. Furthermore, we found that NRC had not established criteria for taking action when it determines that an owner is not accumulating sufficient decommissioning funds. We recommended that NRC (1) develop an effective method for determining whether owners are accumulating decommissioning funds at sufficient rates and (2) establish criteria for taking action when it is determined that an owner is not accumulating sufficient funds. NRC disagreed with these recommendations, suggesting that its method is effective and that it is better to deal with unacceptable levels of financial assurance on a case-by-case basis. GAO continues to believe that limitations in NRC’s method reduce its effectiveness and that, without criteria, NRC might not be able to ensure owners are accumulating decommissioning funds at sufficient rates. In May 2004, we issued a report on NRC’s liability insurance requirements for nuclear power plants owned by limited liability companies. An accident at one of the nation’s commercial nuclear power plants could result in personal injury and property damage.
To ensure that funds would be available to settle liability claims in such cases, the Price-Anderson Act requires licensees of these plants to have primary insurance—currently $300 million per site. The act also requires secondary coverage in the form of retrospective premiums to be contributed by all licensees of nuclear power plants to cover claims that exceed primary insurance. If these premiums are needed, each licensee’s payments are limited to $10 million per year and $95.8 million in total for each of its plants. In recent years, limited liability companies have increasingly become licensees of nuclear power plants, raising concerns about whether these companies—which shield their parent corporations’ assets—will have the financial resources to pay their retrospective premiums. We were asked to determine (1) the extent to which limited liability companies are the licensees for U.S. commercial nuclear power plants, (2) NRC’s requirements and procedures for ensuring that licensees of nuclear power plants comply with the Price-Anderson Act’s liability requirements, and (3) whether and how these procedures differ for licensees that are limited liability companies. We found that of the 103 operating nuclear power plants, 31 were owned by 11 limited liability companies. Three energy corporations—Exelon, Entergy, and the Constellation Energy Group—were the parent companies for eight of these limited liability companies. These 8 subsidiaries were the licensees or co-licensees for 27 of the 31 plants. We also found that NRC requires all licensees for nuclear power plants to show proof that they have the primary and secondary insurance coverage mandated by the Price-Anderson Act. Licensees sign an agreement with NRC that requires the licensee to keep the insurance in effect.
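The retrospective premium caps just described can be sketched with a small calculation. The function below is a hypothetical illustration, not from the report; the $95.8 million per-plant total cap and $10 million annual cap are the report's figures, and the sketch assumes the annual cap applies per plant.

```python
# Sketch of the Price-Anderson retrospective premium caps described above.
# The $95.8 million per-plant total cap and $10 million annual cap come from
# the report; the function itself is a hypothetical illustration, and it
# assumes the $10 million annual cap applies to each plant separately.
import math

def retrospective_premium(num_plants, claim_shortfall):
    """Return (total owed, years to pay) for one licensee's plants."""
    per_plant_total_cap = 95.8e6   # total cap per plant
    per_plant_annual_cap = 10e6    # annual cap per plant
    total_owed = min(claim_shortfall, num_plants * per_plant_total_cap)
    years = math.ceil(total_owed / (num_plants * per_plant_annual_cap))
    return total_owed, years

# A licensee with 3 plants facing a $500 million claim shortfall owes at most
# 3 x $95.8M = $287.4M, paid over 10 years at 3 x $10M per year.
print(retrospective_premium(3, 500e6))
```

This cap structure is why the report flags limited liability licensees: the premiums can stretch over many years, so a licensee without access to its parent's assets might not sustain the payments.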
American Nuclear Insurers also has a contractual agreement with each of the licensees that obligates the licensee to pay the retrospective premiums to American Nuclear Insurers if these payments become necessary. A certified copy of this agreement, which is called a bond for payment of retrospective premiums, is provided to NRC as proof of secondary insurance. Finally, we found that NRC does not treat limited liability companies differently than other licensees with respect to the Price-Anderson Act’s insurance requirements. Like other licensees, limited liability companies must show proof of both primary and secondary insurance coverage. American Nuclear Insurers also requires limited liability companies to provide a letter of guarantee from their parent or other affiliated companies with sufficient assets to pay the retrospective premiums. These letters state that the parent or affiliated companies are responsible for paying the retrospective premiums if the limited liability company does not. American Nuclear Insurers informs NRC that it has received these letters. In May 2004, we also issued a report documenting the need for NRC to more aggressively and comprehensively resolve issues related to the shutdown of the Davis-Besse nuclear power plant. The most serious safety issue confronting the nation’s commercial nuclear power industry since Three Mile Island in 1979 was identified at the Davis-Besse plant in Ohio in March 2002. After NRC allowed Davis-Besse to delay shutting down to inspect its reactor vessel for cracked tubing, the plant found that leakage from these tubes had caused extensive corrosion on the vessel head—a vital barrier in preventing a radioactive release. GAO determined (1) why NRC did not identify and prevent the corrosion, (2) whether the process NRC used in deciding to delay the shutdown was credible, and (3) whether NRC is taking sufficient action in the wake of the incident to prevent similar problems from developing at other plants.
We found that NRC should have identified and prevented the corrosion at Davis-Besse but did not, because agency oversight did not produce accurate information on plant conditions. NRC inspectors were aware of indications of leaking tubes and corrosion; however, the inspectors did not recognize the importance of the indications and did not fully communicate information about them to other NRC staff. NRC also considered FirstEnergy—Davis-Besse’s owner—a good performer, which resulted in fewer NRC inspections and questions about plant conditions. NRC was aware of the potential for cracked tubes and corrosion at plants like Davis-Besse but did not view them as an immediate concern. Thus, despite being aware of the development of potential problems, NRC did not modify its inspection activities to identify such conditions. Additionally, NRC’s process for deciding to allow Davis-Besse to delay its shutdown lacked credibility. Because NRC had no guidance for making the specific decision of whether a plant should shut down, it instead used guidance for deciding whether a plant should be allowed to modify its operating license. However, NRC did not always follow this guidance and generally did not document how it applied the guidance. Furthermore, the risk estimate NRC used to help decide whether the plant should shut down was flawed and underestimated the risk that Davis-Besse posed. Finally, even though it understated the risk, the estimate still exceeded levels generally accepted by the agency. Nevertheless, Davis-Besse was allowed to delay the plant’s shutdown. After this incident, NRC took several significant actions to help prevent reactor vessel corrosion from recurring at nuclear power plants. For example, NRC has required more extensive vessel examinations and augmented inspector training. I would also like to note that, in April 2005, NRC proposed a $5.45 million fine against the licensee of the Davis-Besse plant. 
The principal violation was that the utility restarted and operated the plant in May 2000 without fully characterizing and eliminating leakage from the reactor vessel head. Additional violations included providing incomplete and inaccurate information to NRC on the extent of cleaning and inspecting the reactor vessel head in 2000. While NRC has not yet completed all of its planned actions, we remain concerned that NRC has no plans to address three systemic weaknesses underscored by the incident at Davis-Besse. Specifically, NRC has proposed no actions to help it better (1) identify early indications of deteriorating safety conditions at plants, (2) decide whether to shut down a plant, or (3) monitor actions taken in response to incidents at plants. Both NRC and GAO had previously identified problems in NRC programs that contributed to the Davis-Besse incident, yet these problems persisted. Because the nation’s nuclear power plants are aging, GAO recommended that NRC take more aggressive actions to mitigate the risk of serious safety problems occurring at Davis-Besse and other nuclear power plants. In April 2005, we issued a report outlining the need for NRC to do more to ensure that power plants are effectively controlling spent nuclear fuel. Spent nuclear fuel—the used fuel periodically removed from reactors in nuclear power plants—is no longer efficient enough to sustain a nuclear reaction, but it is intensely radioactive and continues to generate heat for thousands of years. Potential health and safety implications make the control of spent nuclear fuel of great importance. The discovery, in 2004, that spent fuel rods were missing at the Vermont Yankee plant in Vermont generated public concern and questions about NRC’s regulation and oversight of this material. 
GAO reviewed (1) plants’ performance in controlling and accounting for their spent nuclear fuel, (2) the effectiveness of NRC’s regulations and oversight of plants’ performance, and (3) NRC’s actions to respond to plants’ problems controlling their spent fuel. We found that nuclear power plants’ performance in controlling and accounting for their spent fuel has been uneven. Most recently, three plants—Vermont Yankee and Humboldt Bay (California) in 2004, and Millstone (Connecticut) in 2000—have reported missing spent fuel. Earlier, several other plants also had missing or unaccounted for spent fuel rods or rod fragments. NRC regulations require plants to maintain accurate records of their spent nuclear fuel and to conduct a physical inventory of the material at least once a year. The regulations, however, do not specify how physical inventories are to be conducted. As a result, plants implement the regulations differently. For example, physical inventories at plants varied from a comprehensive verification of the spent fuel to an office review of the records and paperwork for consistency. Additionally, NRC regulations do not specify how individual fuel rods or segments are to be tracked. As a result, plants employ various methods for storing and accounting for this material. Further, NRC stopped inspecting plants’ material control and accounting programs in 1988. According to NRC officials, there was no indication that inspections of these programs were needed until the event at Millstone. At the time of our review, NRC was collecting information on plants’ spent fuel programs to decide if it needs to revise its regulations and/or oversight. It had its inspectors collect basic information on all facilities’ programs. It also contracted with the Department of Energy’s Oak Ridge National Laboratory in Tennessee to review NRC’s material control and accounting programs for nuclear material. 
NRC is planning to request information from plants and plans to visit over a dozen plants for more detailed inspection. These efforts may not be completed until late 2005, over 5 years after the incident at Millstone that initiated NRC’s efforts. However, we believe NRC has already collected considerable information indicating problems or weaknesses in plants’ material control and accounting programs for spent fuel. GAO recommended that NRC (1) establish specific requirements for the way plants control and account for loose rods and fragments as well as conduct their physical inventories, and (2) develop and implement appropriate inspection procedures to verify plants’ compliance with the requirements. Based on our recent work at NRC, we have identified several cross-cutting challenges that NRC faces as it works to effectively regulate and oversee the nuclear power industry. First, NRC must manage the implementation of its risk-informed regulatory strategy across the agency’s operations. Second, and relatedly, NRC must strive to achieve the appropriate balance between more direct involvement in the operations of nuclear power plants and self-reliance and self-reporting on the part of plant operators to do the right things to ensure safety. Third, and finally, NRC must ensure that the agency effectively manages resources to implement its risk-informed strategy and achieve the appropriate regulatory balance in the current context of increasing regulatory and oversight demands as the industry’s interest in expansion grows. Nuclear power plants have many physical structures, systems, and components, and licensees have numerous activities under way, 24 hours a day, to ensure that plants operate safely. NRC relies on, among other things, the agency’s on-site resident inspectors to assess plant conditions and oversee quality assurance programs, such as maintenance and operations, established by operators to ensure safety at the plants. 
Monitoring, maintenance, and inspection programs are used to ensure quality assurance and safe operations. To carry out these programs, licensees typically prepare numerous reports describing conditions at plants that need to be addressed to ensure continued safe operations. Because of the significant number of activities and physical structures, systems, and components, NRC adopted a risk-informed strategy to focus inspections on those activities and pieces of equipment that are considered to be the most significant for protecting public health and safety. Under the risk-informed approach, some systems and activities that NRC considers to have relatively less safety significance receive little agency oversight. With its current resources, NRC can inspect only a relatively small sample of the numerous activities going on during complex plant operations. NRC has adopted a risk-informed approach because it believes that it can focus its regulatory resources on those areas of the plant that the agency considers the most important to safety. NRC has stated that adopting this approach was made possible by improvements in safety performance at plants resulting from more than 25 years of operating experience. Nevertheless, we believe that NRC faces a significant challenge in effectively implementing its risk-informed strategy, especially with regard to improving the quality of its risk information and identifying emerging technical issues and adjusting regulatory requirements before safety problems develop. The 2002 shutdown of the Davis-Besse plant illustrates this challenge, notably the shortcomings in NRC’s risk estimate and its failure to sufficiently address the boric acid corrosion and nozzle cracking issues. We also note that NRC’s Inspector General considers the development and implementation of a risk-informed regulatory oversight strategy to be one of the most serious management challenges facing NRC. 
Under the Atomic Energy Act of 1954, as amended, and the Energy Reorganization Act of 1974, as amended, NRC and the operators of nuclear power plants share the responsibility for ensuring that nuclear reactors are operated safely. NRC is responsible for issuing regulations, licensing and inspecting plants, and requiring action, as necessary, to protect public health and safety. Plant operators have the primary responsibility for safely operating their plants in accordance with their licenses. NRC has the authority to take actions, up to and including shutting down a plant, if licensing conditions are not being met and the plant poses an undue risk to public health and safety. NRC has sought to strike a balance between verifying plants’ compliance with requirements through inspections and affording licensees the opportunity to demonstrate that they are operating their plants safely. While NRC oversees processes, such as the use of performance measures and indicators, and requires that licensees maintain their own quality assurance programs, it relies, in effect, on licensees and trusts them to a large extent to make sure their plants are operated safely. While this approach has generally worked, we believe that NRC still has work to do to effectively position itself so that it can identify problems with diminishing performance at individual plants before they become serious. For example, incidents such as the 2002 discovery of the extensive reactor vessel head corrosion at the Davis-Besse plant and the unaccounted for spent nuclear fuel at several plants across the country raise questions about whether NRC is appropriately balancing agency involvement and self-monitoring by licensees. An important aspect of NRC’s ability to rely on licensees to maintain their own quality assurance programs is a mechanism to identify deteriorating performance at a plant before the plant becomes a problem. 
At Davis-Besse, NRC inspectors viewed the licensee as a good performer based on its past performance and did not ask the questions that should have been asked about plant conditions. Consequently, the inspectors did not make sure that the licensee adequately investigated the indications of the problem and did not fully communicate the indications to the regional office and NRC headquarters. Finally, Mr. Chairman, I would also like to comment briefly on NRC’s resources. While we have not assessed the adequacy of NRC’s resources, we have noted instances, such as the shutdown of the Davis-Besse plant, where resource constraints affected the agency’s oversight or delayed certain activities. NRC’s resources have been challenged by the need to enhance security at nuclear power plants after the September 11, 2001, terrorist attacks, and they will continue to be challenged as the nation’s fleet of nuclear power plants ages and the industry’s interest grows in both licensing and constructing new plants, and re-licensing and increasing the output of existing plants. Resource demands will also increase when the Department of Energy submits, for NRC review, an application to construct and operate a national repository for high-level radioactive waste, currently planned for Yucca Mountain, Nevada. We believe that it is important for NRC and the Congress to monitor agency resources as these demands arise in order to ensure that NRC can meet all of its regulatory and oversight responsibilities and fulfill its mission to ensure adequate protection of public health, safety, and the environment. In closing, we recognize and appreciate the complexities of NRC’s regulatory and oversight efforts required to ensure the safe and secure operation of the nation’s commercial nuclear power plants. As GAO’s recent work has demonstrated, NRC does a lot right, but it still has important work to do. 
Whether NRC carries out its regulatory and oversight responsibilities in an effective and credible manner will have a significant impact on the future direction of our nation’s use of nuclear power. Finally, we note that NRC has generally been responsive to our report findings. Although the agency does not always agree with our specific recommendations, it has continued to work to improve in the areas we have identified. It has implemented many of our recommendations and is working on others. For example, with respect to nuclear power plant security, NRC has restored its security inspection program and resumed its force-on-force exercises with a much higher level of intensity. It is also strengthening these exercises by conducting them at individual plants every 3 years rather than every 8 years, and is using laser equipment to reduce the exercises’ artificiality. Another example involves sealed radioactive sources. NRC is working with agreement states to develop a process for ensuring that high-risk radioactive sources cannot be obtained before verification that the materials will be used as intended. NRC anticipates that an NRC-agreement state working group will deliver a recommended approach to NRC senior management later this year. In addition, NRC continues to work on its broader challenges. For example, the agency intends to develop additional regulatory guidance to expand the application of risk-informed decision making, including addressing the need to establish quality requirements for risk information and specific instructions for documenting the decision making process and its conclusions. We will continue to track NRC’s progress in implementing our recommendations. In addition, as members of this subcommittee are aware, GAO has been asked to review the effectiveness of NRC’s activities for overseeing nuclear power plants, that is, its reactor oversight process. 
An important part of that work would be to review the agency’s risk-informed regulatory strategy and its effectiveness in identifying deteriorating plant performance as well as whether NRC is making progress toward effectively balancing agency inspections and self-monitoring by licensees. Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions that you or other Members of the subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841 (or at wellsj@gao.gov). John W. Delicath, Ilene Pollack, and Raymond H. Smith, Jr. made key contributions to this testimony.

Nuclear Waste: Preliminary Observations on the Quality Assurance Program at the Yucca Mountain Repository. GAO-03-826T. Washington, D.C.: May 28, 2003.

Nuclear Regulatory Commission: Revision of Fee Schedules; Fee Recovery for FY 2003. GAO-03-934R. Washington, D.C.: June 30, 2003.

Spent Nuclear Fuel: Options Exist to Further Enhance Security. GAO-03-426. Washington, D.C.: July 15, 2003.

Nuclear Security: Federal and State Action Needed to Improve Security of Sealed Radioactive Sources. GAO-03-804. Washington, D.C.: August 6, 2003.

Nuclear Regulatory Commission: Oversight of Security at Commercial Nuclear Power Plants Needs to Be Strengthened. GAO-03-752. Washington, D.C.: September 4, 2003.

Nuclear Regulation: NRC Needs More Effective Analysis to Ensure Accumulation of Funds to Decommission Nuclear Power Plants. GAO-04-32. Washington, D.C.: October 30, 2003.

Information Technology Management: Governmentwide Strategic Planning, Performance Measurement, and Investment Management Can Be Further Improved. GAO-04-49. Washington, D.C.: January 12, 2004.

Yucca Mountain: Persistent Quality Assurance Problems Could Delay Repository Licensing and Operation. GAO-04-460. Washington, D.C.: April 30, 2004. 
Nuclear Regulation: NRC Needs to More Aggressively and Comprehensively Resolve Issues Related to the Davis-Besse Nuclear Power Plant’s Shutdown. GAO-04-415. Washington, D.C.: May 17, 2004.

Nuclear Regulation: NRC’s Liability Insurance Requirements for Nuclear Power Plants Owned by Limited Liability Companies. GAO-04-654. Washington, D.C.: May 28, 2004.

Low-Level Radioactive Waste: Disposal Availability Adequate in the Short Term, but Oversight Needed to Identify Any Future Shortfalls. GAO-04-604. Washington, D.C.: June 10, 2004.

Nuclear Nonproliferation: DOE Needs to Take Action to Further Reduce the Use of Weapons-Usable Uranium in Civilian Research Reactors. GAO-04-807. Washington, D.C.: July 30, 2004.

Nuclear Regulatory Commission: Preliminary Observations on Efforts to Improve Security at Nuclear Power Plants. GAO-04-1064T. Washington, D.C.: September 14, 2004.

Low-Level Radioactive Waste: Future Waste Volumes and Disposal Options Are Uncertain. GAO-04-1097T. Washington, D.C.: September 30, 2004.

Nuclear Regulatory Commission: NRC Needs to Do More to Ensure that Power Plants Are Effectively Controlling Spent Nuclear Fuel. GAO-05-339. Washington, D.C.: April 8, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Nuclear Regulatory Commission (NRC) has the regulatory responsibility to, among other things, ensure that the nation's 103 commercial nuclear power plants are operated in a safe and secure manner. While the nuclear power industry's overall safety record has been good, safety issues periodically arise that threaten the credibility of NRC's regulation and oversight of the industry. Recent events make the importance of NRC's regulatory and oversight responsibilities readily apparent. The terrorist attacks on September 11, 2001, focused attention on the security of facilities such as commercial nuclear power plants, while safety concerns were heightened by the shutdown of the Davis-Besse nuclear power plant in Ohio in 2002 and the discovery of missing or unaccounted for spent nuclear fuel at three nuclear power plants. GAO has issued a total of 15 recent reports and testimonies on a wide range of NRC activities. This testimony (1) summarizes GAO's findings and associated recommendations for improving NRC mission-related activities and (2) presents several cross-cutting challenges NRC faces in being an effective and credible regulator of the nuclear power industry. GAO has documented many positive steps taken by NRC to advance the security and safety of the nation's nuclear power plants. It has also identified various actions that NRC needs to take to better carry out its mission. First, with respect to its security mission, GAO found that NRC needs to improve security measures for sealed sources of radioactive materials—radioactive material encapsulated in stainless steel or other metal used in medicine, industry, and research—which could be used to make a "dirty bomb." GAO also found that, although NRC was taking numerous actions to require nuclear power plants to enhance security, NRC needed to strengthen its oversight of security at the plants. 
Second, with respect to its public health and safety, and environmental missions, GAO found that NRC needs to conduct more effective analyses of plant owners' funding for decommissioning to ensure that the significant volume of radioactive waste remaining after the permanent closure of a plant is properly disposed of. Further, NRC needs to more aggressively and comprehensively resolve issues that led to the shutdown of the Davis-Besse nuclear power plant by improving its oversight of plant safety conditions. Finally, NRC needs to do more to ensure that power plants are effectively controlling spent nuclear fuel, including developing and implementing appropriate inspection procedures. GAO has identified several cross-cutting challenges affecting NRC's ability to effectively and credibly regulate the nuclear power industry. Recently, NRC has taken two overarching approaches to its regulatory and oversight responsibilities. These approaches are to (1) develop and implement a risk-informed regulatory strategy that targets the most important safety-related activities and (2) strike a balance between verifying plants' compliance with requirements through inspections and affording licensees the opportunity to demonstrate that they are operating their plants safely. NRC must overcome significant obstacles to fully implement its risk-informed regulatory strategy across agency operations, especially with regard to developing the ability to identify emerging technical issues and adjust regulatory requirements before safety problems develop. NRC also faces inherent challenges in achieving the appropriate balance between more direct oversight and industry self-compliance. Incidents such as the 2002 shutdown of the Davis-Besse plant and the unaccounted for spent nuclear fuel at several plants raise questions about whether NRC has the risk information that it needs and whether it is appropriately balancing agency involvement and licensee self-monitoring. 
Finally, GAO believes that NRC will face challenges managing its resources while meeting increasing regulatory and oversight demands. NRC's resources have already been stretched by the extensive effort to enhance security at plants in the wake of the September 11, 2001, terrorist attacks. Pressure on NRC's resources will continue as the nation's fleet of plants ages and the industry's interest in expansion grows, both in licensing and constructing new plants and in re-licensing and increasing the power output of existing ones.
The financial statements, including the accompanying notes, present fairly, in all material respects, in accordance with generally accepted accounting principles, the Resolution Trust Corporation’s assets, liabilities, and equity; revenues, expenses, and accumulated deficit; and cash flows. However, misstatements may nevertheless occur in other RTC-related financial information as a result of the internal control weakness described below. We evaluated RTC management’s assertion about the effectiveness of its internal controls designed to safeguard assets against loss from unauthorized acquisition, use, or disposition; assure the execution of transactions in accordance with management’s authority and with laws and regulations that have a direct and material effect on the financial statements; and properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets. RTC management fairly stated that those controls in place on December 31, 1995, provided reasonable assurance that losses, noncompliance, or misstatements material in relation to the financial statements would be prevented or detected on a timely basis. RTC management made this assertion, which is included in appendix II, based upon criteria established under the Federal Managers’ Financial Integrity Act of 1982 (FMFIA). RTC management, in making its assertion, recognized the need to improve internal controls. Our work also identified the need to improve internal controls, as described in the following section. The weakness in internal controls, although not considered a material weakness, represents a significant deficiency in the design or operation of internal controls which could have adversely affected RTC’s ability to fully meet the internal control objectives listed above. 
RTC acted during 1995 to resolve the reportable condition related to the weaknesses in general controls over some computerized information systems identified in our audit of its 1994 financial statements. However, as reported by RTC, many of those corrective actions were not completed until late in 1995. In addition, our audit of RTC’s 1995 financial statements identified additional weaknesses related to general controls over its computerized systems such that this reportable condition continued to exist. Because RTC relied on its computerized information systems extensively, both in its daily operations and in processing and reporting financial information, the effectiveness of general controls is a significant factor in ensuring the integrity and reliability of financial data. Because corrective actions for many of the general control weaknesses identified in our 1995 and 1994 audits were not implemented until late 1995 and early 1996, our audit found that general controls still did not provide adequate assurance that some of RTC’s data files and computer programs were fully protected from unauthorized access and modification. In response to the weaknesses we identified, RTC and FDIC developed action plans to address the weaknesses. Prior to the completion of our audit work on June 7, 1996, FDIC reported that most of the corrective actions had been implemented, with those remaining scheduled for implementation by September 30, 1996. We plan to evaluate the effectiveness of the corrective actions as part of our 1996 audit of FDIC. During 1995, RTC performed accounting and control procedures, such as reconciliations and manual comparisons, which would have detected material data integrity problems resulting from inadequate general controls. Without these procedures, weaknesses in the general controls would raise significant concern over the integrity of the information obtained from the affected systems. 
Other less significant matters involving the internal control structure and its operation noted during our audit will be communicated separately to FDIC’s management, which assumed responsibility for RTC’s remaining assets and liabilities upon RTC’s termination on December 31, 1995. Our tests for compliance with selected provisions of laws and regulations disclosed no instances of noncompliance that would be reportable under generally accepted government auditing standards. However, the objective of our audit was not to provide an opinion on overall compliance with laws and regulations. Accordingly, we do not express such an opinion. With the termination of RTC’s operations on December 31, 1995, a significant phase of the savings and loan crisis has ended. The following sections present a historical perspective on the savings and loan crisis and RTC’s role in resolving the crisis. Specifically, the information describes (1) background on the savings and loan crisis and the creation of RTC, (2) the completion of RTC’s mission, (3) RTC’s estimated costs and funding, (4) RTC’s controls over contracting, (5) the cost of resolving the savings and loan crisis, and (6) remaining fiscal implications of the crisis. During the 1980s, the savings and loan industry experienced severe financial losses because extremely high interest rates caused institutions to pay high rates on deposits and other funds while earning low yields on their long-term loan portfolios. During this period, regulators reduced capital standards and allowed the use of alternative accounting procedures to increase reported capital levels. While these conditions were occurring, institutions were allowed to diversify their investments into potentially more profitable, but risky, activities. The profitability of many of these activities depended heavily on continued inflation in real estate values to make them economically viable. 
In many cases, diversification was accompanied by inadequate internal controls and noncompliance with laws and regulations, thus further increasing the risk of these activities. As a result of these factors, many institutions experienced substantial losses on their loans and investments, a condition that was made worse by an economic downturn. Faced with increasing losses, the industry’s insurance fund, the Federal Savings and Loan Insurance Corporation (FSLIC), began incurring losses in 1984. By the end of 1987, 505 savings and loan institutions were insolvent. The industry’s deteriorating financial condition overwhelmed the insurance fund, which only 7 years earlier had reported insurance reserves of $6.5 billion. In 1987, the Congress responded by creating the Financing Corporation (FICO) to provide financing to the FSLIC through the issuance of bonds. Through August 8, 1989, FICO provided $7.5 billion in financing to the FSLIC; however, the insurance fund required far greater funding to deal with the industry’s problems. In response to the worsening savings and loan crisis, the Congress enacted the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) on August 9, 1989. FIRREA abolished FSLIC and transferred its assets, liabilities, and operations to the newly created FSLIC Resolution Fund (FRF) to be administered by the FDIC. In addition, FIRREA created a new insurance fund, the Savings Association Insurance Fund (SAIF). FIRREA also created the RTC to resolve all troubled institutions placed into conservatorship or receivership from January 1, 1989, through June 30, 1995. RTC’s overall responsibilities included managing and disposing of receivership assets and recovering taxpayer funds. In 1993, the Resolution Trust Corporation Completion Act required RTC to cease its operations on or before December 31, 1995, and transfer any remaining assets and liabilities to the FSLIC Resolution Fund. 
FIRREA provided RTC with a total of $50 billion in funding to resolve failed institutions and pay related expenses. FIRREA also established the Resolution Funding Corporation (REFCORP) to provide RTC with $30 billion of the $50 billion in funding through the issuance of bonds. However, funding provided to RTC by FIRREA was not sufficient and the Congress enacted subsequent legislation resulting in a total of $105 billion being made available to RTC to cover losses associated with resolutions. RTC closed 747 institutions with $402 billion in book value of assets when they entered the conservatorship phase. During conservatorship, assets were reduced by $162 billion to $240 billion through sales, collections, and other adjustments. In the receivership phase, assets were further reduced by $232 billion. Thus, at December 31, 1995, RTC assets in liquidation totaled approximately $8 billion. The remaining assets were transferred to the FSLIC Resolution Fund effective January 1, 1996. RTC also fulfilled the government’s pledge to insured depositors by protecting 25 million depositor accounts. Of the $277 billion in liabilities at resolution, approximately $221 billion represented liabilities to depositors. At resolution, RTC generally transferred the deposit liabilities, along with the required funding, to one or more healthy acquiring institutions. During the receivership phase, RTC used asset recoveries to pay the remaining creditors, and to recover a portion of the amount it advanced to cover deposit liabilities. Another important part of RTC’s activities included ensuring that as many thrift violators as possible were brought to justice and that funds were recovered on behalf of taxpayers. RTC investigated, initiated civil litigation, and made criminal referrals in cases involving former officers, directors, professionals, and others who played a role in the demise of failed institutions. 
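The asset disposition figures cited above reduce step by step; as a quick cross-check, the following sketch traces the book value of failed-institution assets through the conservatorship and receivership phases (amounts in billions of dollars, taken from this section):

```python
# Figures from this section, in billions of dollars.
book_value_at_conservatorship = 402  # assets of the 747 closed institutions
conservatorship_reduction = 162      # sales, collections, and other adjustments
receivership_reduction = 232         # further reductions during receivership

# Assets remaining after each phase.
after_conservatorship = book_value_at_conservatorship - conservatorship_reduction
assets_in_liquidation = after_conservatorship - receivership_reduction

print(after_conservatorship)  # 240 (assets entering the receivership phase)
print(assets_in_liquidation)  # 8 (approximate assets in liquidation at December 31, 1995)
```

The arithmetic confirms the roughly $8 billion in assets remaining at year-end 1995 that were transferred to the FSLIC Resolution Fund.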
Approximately $2.4 billion was recovered from professional liability claims, and $26 million was collected in criminal restitution. As of December 31, 1995, RTC estimated that the total cost for resolving the 747 failed institutions was $87.9 billion. These costs represent the difference between recoveries from receivership assets and the amounts advanced to pay depositors and other creditors of failed institutions plus the expenses associated with resolving institutions. As shown in table 1, $81.3 billion, or 92 percent, of RTC’s total estimated costs have already been realized through December 31, 1995, and therefore, are known. The estimated $6.6 billion remaining at December 31, 1995, represents expected future losses on remaining receivership and corporate assets. The ultimate recoveries on those assets are subject to uncertainties. Losses of $72.2 billion were realized while institutions were in receivership and after termination. Receivership losses were realized when proceeds from asset sales were not sufficient to repay the amounts advanced by RTC. For those institutions that were terminated, RTC realized further losses if it later sold assets for less than the price it paid when it purchased the assets from the receiverships at termination. RTC borrowed working capital funds from the Federal Financing Bank (FFB) to provide funding for insured deposits and to replace high-cost borrowing of the failed institutions. In general, these funds were expected to be repaid with the proceeds from receivership asset sales, with any shortfall being covered by loss funding. Through December 31, 1995, RTC incurred $10.2 billion in interest expense on amounts borrowed from the FFB for working capital. RTC’s administrative expenses represent overhead expenses not otherwise charged or billed back to receiverships. 
The portion of expenses billed back to receiverships is not included in RTC’s administrative expense total, but is included in the loss from receiverships. In addition, receiverships pay many other expenses directly which are also included in the losses from receiverships. The estimated $6.6 billion of future costs include expected losses from receiverships and terminations as well as estimated future administrative expenses. In total, the Congress provided funding to cover $105 billion of losses and expenses associated with RTC’s resolution of failed institutions. As shown in table 2, after reducing the $105 billion available by RTC’s estimated losses of $87.9 billion, an estimated $17.1 billion in unused loss funds will remain. The final amount of unused loss funds will not be known with certainty until all remaining assets and liabilities are liquidated. Loss funds not used for RTC resolution activity are available until December 31, 1997, for losses incurred by the SAIF, if the conditions set forth in the Resolution Trust Corporation Completion Act are met. Thereafter, according to the act, unused loss funds will be returned to the general fund of the Treasury. RTC used thousands of private contractors to manage and dispose of assets from failed thrifts, including activities such as collecting income and paying expenses. The estimated recoveries from receiverships included in RTC’s financial statements include the receipts collected and disbursements made by contractors that perform services for receiverships. As we previously reported, weak operating controls over contract issuance and contractor oversight may have affected the amounts RTC ultimately recovered from its receiverships. While we assess, as part of our financial statement audit, internal accounting controls over receivership receipts and disbursements, RTC’s operating controls over contract issuance and contractor oversight are not part of the scope of our audit. 
These operating controls were reviewed by RTC’s Inspector General and Office of Contract Oversight and Surveillance, as well as by GAO in other reviews. RTC took various actions to improve the process of contract issuance and contractor oversight, and placed increased emphasis on the process of closing out contracts to ensure that contractors have fulfilled all contractual responsibilities. However, results of audits conducted by RTC’s Inspector General and Office of Contract Oversight and Surveillance demonstrated that despite RTC’s actions to correct contracting problems, the effects of early neglect of contracting operations remained. These audits identified internal control problems with RTC’s auction contracts and with RTC’s general oversight of contractors. These audits also identified significant performance problems with contracts that were issued before many contracting reforms and improvements were implemented by RTC. During 1995, RTC closed many contracts, pursued contract audit resolution, identified contracts necessary to accomplish the remaining workload after RTC’s termination, and processed contract modifications to transfer them to FDIC. However, estimated future recoveries from RTC receiverships remain vulnerable to the risks associated with early weaknesses in contractor oversight and performance. As a result of these operating weaknesses, RTC could not be sure that it had recovered all it should have recovered from its receiverships. RTC’s costs for its responsibilities in resolving the savings and loan crisis represent only a portion of the total costs of the savings and loan crisis. The cost associated with FSLIC assistance and resolutions represents another sizable direct cost. In addition, the total cost includes indirect costs related to tax benefits granted in FSLIC assistance agreements. Of the $160.1 billion in total direct and indirect costs, approximately $132.1 billion, or 83 percent, was provided from taxpayer funding sources. 
The remaining $28.0 billion, or 17 percent, was provided from industry assessments and other private sources. (See Figure 1.) As shown in table 3, the direct costs associated with resolving the savings and loan crisis include the cost of RTC resolutions, FSLIC activity, and supervisory goodwill claims. All of the funding for the estimated $152.6 billion in costs related to FSLIC and RTC has been provided as of December 31, 1995. However, the cost of the claims is currently uncertain. RTC resolved 747 failed institutions through June 30, 1995, when its authority to close failed thrifts expired. As of December 31, 1995, the total estimated losses associated with RTC’s resolved institutions were $87.9 billion. Taxpayer funding for RTC’s direct costs is estimated to be $81.9 billion, which is made up of $56.6 billion in appropriations and $25.3 billion related to the government’s responsibility attributable to the REFCORP transaction. The private sources of funding for RTC activity totaled $6.0 billion, consisting of $1.2 billion contributed to RTC from the Federal Home Loan Banks, and $4.8 billion from SAIF and the Federal Home Loan Banks to support the REFCORP transaction. As of December 31, 1995, the total estimated costs associated with FSLIC activity were $64.7 billion. The estimated cost includes expenses and liabilities arising from FSLIC assistance provided to acquirers of failed or failing savings and loan institutions and FSLIC resolution activity since January 1, 1986. Taxpayer funding for FSLIC’s costs consists of appropriations used by the FSLIC Resolution Fund and totaled $42.7 billion. The private sources of funding for the FSLIC costs include $13.8 billion from FSLIC capital and industry assessments and $8.2 billion provided by FICO. 
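The cost and funding amounts reported above can be reconciled as an illustrative tally of the reported figures. The sketch below (billions of dollars) is not an official computation; it simply confirms that the taxpayer and private components sum to the totals stated in this report:

```python
# Direct costs (billions of dollars).
rtc_cost = 87.9     # RTC resolutions
fslic_cost = 64.7   # FSLIC assistance and resolution activity
direct_total = rtc_cost + fslic_cost            # 152.6

# Indirect cost: special tax benefits granted in FSLIC assistance agreements.
tax_benefits = 7.5
grand_total = direct_total + tax_benefits       # 160.1

# Taxpayer funding sources.
rtc_taxpayer = 56.6 + 25.3    # appropriations + government share of the REFCORP transaction
fslic_taxpayer = 42.7         # appropriations used by the FSLIC Resolution Fund
taxpayer_total = rtc_taxpayer + fslic_taxpayer + tax_benefits   # 132.1

# Private funding sources.
rtc_private = 1.2 + 4.8       # Federal Home Loan Banks + SAIF/FHLB support for REFCORP
fslic_private = 13.8 + 8.2    # FSLIC capital and industry assessments + FICO
private_total = rtc_private + fslic_private     # 28.0

# The two funding streams account for the full direct and indirect cost.
assert abs(taxpayer_total + private_total - grand_total) < 1e-9
print(round(taxpayer_total / grand_total * 100))  # 83 percent from taxpayer sources
```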
An additional cost of the savings and loan crisis results from the federal government’s legal exposure related to supervisory goodwill and other forbearances from regulatory capital requirements granted to the acquirers of troubled savings and loan institutions in the 1980s. As of December 31, 1995, there were approximately 120 pending lawsuits which stem from legislation that resulted in the elimination of supervisory goodwill and other forbearances from regulatory capital. These lawsuits assert various legal claims including breach of contract or an uncompensated taking of property resulting from the FIRREA provisions regarding minimum capital requirements for thrifts and limitations as to the use of supervisory goodwill to meet minimum capital requirements. One case has resulted in a final judgment of $6 million against FDIC, which was paid by FRF. On July 1, 1996, the United States Supreme Court concluded that the government is liable for damages in three other cases in which the changes in regulatory treatment required by FIRREA led the government to not honor its contractual obligations. However, because the lower courts had not determined the appropriate measure or amount of damages, the Supreme Court returned the cases to the Court of Federal Claims for further proceedings. Until the amounts of damages are determined by the court, the amount of additional cost from these three cases is uncertain. Further, with respect to the other pending cases, the outcome of each case and the amount of any possible damages will depend on the facts and circumstances, including the wording of agreements between thrift regulators and acquirers of troubled savings and loan institutions. Estimates of possible damages suggest that the additional costs associated with these claims may be in the billions. 
The Congressional Budget Office’s December 1995 update of its baseline budget projections increased its projection of future federal outlays for fiscal years 1997 through 2002 by $9 billion for possible payments of such claims. As shown in table 3, the estimated cost of special tax benefits related to FSLIC assistance agreements represents an indirect cost of the savings and loan crisis. The estimated total cost for these tax benefits is $7.5 billion, which will be funded using taxpayer sources. Acquiring institutions received various tax benefits associated with FSLIC assistance agreements. For instance, for tax purposes, assistance paid to an acquiring institution was considered nontaxable. In addition, in some cases, acquiring institutions could carry over certain losses and tax attributes of the acquired troubled institutions to reduce their own tax liability. The effect of these special tax benefits was to reduce the amount of FSLIC assistance payments required by an acquiring institution for a given transaction because of the value of tax benefits associated with the transaction. Thus, total assistance received by an acquiring institution consisted of both FSLIC payments and the value of these tax benefits. Because these tax benefits represented a reduction in general Treasury receipts rather than direct costs to FSLIC, we are presenting tax benefits as indirect costs associated with FSLIC’s assistance transactions. Of the $7.5 billion in estimated tax benefits, $3.1 billion has been realized through December 31, 1995. The remaining $4.4 billion represents an estimate of the tax benefits that could be realized by acquiring institutions in the future. However, the amount of future tax benefits depends greatly upon the future actions and profitability of the acquirers. 
For example, reduced or enhanced earnings, institutional acquisitions, and changes in corporate control would all affect acquirers’ taxable income or the amount of tax benefits allowed to offset such taxable income in the future. The current estimate of future tax benefits is based on assumptions which are currently deemed most likely to occur in the future. However, if conditions change, the amount of future estimated tax benefits realized could be substantially higher or lower than the estimated $4.4 billion. Although most of the direct and indirect costs of the savings and loan crisis had been funded or provided for through December 31, 1995, significant fiscal implications remain as a result of the crisis. Substantial funds were borrowed through bonds specifically designed to provide funding for a portion of the direct costs. Both taxpayers and the industry are paying financing costs on those bonds. In addition, a significant portion of direct costs were paid from appropriations at a time when the federal government was operating with a sizable budget deficit. Therefore, it is arguable that additional borrowing was incurred. In view of these circumstances, we are presenting information on the known and estimated interest expense associated with financing the crisis because the future stream of payments associated with interest will have continuing fiscal implications for taxpayers and the savings and loan industry. An additional fiscal implication is that SAIF is currently undercapitalized and the savings and loan industry continues to pay high insurance premiums to build the fund. In 1987, the Congress established FICO, which had the sole purpose of borrowing funds to provide financing to FSLIC. FICO provided funding for FSLIC-related costs by issuing $8.2 billion of noncallable, 30-year bonds to the public. In 1989, the Congress established REFCORP to borrow funds and provide funding to RTC. 
REFCORP provided funding to the RTC for resolution losses by issuing $30.0 billion of noncallable, 30- and 40-year bonds to the public. The annual interest expense on the $38.2 billion of bonds issued by FICO and REFCORP has had, and will continue to have, a significant impact on taxpayers and the savings and loan industry. The annual FICO bond interest is funded from the industry’s insurance premiums and represents an increasing burden on the savings and loan industry. In addition, the government’s portion of annual interest expense on the REFCORP bonds will continue to require the use of increasingly scarce budgetary resources. Annual interest on the FICO bonds is $793 million and is currently being paid from industry assessments and interest earnings on FICO’s cash balances. The annual interest obligation on the FICO bonds will continue through the maturity of the bonds in the years 2017 through 2019. The total nominal amount of interest expense over the life of the FICO bonds will be $23.8 billion. Annual interest expense on the REFCORP bonds is $2.6 billion. The Federal Home Loan Banks contribute $300 million annually to the payment of REFCORP interest expense, and the remaining $2.3 billion of annual interest expense is paid through appropriations. Annual interest expense will continue through the maturity of the REFCORP bonds in the years 2019, 2020, 2021, and 2030. The total nominal amount of interest expense over the life of the REFCORP bonds will be $88 billion. The largest source of funding to pay the direct costs of the savings and loan crisis was provided by taxpayers as a result of legislation enacted to specifically deal with the crisis. This legislation was enacted during a period in which the federal government was financing—via deficit spending—a sizable portion of its regular, ongoing program activities and operations. 
Under these circumstances, it is arguable that substantial, incremental Treasury borrowing occurred in order to finance the taxpayer portion of the crisis. To arrive at an amount for estimated future interest associated with appropriations, we made various simplifying assumptions. For purposes of estimating Treasury interest expense associated with resolving the savings and loan crisis, we assumed that the entire amount of appropriations used to pay direct costs was borrowed. Various other simplifying assumptions were made regarding interest rates and the financing period. We assumed that the $99.3 billion in appropriations for the FSLIC Resolution Fund and the RTC would be financed for 30 years at 7 percent interest, with no future refinancing. Under these assumptions, approximately $209 billion in estimated interest payments would be needed over 30 years to cover the interest expense related to appropriations used to cover the direct costs of the crisis. Table 4 presents the known and estimated interest expense components associated with the financing mechanisms used to provide funds for the direct costs of the savings and loan crisis. Significant resources will be needed in the future to pay the known annual interest expense on the FICO and REFCORP bonds as well as the estimated Treasury interest expense related to the crisis. As shown in table 5, $20.4 billion, or 18 percent, of the total nominal interest expense on FICO and REFCORP bonds has been paid through December 31, 1995. The remaining $91.4 billion, or 82 percent, will be funded in the future. Future interest expense of approximately $18 billion remains to be paid to cover the FICO bond interest. Currently, insurance premiums paid by certain SAIF-insured institutions are used to pay annual FICO bond interest expense. In 1995, the FICO interest expense represented about 69 percent of insurance premiums earned on SAIF’s FICO-assessable base. 
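The interest arithmetic above follows directly from the report's stated simplifying assumptions. The sketch below (billions of dollars) reproduces those calculations for illustration; the assumptions (simple, non-compounding interest on the full appropriated amount, no refinancing) are the report's own, not an official Treasury computation:

```python
# Estimated Treasury interest on appropriations used for direct costs, under the
# report's assumptions: the full $99.3 billion is borrowed for 30 years at
# 7 percent simple (non-compounding) interest, with no refinancing.
appropriations = 99.3
annual_interest = appropriations * 0.07          # about $6.95 billion per year
treasury_interest = annual_interest * 30         # about $208.5 billion, reported as ~$209 billion

# FICO bond interest: $793 million a year over the 30-year life of the bonds.
fico_total = 0.793 * 30                          # about $23.8 billion nominal

# REFCORP bond interest: $2.6 billion a year, of which the Federal Home Loan
# Banks contribute $300 million and appropriations cover the rest.
refcorp_appropriated_share = 2.6 - 0.3           # $2.3 billion a year

# Of the combined FICO and REFCORP nominal interest reported in table 5:
combined_nominal = 23.8 + 88.0                   # $111.8 billion
paid_share = 20.4 / combined_nominal             # about 18 percent paid through 1995
remaining_share = 91.4 / combined_nominal        # about 82 percent remaining
```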
In recent years, the FICO-assessable base has been shrinking, thereby increasing the burden of the FICO interest expense relative to the size of the assessment base, and calling into question the future ability of the FICO-assessable base to cover the annual FICO interest expense. Future interest expense of approximately $73.4 billion remains to be paid on the REFCORP bonds. The Federal Home Loan Banks will continue to be responsible for paying $300 million each year toward the cost of REFCORP interest expense until the bonds mature. The remaining portion of the REFCORP bond interest expense will be paid with Treasury funds until the bonds mature in the years 2019, 2020, 2021, and 2030. For purposes of analyzing the timing of estimated Treasury interest expense on funds provided to pay the direct costs, we estimated that approximately $176 billion of the $209 billion in estimated Treasury interest expense, shown in table 5, related to future periods. Under these assumptions, future estimated Treasury interest would represent a significant claim on future federal budgetary resources. FIRREA created SAIF to insure deposits previously insured by the FSLIC, and set a designated reserve requirement of 1.25 percent of insured deposits. We consider the need to capitalize SAIF a remaining fiscal implication of the crisis because insurance premiums that could have been used to capitalize SAIF were used to pay a portion of the direct costs of the crisis, as well as annual interest expense on the FICO bonds. As a result, SAIF’s capitalization has been delayed, creating ongoing implications in terms of high deposit insurance premiums. In order to be fully capitalized, SAIF would have needed $8.9 billion in reserves based on the level of insured deposits at December 31, 1995. However, at that date, SAIF had reserves of only $3.4 billion, $5.5 billion below the designated reserve amount of $8.9 billion. 
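The SAIF capitalization shortfall follows directly from the designated reserve ratio. The sketch below (billions of dollars) is an illustrative back-of-the-envelope check; the implied insured-deposit base is our inference from the reported figures, not an amount stated in the report:

```python
# SAIF designated reserve requirement set by FIRREA.
reserve_ratio = 0.0125          # 1.25 percent of insured deposits

required_reserves = 8.9         # billions, at the December 31, 1995, deposit level
actual_reserves = 3.4

shortfall = required_reserves - actual_reserves               # $5.5 billion below target
implied_insured_deposits = required_reserves / reserve_ratio  # roughly $712 billion (inferred)

print(f"Shortfall: ${shortfall:.1f} billion")
print(f"Implied insured deposits: ${implied_insured_deposits:.0f} billion")
```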
Management is responsible for preparing annual financial statements in conformity with generally accepted accounting principles; establishing, maintaining, and assessing the internal control structure to provide reasonable assurance that the broad control objectives of FMFIA are met; and complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the financial statements are free of material misstatement and presented fairly, in all material respects, in conformity with generally accepted accounting principles and (2) RTC management’s assertion about the effectiveness of internal controls is fairly stated in all material respects and is based upon the criteria established under FMFIA. We are also responsible for testing compliance with selected provisions of laws and regulations and for performing limited procedures with respect to certain other information appearing in the financial statements. In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the financial statements; assessed the accounting principles used and significant estimates made by management; evaluated the overall presentation of the financial statements; obtained an understanding of the internal control structure related to safeguarding assets, compliance with laws and regulations, including the execution of transactions in accordance with management authority, and financial reporting; tested relevant internal controls over safeguarding, compliance, and financial reporting and evaluated management’s assertion about the effectiveness of internal controls; and tested compliance with selected provisions of the following laws and regulations: section 21A of the Federal Home Loan Bank Act (12 U.S.C. 1441a) and the Chief Financial Officers Act of 1990, sections 305 and 306 (Public Law 101-576). 
We did not evaluate all internal controls relevant to operating objectives as broadly defined by FMFIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to those controls necessary to achieve the objectives outlined in our opinion on RTC management’s assertion about the effectiveness of internal controls. Because of inherent limitations in any internal control structure, losses, noncompliance, or misstatements may nevertheless occur and not be detected. We also caution that projecting our evaluation to future periods is subject to the risk that controls may become inadequate because of changes in conditions or that the degree of compliance with controls may deteriorate. With the termination of RTC on December 31, 1995, an important phase of the savings and loan crisis ended. To provide an historical perspective on RTC and its role in resolving the crisis, we obtained and reviewed background information and data from RTC and FDIC. In addition, we obtained and analyzed audited financial information from the following entities which had varying roles in resolving the savings and loan crisis: FSLIC, FICO, RTC, REFCORP, FSLIC Resolution Fund, and SAIF. We conducted our audit from July 7, 1995, through June 7, 1996, in accordance with generally accepted government auditing standards. FDIC provided written comments on a draft of this report because of its responsibility for RTC’s remaining assets and liabilities and its role in preparing RTC’s final financial statements. In FDIC’s comments, provided in appendix III, the Corporation’s Chief Financial Officer acknowledges the weaknesses in general controls over RTC’s computerized information systems and discusses the status of RTC and FDIC actions to correct them. We plan to evaluate the adequacy and effectiveness of those corrective actions as part of our audit of FDIC’s 1996 financial statements. 
The Chief Financial Officer’s comments also discuss FDIC’s involvement in RTC’s transition and FDIC’s plans in assuming responsibility for closing out RTC’s active and completed contracts. Accounting and Information Management Division, Washington, D.C.

Resolution Trust Corporation: Implementation of the Management Reforms in the RTC Completion Act (GAO/GGD-95-67, March 9, 1995)
Resolution Trust Corporation: Evaluations Needed to Identify the Most Effective Land Sales Methods (GAO/GGD-95-43, April 13, 1995)
1993 Thrift Resolutions: RTC’s Resolution Process Generally Adequate to Determine Least Costly Resolutions (GAO/GGD-95-119, May 15, 1996)
Resolution Trust Corporation: Management Improvements Reduce Risk But Transition Challenges Remain (GAO/T-GGD-95-163, May 16, 1995)
Resolution Trust Corporation: Management Improvements Reduce Risk But Transition Challenges Remain (GAO/T-GGD-95-188, June 20, 1995)
Inspectors General: Mandated Studies to Review Costly Bank and Thrift Failures (GAO/GGD-95-126, July 31, 1995)
Resolution Trust Corporation: Performing Assets Sold to Acquirers of Minority Thrifts (GAO/GGD-96-44, December 22, 1995)
Pursuant to a legislative requirement, GAO audited the Resolution Trust Corporation's (RTC) financial statements for the years ended December 31, 1995 and 1994. GAO also reviewed: (1) RTC internal control weaknesses; (2) RTC mission and its completion; (3) RTC costs and funding; and (4) the cost of resolving the savings and loan crisis. GAO found that: (1) RTC financial statements were reliable in all material respects; (2) although RTC internal controls need improvement, they were effective in safeguarding assets, ensuring that transactions were in accordance with management authority and material laws and regulations, and ensuring that there were no material misstatements; and (3) there was no material noncompliance with applicable laws and regulations. GAO also found that: (1) RTC essentially accomplished its mission of closing insolvent institutions, liquidating institution assets, insuring depositor accounts, and bringing many thrift violators to justice; (2) the estimated cost of RTC activities totaled $87.9 billion; (3) RTC contractor control weaknesses and performance problems could adversely affect receivership recoveries; (4) all of the $160.1 billion in estimated direct and indirect costs of RTC and Federal Savings and Loan Insurance Corporation activities have been provided for as of December 31, 1995; (5) the cost of present and future litigation resulting from the savings and loan crisis is unknown; (6) the annual interest expense on the $38.2 billion in Financing Corporation and Resolution Funding Corporation bonds will continue to significantly impact taxpayers and the savings and loan industry; and (7) because the Savings Association Insurance Fund has not been fully capitalized, deposit insurance premiums have remained high.
In 2002, the Secretary of Defense created MDA to develop an integrated system that would have the ability to intercept incoming missiles in all phases of their flight. In developing BMDS, MDA is using an incremental approach to field militarily useful capabilities as they become available. MDA plans to field capabilities in 2-year blocks. The configuration of a given block is intended to build on the work completed in previous blocks. For example, Block 2006 is intended to build on capabilities developed in Block 2004, and is scheduled to field capabilities during calendar years 2006–07. The integrated BMDS comprises various elements, three of which are intended to intercept threat missiles in their boost or ascent phase. Table 1 below describes each of these elements and shows the MDA projected dates for key decision points, initial capability, and tested operational capability. During the past year, Congress requested additional information and analyses on the boost and ascent phase elements from DOD. Specifically, House Report 109-119 on the Department of Defense Appropriations Bill for Fiscal Year 2006 directed the Secretary of Defense to conduct a study to review the early engagement of ballistic missiles to include boost and ascent phase intercepts and submit the report to the congressional defense committees. The report was to include, but not be limited to, an assessment of the operational capabilities of systems against ballistic missiles launched from North Korea or a location in the Middle East against the continental United States, Alaska, or Hawaii; an assessment of the quantity of operational assets required for deployment periods of 7 days, 30 days, 90 days, and 1 year; basing options; and an assessment of life-cycle costs to include research and development efforts, procurement, deployment, operating, and infrastructure costs. 
In addition, the National Defense Authorization Act for Fiscal Year 2006 required the Secretary of Defense to assess missile defense programs designed to provide capability against threat ballistic missiles in the boost/ascent phase of flight. The purpose of this assessment was to compare and contrast capabilities of those programs (if operational) to defeat ballistic missiles from North Korea or a location in the Middle East against the continental United States, Alaska, or Hawaii; and asset requirements and costs for those programs to become operational with the capabilities referred to above. MDA, on behalf of DOD, prepared one report to satisfy both of the above requirements and sent the report to all four defense committees on March 30, 2006. The report included technical, operational, and cost information for each of the three boost and ascent phase BMDS elements. The remainder of this report discusses our assessment of the MDA report and how DOD can build on this information to support future key decision points. MDA’s March 2006 report to Congress included some useful technical and operational information on boost and ascent phase capabilities. However, the information in the report has several limitations—such as not involving stakeholders in the analysis and not clearly explaining how assumptions affect results. Moving forward, DOD can enhance its ability to make informed decisions at future key decision points by including stakeholders DOD-wide in conducting analyses to provide complete technical and operational information. Otherwise, senior DOD and congressional decision makers may be limited in their ability to effectively assess the technical progress and operational effects of proceeding with one or more boost and ascent phase elements. The March 2006 report to Congress contained some useful technical and operational information for Congress. 
For example, the report included a detailed description of the three boost and ascent phase elements, which could be useful for those unfamiliar with these elements. Additionally, the report listed upcoming knowledge points where DOD will review the progress MDA has made toward developing each of the boost and ascent phase elements. Further, the report discussed geographic areas where boost and ascent phase elements could intercept missiles shortly after launch based on desired technical capabilities. Also, MDA used a model to assess the desired capabilities of each BMDS element for the March 2006 report to Congress. Further, the modeling environment was used for several past BMDS analyses and the results were benchmarked against other models. Finally, MDA performed a sensitivity analysis that compared how the results in the modeling changed when different assumptions for targets’ propellants, ascent times, hardness levels, and burn times were used. To provide context, the report explained that the boost and ascent phase elements are in the early stages of development and that the operational concepts are not yet mature. The information in the March 2006 report has several limitations because, contrary to relevant research standards, the analyses did not involve stakeholders and did not clearly explain modeling assumptions and their effects on results. The relevant research standards and our prior work have shown that coordination with stakeholders from study design through reporting, and clearly explained assumptions and their effects on results, can enable DOD officials to make fully informed program decisions. As a result, the March 2006 report presents an incomplete picture of technical capabilities, such as development challenges to be overcome in order to achieve desired performance, and it does not clearly explain the effects of operational assumptions, such as basing locations, asset quantities, and base support requirements. 
As a step in the right direction, MDA stated that it plans to develop criteria to assess the boost/ascent phase elements at major decision points in a process involving the combatant commands. Although MDA officials told us that they consult stakeholders in a variety of forums other than the March 2006 report, they did not clearly state whether or how the services or other DOD stakeholders would be involved in developing criteria for key decision points or the extent to which their analyses would include information on technical and operational issues. MDA’s analyses did not involve soliciting or using information from key DOD stakeholders such as the services, combatant commands, and joint staff from study design through reporting. For example, officials from the Office of the Secretary of Defense for Program Analysis and Evaluation and the Defense Intelligence Agency stated there were areas where additional information would have improved the fidelity of the results. First, the officials stated that there is uncertainty that the boost and ascent phase elements would achieve their desired capabilities within the timeframe stated in the report. Second, officials from both organizations stated that the report could have been enhanced by presenting different views of the type and capability of threats the United States could face and when these threats could realistically be expected to be used by adversaries. Third, officials from the Office of the Secretary of Defense for Program Analysis and Evaluation said that the MDA report did not distinguish between countermeasures that could be used in the near term and countermeasures that may be more difficult to implement. MDA officials said that they worked with the Office of the Secretary of Defense for Program Analysis and Evaluation in conducting analyses before they began work on the March 2006 report. 
MDA also stated that it discussed the draft March 2006 report with Office of the Secretary of Defense for Program Analysis and Evaluation officials and included some of their comments in the report’s final version. However, without communication with stakeholders from study design through reporting, MDA may not have had all potential inputs that could have affected how the type, capability, and likelihood of countermeasures to the boost and ascent phase elements were presented in its report. Additionally, MDA did not solicit information from the services, combatant commands, or Joint Staff regarding operational issues that could have affected information about basing and the quantities of elements that could be required to support operations. Although the elements have to be located in close proximity to their intended targets, and the report discusses placing the elements at specific forward overseas locations, the report does not include a basing analysis explaining what would need to be done to support operations at these locations. Specifically, the report did not include any discussion of the infrastructure or security/force protection that will be needed for the BMDS elements. Although the report mentions some support requirements—such as the Airborne Laser’s need for unique maintenance and support equipment and skilled personnel to maintain the laser—the report did not fully explain how these support requirements would be determined, who would provide or fund them, or the operational effect if this support is not provided. For instance, without an adequate forward operating location, the boost and ascent phase elements would have to operate from much farther away, which would significantly limit the time an element is in close proximity to potential targets. Developing such information with the services, Joint Staff, and combatant commands could provide a much more complete explanation of operational issues and challenges.
The services typically perform site analyses to ascertain what support is needed for a new weapon system at either a U.S. or overseas location. This comprehensive analysis examines a range of issues, from fire protection and security to infrastructure, roads, and airfields. In addition, U.S. Strategic Command and service officials told us that this type of support must be planned for in advance when adding a new system to any base, either in the United States or a forward location. MDA also did not involve stakeholders in assessing the quantities of each element for deployment periods of 7 days, 30 days, 90 days, and 1 year. The report stated that limited data exist at this time for a full assessment of this issue, and service, Joint Staff, and MDA officials acknowledged that the quantities of each element used in the report are MDA-assumed quantities. Service, Joint Staff, and U.S. Strategic Command officials stated that they have not completed analyses to assess quantities the warfighters may require. We understand that operational concepts will continue to evolve and could affect required quantities. However, stakeholders such as the services, Joint Staff, or combatant commands could have assisted MDA in assessing potential quantities required for various deployment periods. In addition, MDA did not solicit information from the services, Joint Staff, or combatant commands to determine if those organizations were conducting force structure analyses for the boost and ascent phase elements. We learned that the Navy had done a preliminary analysis in July 2005 and that the Joint Staff has begun a capabilities mix study; both include, in part, an analysis of quantities. Thus, in preparing for future decision points, MDA’s analysis could be strengthened by including stakeholders to leverage other analyses.
For example, MDA could have presented a range of scenarios to show how the quantities required to intercept adversary missiles could vary depending upon the number of sites covered and whether continuous, near-continuous, or sporadic coverage is provided. The March 2006 report to Congress did not clearly explain the assumptions used in the modeling of the BMDS elements’ capabilities and did not explain the effects those assumptions may have had on the results. First, the model inputs for the technical analysis assumed desired rather than demonstrated performance, and the report does not fully explain challenges in maturing technologies or how these performance predictions could change if the technologies are not developed as desired or assumed. For example, although the model MDA used is capable of showing different results based on different performance assumptions, the report did not explain how the number of successful intercepts may change if less than 100 percent of the desired technical capabilities are developed as envisioned. Thus the results represent the best expected outcome. Second, the report does not explain the current status of technical development or the challenges in maturing each element’s critical technologies as desired or assumed in the report. DOD best practices define Technology Readiness Levels on a scale of 1–9, and state which level should be reached to progress past specific program decision points. However, the March 2006 report does not explain the current Technology Readiness Level for any of the boost and ascent phase elements’ critical technologies or the extent to which the technology has to mature to attain the performance assumed in the report. For example, the report does not explain that some of the technologies for the Airborne Laser have to improve between 60 percent and 80 percent and the report does not discuss any of the challenges MDA faces in doing so. 
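The kind of performance excursion the report omitted can be illustrated with a short sketch. Every number below (shot counts, kill probabilities, maturity factors) is a hypothetical placeholder, not a value from the MDA report or the BMDS model; the sketch only shows the type of result the report could have presented by varying performance assumptions below 100 percent of desired capability.

```python
# Hypothetical illustration: how expected successful intercepts change
# if an element achieves less than 100 percent of its desired technical
# capability. All values are illustrative placeholders, not MDA data.

def expected_intercepts(shots: int, desired_pk: float, maturity: float) -> float:
    """Expected successful intercepts when the achieved single-shot kill
    probability is the desired value scaled by a technology-maturity factor."""
    achieved_pk = desired_pk * maturity
    return shots * achieved_pk

shots = 10          # hypothetical engagement opportunities
desired_pk = 0.9    # hypothetical desired single-shot kill probability

for maturity in (1.0, 0.8, 0.6):
    result = expected_intercepts(shots, desired_pk, maturity)
    print(f"maturity {maturity:.0%}: {result:.1f} expected intercepts")
```

Because, as noted above, the model MDA used is capable of showing different results based on different performance assumptions, excursions of this form would not have required new modeling capability.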
The March 2006 report to Congress provides cost estimates for each of the boost and ascent phase capabilities; however, the cost estimates in the report have several limitations that raise questions about their usefulness. We compared the report’s cost estimates with various DOD and GAO sources that describe key principles for developing accurate and reliable life-cycle cost estimates. Based on our analysis, we found that MDA did not include all cost categories, did not calculate costs based on warfighter quantities, and did not conduct a sensitivity analysis to assess the effects of cost drivers. Moreover, although MDA’s report acknowledges uncertainty in the cost estimates, the report does not fully disclose the limitations of the cost estimates. DOD can significantly improve the completeness of and confidence in cost estimates for boost and ascent phase capabilities as it prepares for future investment and budget decisions. For example, although DOD did not have its cost estimate for its March 2006 report independently verified because doing so would have taken several months, MDA officials agreed that independently verified cost estimates will be critical to support major decision points for boost and ascent phase capabilities. In addition, as these capabilities mature, MDA officials agreed that showing cost estimates over time and conducting uncertainty analyses will be needed to support key program and investment decisions. The cost estimates provided in the MDA report included some development, production, and operations/support costs for each boost and ascent phase element but were not fully developed or verified according to key principles for developing life-cycle cost estimates. Life-cycle costs are the total cost to the government for a program over its full life, including the costs of research and development, investment, operating and support, and disposal.
Based on our comparison of the life-cycle cost estimates in the report with key principles for developing life-cycle cost estimates, we found that the estimates were incomplete in several ways. First, the cost estimates did not include all cost categories, such as costs to establish and sustain operations at U.S. bases. Instead, MDA assumed that the elements would be placed at existing bases with sufficient base support, infrastructure and security; however, some of these costs such as infrastructure could be significant. For example, an MDA planning document cited about $87 million for infrastructure costs to support a ground-based BMDS element (Terminal High Altitude Area Defense). Army officials confirmed that training facilities, missile storage buildings, and a motor pool were built at a U.S. base specifically to support this element and it is likely that similar infrastructure would be needed to support the land-based Kinetic Energy Interceptor. Additionally, MDA’s cost estimates did not include costs to establish and sustain operations at forward overseas locations, even though the report states that the elements will have to be located in close proximity to their targets, and the operational concepts for Kinetic Energy Interceptor and Airborne Laser, although in early development, state that these elements will be operated from forward locations. Again, these are important factors to consider—the Airborne Laser operational concept and the MDA report acknowledge that unique support will be required to support operations at any forward location for the Airborne Laser such as chemical facilities, unique ground support equipment, and maintenance. Service, Joint Staff, and U.S. Strategic Command officials also said that these elements would have to be located forward and could be used as a strategic deterrent in peacetime. 
Second, the production and operating cost estimates were not based on warfighter quantities, that is, quantities of each element that the services and combatant commands may require to provide needed coverage of potential targets. MDA assumed a certain quantity of each element. For example, MDA officials told us that they assumed 96 Standard Missile-3 block 2A missiles because, at the time MDA prepared the report, they planned to buy 96 block 1A missiles developed to intercept short-range ballistic missiles. However, MDA did not solicit input from the services, Joint Staff, or combatant commands on whether they had done or begun analyses to determine element quantities. Third, MDA did not conduct a sensitivity analysis to identify the effects of cost drivers. A sensitivity analysis is a way to identify risk by demonstrating how the cost estimates would change in response to different values for specific cost drivers. Therefore, a sensitivity analysis should be performed when developing cost estimates, and the results should be documented and reported to decision makers. This means, for example, that MDA could have computed costs with and without significant categories of costs such as forward bases to identify the effect that adding forward bases would have on operating costs. The House Armed Services Committee report on the National Defense Authorization Bill for Fiscal Year 2006 recognized that operational capabilities and costs must be taken into account when making decisions on future funding support. Finally, the cost estimates did not estimate costs over time—a process known as time phasing—which can assist decision makers with budgetary decisions. The MDA report showed an annual cost estimate but did not state for how many years the development, production, and operating costs may be incurred. 
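The with-and-without computation described above is straightforward; the sketch below shows its shape. All dollar figures are hypothetical illustrations chosen for this example, not MDA estimates.

```python
# Hypothetical sensitivity analysis: recompute a cost estimate with and
# without a significant cost category (forward basing) to show its effect
# as a cost driver. All dollar figures are illustrative only.

base_estimate = {            # $ millions, hypothetical
    "development": 1200.0,
    "production": 800.0,
    "operations_support": 600.0,
}
forward_basing = 250.0       # hypothetical forward-basing cost, $ millions

def total(costs: dict, extra: float = 0.0) -> float:
    """Sum the cost categories, optionally adding an excursion category."""
    return sum(costs.values()) + extra

without = total(base_estimate)
with_fb = total(base_estimate, forward_basing)
print(f"without forward basing: ${without:,.0f}M")
print(f"with forward basing:    ${with_fb:,.0f}M "
      f"(+{with_fb / without - 1:.1%})")
```

Reporting both totals, and the percentage swing attributable to the driver, is what allows decision makers to see which assumptions dominate the estimate.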
Although MDA officials stated they did not prepare time-phased cost estimates in order to prepare the report to Congress in a timely manner, they agreed that showing cost estimates over time would be important information to support investment decisions at key decision points. Key principles for developing life-cycle cost estimates also include two steps for assessing the confidence of cost estimates. However, MDA did not take these steps to assess the confidence of the estimates reported in March 2006. First, the Missile Defense Agency did not conduct a risk analysis to assess the level of uncertainty for most of the cost estimates in the MDA report. Risk and uncertainty refer to the fact that, because a cost estimate is a prediction of the future, it is likely that the estimated cost will differ from the actual cost. It is useful to perform a risk analysis to quantify the degree of uncertainty in the estimates. By using standard computer simulation techniques, an overall level of uncertainty can be developed for cost estimates. Instead, MDA officials told us that they could only provide a judgmental confidence level for most of the cost estimates. Second, MDA did not have the cost estimates in the report verified by an independent organization such as DOD’s Cost Analysis Improvement Group because doing so would have taken several months. However, MDA officials agreed that independent verification of cost estimates would be important information to support investment decisions at key decision points. According to the key principles that we have identified, all life-cycle cost estimates should be independently verified to assure accuracy, completeness, and reliability. MDA has recognized the value in independently developed cost estimates.
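The standard simulation techniques referred to above are commonly Monte Carlo methods: each uncertain cost element is sampled many times from an assumed distribution, and the resulting percentiles quantify overall uncertainty. The sketch below assumes hypothetical triangular ranges; none of the figures come from MDA.

```python
import random

# Hypothetical Monte Carlo risk analysis: quantify uncertainty in a
# life-cycle cost estimate by sampling each cost element from an assumed
# (low, most likely, high) range. All figures are illustrative only.

random.seed(1)  # fixed seed so the sketch is repeatable

elements = {  # (low, most likely, high) in $ millions, hypothetical
    "development": (1000, 1200, 1700),
    "production": (600, 800, 1200),
    "operations_support": (450, 600, 1000),
}

def one_trial() -> float:
    """One sampled total cost across all elements."""
    return sum(random.triangular(lo, hi, mode)
               for lo, mode, hi in elements.values())

trials = sorted(one_trial() for _ in range(10_000))
p50 = trials[len(trials) // 2]          # median estimate
p80 = trials[int(len(trials) * 0.8)]    # 80th-percentile estimate
print(f"50th percentile: ${p50:,.0f}M")
print(f"80th percentile: ${p80:,.0f}M")
```

A distribution of outcomes like this replaces a single judgmental confidence level with a stated probability that actual costs will fall below a given figure.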
In 2003, MDA and the Cost Analysis Improvement Group developed a memorandum of understanding that said, in part, the Cost Analysis Improvement Group would develop independent cost estimates for the approved BMDS and its elements as appropriate during development in anticipation of transition to production, but MDA officials said that little work was completed under this agreement, which has expired. Developing complete cost estimates in which decision makers can have confidence is important since life-cycle cost estimates usually form the basis for investment decisions and annual budget requests. Specifically, life-cycle cost estimates that include all cost categories, show costs over time, include warfighter quantities, include an assessment of cost drivers, and are independently verified are important because accurate life-cycle cost estimates can be used in formulating funding requests contained in the President’s Budget and DOD’s future funding plan, the Future Years Defense Program (FYDP) submitted to Congress. Therefore, there is a need for DOD to provide transparent budget and cost planning information to Congress. In May 2006, GAO reported that the FYDP, a major source of budget and future funding plans, does not provide complete and transparent data on ballistic missile defense operational costs because the FYDP’s structure does not provide a way to identify and aggregate these costs. It is important that Congress has confidence in boost and ascent phase estimates because Congress has indicated that it is concerned with the affordability of pursuing both the Airborne Laser and Kinetic Energy Interceptor programs in parallel through 2008. As we reported in 2003, DOD assumes increased investment risk by not having information available for decision makers at the right time, and the level of anticipated spending magnifies this risk. 
Otherwise, senior DOD and congressional decision makers may be limited in their ability to assess the relative cost of the elements if all cost categories are not included and cost drivers are not identified. Considering competing demands, this could also limit Congress’s ability to consider investment decisions or evaluate whether continued expenditures are warranted. MDA officials stated that, in developing the cost estimates for the March 2006 report, they decided not to follow some of the key principles for developing life-cycle cost estimates such as time phasing and independent verification of the cost estimates in order to complete the report in a timely manner. However, the officials also agreed that these key principles are important in developing complete, accurate, and reliable life-cycle cost estimates for supporting investment decisions at key decision points. Therefore, in the future, when preparing cost estimates to be used in support of key decision points, MDA could provide decision makers with more complete, accurate, and reliable cost estimates by better adhering to key principles for developing life-cycle cost estimates. Our review of MDA’s March 2006 report on boost and ascent phase elements identified a number of limitations but also helps to illuminate the kind of information that DOD and congressional decision makers will need following upcoming tests for boost and ascent phase elements. We recognize that the March 2006 report was prepared in response to congressional direction rather than to support program decisions. We also recognize that, at the time of MDA’s report, these elements were early in their development and information was incomplete and changing. Thus, the focus of our analysis was to identify additional information that could enhance future program and investment decisions.
In particular, the House Armed Services Committee has raised questions about the affordability of pursuing both the Kinetic Energy Interceptor and the Airborne Laser in parallel through the projected knowledge point demonstrations, which are now scheduled for 2008 and 2009 respectively. It is important that these decisions be both well-informed and transparent because of the long-term funding consequences. DOD and congressional decision makers’ ability to assess which elements can be fully developed, integrated, and operated relative to the others will be enhanced if they have the benefit of information based on more rigorous analysis than that contained in MDA’s March 2006 report. Looking forward, as DOD strengthens its analyses to support future key decisions, DOD and congressional decision makers will be able to use more complete information to assess force structure, basing, support, and infrastructure requirements, as well as technical maturity, budget requests, and FYDP spending plans, in deciding whether or not to continue developing one, two, or all three boost and ascent phase elements and in what quantities. To provide decision makers with information that enables them to clearly understand the technical progress and operational implications of each boost and ascent phase element and make fully informed, fact-based, program decisions at future key decision points, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following actions to support key decision points for the BMDS boost and ascent phase elements: Include all DOD stakeholders (including services, combatant commands, Joint Staff) in developing and analyzing operational issues regarding what is needed to support operations at U.S. bases and potential forward locations, including basing assessments, force structure and quantity requirements, infrastructure, security/force protection, maintenance, and personnel. 
Provide specific information on the technical progress of each element. Specifically, the analysis should explain current versus desired technical maturity and capabilities of all major components and subsystems, use reasonable model inputs on element performance, and provide a clear explanation of assumptions and their effect on results. Use the results of these analyses at each key decision point. To provide decision makers with complete and reliable data on the costs of each boost/ascent phase BMDS element to enhance investment and budget decisions, we recommend that the Secretary of Defense take the following actions: Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to require MDA to prepare and—to support key decision points—periodically update a full life-cycle cost estimate for each boost/ascent phase element, in accordance with key principles for developing accurate and reliable life-cycle cost estimates, that includes all operational costs, including costs to establish and sustain operations at U.S. bases and forward locations, and that is based on warfighter quantities, includes sensitivity analyses, and reflects time phasing. Direct an independent group, such as the Cost Analysis Improvement Group, to prepare an independent life-cycle cost estimate for each capability at each key decision point. Direct MDA and the services to report independently verified life-cycle cost estimates along with budget requests and FYDP funding plans for each boost/ascent phase element. In written comments on a draft of this report, DOD agreed with our recommendations regarding the need for analysis of technical progress and operational issues to support key boost and ascent phase element decision points. DOD also agreed that independent life-cycle cost estimates may be needed to inform some key decision points, though not others.
However, DOD did not agree to prepare and periodically update full life-cycle cost estimates for each boost and ascent phase element to support key decision points, and report independently verified life cycle cost estimates with budget requests and FYDP funding plans. As discussed below, we continue to believe our recommendations have merit and that DOD should take the additional actions we have recommended to provide a rigorous analytical basis for future decisions, enhance the transparency of its analyses, and increase accountability for key decisions that could involve billions of dollars. The department’s comments are reprinted in their entirety in appendix II. DOD agreed with our recommendations that all DOD stakeholders be included in developing and analyzing operational issues, that specific information on technical progress be provided to explain current versus desired capabilities, and that the results of both analyses be used at key decision points. DOD stated in its comments that officials from MDA, the military departments, the combatant commanders, and other organizations are collaborating to develop an operational BMDS. Moreover, the annual BMDS Transition and Transfer Plan is coordinated with the service secretaries and other stakeholders and serves as a repository for plans, agreements, responsibilities, authorities, and issues. DOD also stated that key program decisions are and will continue to be informed by detailed technical analysis, including assessment of element technical maturity. However, DOD did not clearly explain how future decision making will be enhanced or how analyses of operational issues will be conducted if, as in the case of the Kinetic Energy Interceptor, DOD has not assigned a service responsibility for operating the element once it is developed. 
We continue to believe that DOD and congressional decision makers will need more complete information on support requirements at upcoming decision points as well as a clear comparison of current versus desired technical capabilities in deciding whether or not to continue developing one, two, or all three boost and ascent phase elements. Regarding our recommendations to improve cost estimates used to support key investment decisions, DOD partially concurred that independent life-cycle cost estimates may be required to inform some key decision points but stated that others may not require them. However, DOD did not agree that it should routinely prepare and periodically update a full life-cycle cost estimate for each boost and ascent phase element. DOD said that it continuously assesses all aspects of its development efforts and will direct an independent evaluation of life-cycle costs for boost and ascent phase elements if circumstances warrant or if MDA’s Director declares an element mature enough to provide a militarily useful capability. However, if, as DOD’s comments suggest, such costs are not assessed until circumstances warrant or MDA’s Director declares an element mature enough to provide a militarily useful capability, these costs may not be available early enough to help shape important program and investment decisions and consider trade-offs among elements. Moreover, DOD’s Operating and Support Cost Estimating Guide, published by the Cost Analysis Improvement Group, states that when the Cost Analysis Improvement Group assists the Office of the Secretary of Defense components in their review of program costs, one purpose is to determine whether a new system will be affordable to operate and support. Therefore, such analysis must be done early enough to provide cost data that will be considered in making a decision to field, produce, or transition an element.
We continue to believe our recommendation has merit because the development of life-cycle cost estimates that include potential operations and support costs would improve the information available to decision makers and increase accountability for key decisions that could involve billions of dollars at a time when DOD will likely face competing demands for resources. Finally, DOD did not agree to report independently verified life-cycle cost estimates along with budget requests and FYDP funding plans for each boost and ascent phase element. DOD stated that operations and support segments of the budget are organized by functional area rather than by weapon system and are dependent on operations and support concepts of the employing military department. DOD further stated that development of total life-cycle cost estimates for operational BMDS capabilities requires agreement between MDA and the lead military department on roles and responsibilities for fielded BMDS capabilities that transcend the annual transition planning cycle but serve as a basis for budget submittals. We recently reported that MDA enjoys flexibility in developing BMDS but this flexibility comes at the cost of transparency and accountability. One purpose of cost estimates is to support the budget process by providing estimates of the funding required to efficiently execute a program. Also, independent verification of cost estimates allows decision makers to gauge whether the program is executable. Thus, cost estimating is the basis for establishing and defending budgets and is at the heart of the affordability issue. This principle is stated in DOD procedures which specify that when cost results are presented to the Office of the Secretary of Defense Cost Analysis Improvement Group, the program office- developed life-cycle cost estimate should be compared with the FYDP and differences explained. 
Therefore, we continue to believe that our recommendation has merit because, without an independent cost estimate that can be compared to budget requests and FYDP funding plans, congressional decision makers may not have all the necessary information to assess the full extent of future resource requirements if the boost and ascent phase capabilities go forward, or assess the completeness of the cost estimates that are in the budget request and FYDP funding plans. We are sending copies of this report to the Secretary of Defense; the Commander, U.S. Strategic Command; the Director, Missile Defense Agency; the Chairman of the Joint Chiefs of Staff; and the Chiefs of Staff of the Army, Navy, and Air Force. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call either Janet St. Laurent at (202) 512-4402 or Paul Francis at (202) 512-2811. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix III. During this review, we focused on assessing the analytical approach the Missile Defense Agency (MDA) used to develop its March 2006 report to Congress, as well as the methodology for developing the cost estimates for each of the three Ballistic Missile Defense System (BMDS) boost and ascent phase elements. To assess the extent to which the Department of Defense (DOD) is developing technical and operational information useful for oversight and that will support decision making at key points, we compared the analytical approach DOD used to develop its March 2006 report with generally accepted research standards that are relevant for defense studies such as this, that define a sound and complete study, and that cover all phases of a study—design, execution, and presentation of results.
The following were our sources for these standards: GAO, Government Auditing Standards: 2003 Revision, GAO-03-673G (Washington, D.C.: June 2003); GAO, Designing Evaluations, GAO/PEMD-10.1.4 (Washington, D.C.); GAO, Dimensions of Quality, GAO/QTM-94-1 (Washington, D.C.); RAND Corporation, RAND Standards for High-Quality Research and Analysis (Santa Monica, Calif.: June 2004); Air Force, Office of Aerospace Studies, Analysts Handbook: On Understanding the Nature of Analysis (January 2000); Air Force, Office of Aerospace Studies, Air Force Analysis Handbook, A Guide for Performing Analysis Studies: For Analysis of Alternatives or Functional Solution Analysis (July 2004); Department of Defense, DOD Modeling and Simulation (M&S) Verification, Validation, Accreditation (VV&A), Instruction 5000.61 (Washington, D.C.: May 2003); Department of Defense, Data Collection, Development, and Management in Support of Strategic Analysis, Directive 8260.1 (Washington, D.C.: Dec. 2, 2003); and Department of Defense, Implementation of Data Collection, Development, and Management for Strategic Analyses, Instruction 8260.2 (Washington, D.C.: Jan. 21, 2003). For a more complete description of these standards and how we identified them, see GAO-06-938, appendix I. In applying these standards, we focused on the extent to which stakeholders were involved in study design and analysis as well as the extent to which assumptions were reasonable and their effects on results were clearly explained. We assessed MDA briefings that explained the modeling used for the technical analysis projecting the elements’ capabilities. To assess the basis for the assumed performance parameters used to model each element’s performance, we traced and verified a nonprobability sample of these parameters to their source documentation and concluded that they were generally supported.
To evaluate the DOD report’s characterization of threats, we reviewed Defense Intelligence Agency documents and discussed the type and capability of threats and expected BMDS capabilities with officials from the Office of the Secretary of Defense for Program Analysis and Evaluation and the Defense Intelligence Agency. In addition, to gain an understanding of the extent to which DOD has assessed warfighter quantities for the boost and ascent phase elements, the development of operational concepts, and operational implications of employing the boost and ascent phase elements at forward locations, we evaluated DOD and service guidance on assessing sites and support for new weapon systems and discussed these issues with officials from the Joint Staff; U.S. Army Headquarters and Space and Missile Defense Command; U.S. Strategic Command; the office of the Chief of Naval Operations Surface Warfare Directorate, Ballistic Missile Defense Division; Air Combat Command; and the office of the Secretary of the Air Force for Acquisition, Global Power Directorate. Finally, we discussed the results of all our analyses with officials in the Joint Staff; U.S. Strategic Command; the Army’s Space and Missile Defense Command; Office of the Secretary of Defense for Acquisition, Technology, and Logistics; Missile Defense Agency; the office of the Chief of Naval Operations Surface Warfare Directorate, Ballistic Missile Defense Division; the office of the Secretary of the Air Force for Acquisition Global Power Directorate; and Air Combat Command. 
To assess the extent to which DOD presented cost information to Congress that is complete and transparent, we first assessed how MDA developed its estimates. We then compared the method by which those estimates were prepared with key principles, compiled from various DOD and GAO sources, that describe how to develop accurate and reliable life-cycle cost estimates, in order to determine the estimates’ completeness and the extent to which DOD took steps to assess confidence in them. The following were our sources for compiling the cost criteria: Department of Defense, Assistant Secretary of Defense (Program Analysis and Evaluation), Cost Analysis Guidance and Procedures, DOD Manual 5000.4-M (December 1992); Department of Defense, Office of the Secretary of Defense Cost Analysis Improvement Group, Operating and Support Cost Estimating Guide (May 1992); Department of Defense, Defense Acquisition University, Defense Acquisition Guidebook (online at http://akss.dau.mil/dag); Department of Defense, Defense Acquisition University, Introduction to Cost Analysis (April 2006); Air Force, Office of Aerospace Studies, Air Force Analysis Handbook: A Guide for Performing Analysis Studies for Analysis of Alternatives or Functional Solution Analysis (July 2004); Air Force, Base Support and Expeditionary Site Planning, Air Force Instruction 10-404 (March 2004); and GAO, GAO Cost Assessment Guide (currently under development). In addition, we met with DOD officials from MDA, U.S. Strategic Command, the Joint Staff, and the Army, Navy, and Air Force to determine the extent to which they were involved in developing the cost estimates for the DOD report. 
Finally, we corroborated our methodology and results with officials from the Office of the Under Secretary of Defense, Program Analysis and Evaluation (Cost Analysis Improvement Group) and the Office of the Under Secretary of Defense (Comptroller); both offices agreed that our methodology for examining the report’s cost estimates was reasonable and consistent with key principles for developing accurate and reliable life-cycle cost estimates. We identified some data limitations with the cost estimates, which we discuss in this report. We provided a draft of this report to DOD for its review and incorporated its comments where appropriate. Our review was conducted between June 2006 and February 2007 in accordance with generally accepted government auditing standards. In addition to the individuals named above, Barbara H. Haynes and Gwendolyn R. Jaffe, Assistant Directors; Brenda M. Waterfield; Todd Dice; Jeffrey R. Hubbard; Nabajyoti Barkakati; Hai V. Tran; Ron La Due Lake; and Susan C. Ditto made key contributions to this report. Defense Transportation: Study Limitations Raise Questions about the Adequacy and Completeness of the Mobility Capabilities Study and Report. GAO-06-938. Washington, D.C.: September 20, 2006. Defense Management: Actions Needed to Improve Operational Planning and Visibility of Costs for Ballistic Missile Defense. GAO-06-473. Washington, D.C.: May 31, 2006. Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goal. GAO-06-327. Washington, D.C.: March 15, 2006. Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005. 
Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005. Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005. Future Years Defense Program: Actions Needed to Improve Transparency of DOD’s Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004. Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004. Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004. Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003. Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. Washington, D.C.: May 23, 2003. Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003. Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002. Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002. Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000. Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-121. Washington, D.C.: May 31, 2000.
The Department of Defense (DOD) has spent about $107 billion since the mid-1980s to develop a capability to destroy incoming ballistic missiles. DOD has set key decision points for deciding whether to further invest in capabilities to destroy missiles during the initial phases after launch. In March 2006, DOD issued a report on these capabilities in response to two mandates. To satisfy a direction from the House Appropriations Committee, GAO agreed to review the report. To assist Congress in evaluating DOD's report and preparing for future decisions, GAO studied the extent to which DOD (1) analyzed technical and operational issues and (2) presented complete cost information. To do so, GAO assessed the report's methodology, explanation of assumptions and their effects on results, and whether DOD followed key principles for developing life-cycle costs. The report DOD's Missile Defense Agency (MDA) submitted to Congress in March 2006 included some useful technical and operational information on boost and ascent phase capabilities by describing these elements, listing upcoming decision points, and discussing geographic areas where boost and ascent elements could intercept missiles shortly after launch. However, the information in the report has several limitations because the analysis did not involve key DOD stakeholders such as the services and combatant commands in preparing the report and did not clearly explain modeling assumptions and their effects on results as required by relevant research standards. MDA's report states that, at this time, some data is limited, and operational concepts that discuss operations from forward locations have not been fully vetted with the services and combatant commands. However, the report did not explain how each element's performance may change if developing technologies do not perform as expected. 
Also, it did not address the challenges in establishing bases at the locations cited or provide information on the quantity of each element required for various deployment periods. Moving forward, DOD has an opportunity to involve stakeholders in analyzing operational and technical issues so that senior DOD and congressional leaders will have more complete information on which to base upcoming program decisions following key tests in 2008 and 2009 for the Kinetic Energy Interceptor and Airborne Laser boost and ascent phase programs. MDA's report provided some cost estimates for developing and fielding boost and ascent phase capabilities, but these estimates have several limitations and will require refinement before they can serve as a basis for DOD and congressional decision makers to compare life-cycle costs for the elements. MDA's report states that there is uncertainty in estimating life-cycle costs because the elements are early in development. However, based on a comparison of the estimates in the report with key principles for developing life-cycle cost estimates, GAO found that MDA's estimates omitted some cost categories, such as the costs to establish and sustain operations at U.S. bases and at forward overseas operating locations. Also, MDA's estimates did not calculate costs based on realistic quantities of each element the combatant commanders or services would need to conduct the mission. Finally, MDA did not conduct a sensitivity analysis to assess the effect of key cost drivers on total costs. MDA officials stated that further analysis of the costs for each element, along with measures to assess confidence in them, would help to better inform DOD and congressional decision makers in making investment decisions following key tests in 2008 and 2009.
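The sensitivity analysis GAO found missing is conceptually simple: recompute the total estimate while varying one cost driver at a time and observe which driver moves the total most. A minimal sketch of that one-at-a-time approach, using entirely hypothetical cost figures (the categories and dollar amounts below are illustrative assumptions, not MDA's estimates):

```python
# Hypothetical life-cycle cost model (figures in millions of dollars;
# illustrative only -- these are not MDA's actual estimates).
baseline = {
    "development": 5_000,
    "procurement": 8_000,
    "operations_and_support": 6_000,
}

def total_cost(costs):
    return sum(costs.values())

# Vary each driver by +/-20 percent, holding the others fixed, and
# record the resulting swing in the total estimate.
swings = {}
for driver, value in baseline.items():
    low = dict(baseline, **{driver: value * 0.8})
    high = dict(baseline, **{driver: value * 1.2})
    swings[driver] = total_cost(high) - total_cost(low)

# The driver with the largest swing dominates cost risk.
print(max(swings, key=swings.get))  # → procurement
```

With these invented numbers, procurement dominates because it is the largest category; a real analysis would use the program's own cost element structure and documented uncertainty ranges for each driver.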
DOE has a large complex of sites around the country dedicated to supporting its missions: sites that were used to produce or process materials and components for nuclear weapons and laboratories that conduct research on nuclear weapons, defense issues, basic science, and other topics. These sites and laboratories are often located on government-owned property and facilities but are usually operated by organizations under contract to DOE, including universities or university groups, nonprofit organizations, or commercial entities. DOE contracting activities are governed by federal laws and regulations. Although federal laws generally require federal agencies to use competition in selecting a contractor, until the mid-1990s DOE contracts for the management and operation of its sites generally fit within an exception that allowed for the use of noncompetitive procedures. Those contracts were subject to a regulation that established noncompetitive extensions of contracts with incumbent contractors as the norm and permitted competition only when it appeared likely that the competition would result in improved cost or contractor performance and would not be contrary to the government’s best interests. In the mid-1990s, DOE began a series of contracting reforms to improve its contractors’ performance. A key element of that initiative has been the increased use of competition in selecting management and operating contractors for DOE sites. Although DOE initially focused the increased use of competition on its contracts with for-profit organizations, the laboratories operated by universities and other nonprofit organizations have not been completely insulated from these changes. Contract administration in DOE is carried out by the program offices, with guidance and direction from DOE’s Office of Procurement and Assistance Management. 
The management and operating contracts at DOE’s FFRDC laboratories are administered primarily by the National Nuclear Security Administration, a semi-autonomous agency within DOE, or by DOE’s Offices of Science, Environmental Management, or Nuclear Energy, Science, and Technology. DOE has had three main reasons for competing its FFRDC contracts instead of extending them noncompetitively: when the contractor operating the laboratory is a for-profit entity, when mission changes warrant a review of the capabilities of other potential contractors, or when the incumbent contractor’s performance is unsatisfactory. Without one of these conditions, DOE has generally extended these contracts without competition. DOE has considerable flexibility in deciding whether to compete a management and operating contract for one of its FFRDC laboratories. Although federal procurement law specifies a clear preference for competition in awarding government contracts, the Competition in Contracting Act of 1984 provided for certain conditions under which full and open competition is not required. One of these conditions occurs when awarding the contract to a particular source is necessary to establish or maintain an essential engineering, research, or development capability to be provided by an educational or other nonprofit institution or an FFRDC. The Federal Acquisition Regulation, which implements federal law, defines government-wide policy and requirements for FFRDCs, including the establishment, use, review, and termination of the FFRDC relationship. 
Under this regulation, (1) there must be a written agreement of sponsorship between the government and the FFRDC; (2) the sponsoring governmental agency must justify its use of the FFRDC; (3) before extending the agreement or contract with the FFRDC, the government agency must conduct a comprehensive review of the use of and need for the FFRDC; and (4) when the need for the FFRDC no longer exists, the agency may transfer sponsorship to another government agency or phase out the FFRDC. DOE’s 1996 acquisition guidance describes the procedures DOE program offices must follow to support any recommendation for a noncompetitive extension of any major site contract, including an FFRDC contract. This guidance indicates a clear preference for competition and requires DOE program offices to make a convincing case to the Secretary before a noncompetitive contract extension is allowed. This preference for competition is an outcome of DOE’s contract reform initiative, which concluded that DOE needed to expand the use of competition in awarding or renewing contracts. Among other things, the 1996 guidance specifies that, before a noncompetitive contract extension can occur, DOE must provide a certification that full and open competition is not in the best interest of the government; a detailed description of the incumbent contractor’s past performance; an outline of the principal issues and/or significant changes to be negotiated in the contract extension; and, in the case of FFRDCs, a showing of the continued need for the research and development center in accordance with criteria established in the Federal Acquisition Regulation. In November 2000, DOE’s Office of Procurement and Assistance Management issued additional guidance on how to evaluate an incumbent contractor’s past performance when deciding whether to extend or compete an existing contract. 
The guidance states that DOE contracting officers must review an incumbent contractor’s overall performance, including technical, administrative, and cost factors, and it outlines the information required to support the performance review and the expected composition of the evaluation team. When reporting the results of a performance evaluation, the team should address all significant areas of performance and highlight the incumbent contractor’s strengths and weaknesses. The evaluation team’s report serves as the basis for determining whether extending a contract is in the best interests of the government and is subject to review and concurrence by the responsible assistant secretary and DOE’s Procurement Executive. In September 2002, we reported that DOE had taken several steps to expand competition for its site management and operating FFRDC contracts. First, DOE reassessed which sites it should continue to designate as federally funded research and development centers. As a result of the reassessment, DOE removed 6 of the 22 sites from the FFRDC designation. DOE subsequently competed the contracts for two of these—the Knolls and Bettis Atomic Power Laboratories in New York and Pennsylvania. DOE restructured the other four contracts and, because of the more limited scope of activities, no longer regards them as major site contracts. The six site contracts that DOE has dropped from FFRDC status since 1992 are listed in table 1. Of the 16 remaining FFRDC contracts that DOE sponsors, DOE has competed 6 and plans to compete 2 more in 2004 and 2005. The 16 current FFRDC sites and the competitive status of the site contracts are shown in table 2. DOE’s decision to compete the six FFRDC sites shown in table 2 is consistent with the department’s overall policy on determining when competition is appropriate. 
For example, DOE competed the contract for the Brookhaven National Laboratory in 1997, after terminating the previous contract for unsatisfactory performance by the incumbent contractor. DOE competed the contract for the National Renewable Energy Laboratory in 1998 to incorporate additional private sector expertise into the management team for the site. This competition resulted from an expanded mission at the site to develop innovative renewable energy and energy efficient technologies and to incorporate these technologies into cost effective new products. For the remaining four FFRDC contracts that DOE has competed, the operator of the laboratory was a for-profit entity. When DOE has decided not to compete its FFRDC contracts but to extend them noncompetitively, its decisions have not been without controversy. For example, in 2001, DOE extended the management and operating contracts with the University of California for the Los Alamos and Lawrence Livermore National Laboratories. The University of California has operated these sites for 50 years or more and has been the sites’ only contractor. In recent years, we and others have documented significant problems with laboratory operations and management at these two laboratories—particularly in the areas of safeguards, security, and project management. Congressional committees and others have called for DOE to compete these contracts. Until recently, however, DOE did not compete them. Instead, DOE chose to address the performance problems using contract mechanisms, such as specific performance measures and interim performance assessments. In our September 2002 report, we commented that if the University of California did not make significant improvements in its performance, DOE may need to reconsider its decision not to compete the contracts. In April 2003, the Secretary of Energy decided to open the Los Alamos National Laboratory contract to competition when the current contract expires in September 2005. 
The Secretary made this decision based on “systemic management failures” that came to light in 2002. The management failures included inadequate controls over employees’ use of government credit cards, inadequate property controls and apparent theft of government property, and the firing of investigators attempting to identify the extent of management problems at the laboratory. DOE has also decided to restructure the FFRDC contracts supporting work at the Idaho National Laboratory. Currently, the laboratory has two FFRDC contracts: (1) a site management contract that includes activities ranging from waste cleanup to facility operations and (2) a contract to operate Argonne National Laboratory, which includes the Argonne West facility at the Idaho site. DOE plans to restructure the two contracts so that one focuses on the nuclear energy research mission and the other focuses on the cleanup mission at the site. DOE also plans to include the activities at Argonne West in the contract competition for the site’s research mission and to remove the Argonne West scope of work from DOE’s existing contract with the University of Chicago to operate Argonne National Laboratory. DOE believes this contract restructuring will help revitalize the nuclear energy research mission at the Idaho site and accelerate the environmental cleanup. DOE is continuing to examine the nature of its relationship with FFRDC contractors and the implications of that relationship for its contracting approach. DOE established FFRDCs in part to gain the benefits of having a long-term association with the research community beyond that available with a normal contractual relationship. However, more recent events are causing DOE to rethink its approach. As discussed above, DOE has been criticized for not competing laboratory contracts where the contractors are performing poorly. 
Furthermore, annual provisions in the Energy and Water Development Appropriations Acts since fiscal year 1998 have required DOE to compete the award and extension of management and operating contracts, including FFRDC contracts, unless the Secretary waives the requirement and notifies the Subcommittees on Energy and Water of the House Committee on Appropriations 60 days before contract award. Given these concerns, in 2003 the Secretary of Energy commissioned an independent panel to determine what criteria DOE should consider when deciding whether to extend or compete a laboratory management and operating contract. The panel is expected to help DOE determine, among other things, the conditions under which competition for laboratory contracts is appropriate, the appropriate criteria for deciding to compete or extend laboratory contracts, the benefits and disadvantages derived from competing laboratory contracts, and whether different standards and decision criteria should apply depending on whether the contractor is a nonprofit organization, an educational institution, an academic consortium, or a commercial entity. Competing contracts is one of several mechanisms DOE can use to address contractor performance problems or strengthen contract management. However, competing a contract does not ensure that contractor performance will improve. Other steps DOE has taken as part of its contract reform initiative to address contractor performance issues include changing the type of contract, such as from a cost-reimbursement to a fixed-price contract, or establishing or strengthening performance-based incentives in the contract. For example, in September 2002, we reported that DOE now requires performance-based contracts at all of its major sites. DOE has also increased over time the proportion of contractors’ fees tied to achieving those performance objectives. 
However, DOE has struggled to develop effective performance measures and continues to modify and test various performance measures that more directly link performance incentives to a site’s strategic objectives. Even these changes to DOE’s contracts do not by themselves ensure that contractor performance will improve. We have reported that DOE must also (1) effectively oversee its contractors’ activities in carrying out projects and (2) use appropriate outcome measures to assess overall results and apply lessons learned to continually improve its contracting practices. Effectively overseeing contractor activities involves, among other things, ensuring that appropriate and effective project management principles and practices are being used. Since June 1999, DOE has been working to implement recommendations by the National Research Council on how to improve project management at DOE. In 2003, the National Research Council reported that DOE has made progress in improving its management of projects but that effective management of projects was not fully in place. Regarding the use of outcome measures to assess overall results, in September 2002, we reported that DOE did not have outcome measures or data that could be used to assess the overall results of its contract reform initiatives. We recommended that DOE develop an approach to its reform initiatives, including its contracting and project management initiatives, that is more consistent with the best practices of high-performing organizations. DOE is still working to put a best-practices approach in place. As we reported in 2001, improving an organization’s performance can be difficult, especially in an organization like DOE, which has three main interrelated impediments to improvement—diverse missions, a confusing organizational structure, and a weak culture of accountability. 
However, DOE expects to spend hundreds of billions of dollars in future years on missions important to the well-being of the American people, such as ensuring the safety and reliability of our nuclear weapon stockpile. Therefore, the department has compelling reasons to ensure that it has in place an effective set of contracting and management practices and controls. Thank you, Madam Chairman and Members of the Subcommittee. This concludes my testimony. I would be pleased to respond to any questions that you may have. For further information on this testimony, please contact Ms. Robin Nazzaro at (202) 512-3841. Individuals making key contributions to this testimony included Carole Blackwell, Bob Crystal, Doreen Feldman, Molly Laster, Carol Shulman, Stan Stenersen, and Bill Swick. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOE is the largest civilian-contracting agency in the federal government and relies primarily on contractors to operate its sites and carry out its diverse missions. For fiscal year 2003, DOE will spend about 90 percent of its total annual budget, or $19.8 billion, on contracts, including $9.4 billion to operate 16 of its research laboratories (called federally funded research and development centers). Since 1990, GAO has identified DOE's contract management as high-risk for fraud, waste, abuse, and mismanagement. In 1994, DOE began reforming its contracting practices to, among other things, improve contractor performance and accountability. As part of that effort, DOE has at times used competition in awarding contracts to manage and operate its research laboratories. In September 2002, GAO reported on the status of contract reform efforts in DOE (Contract Reform: DOE Has Made Progress, but Actions Needed to Ensure Initiatives Have Improved Results, GAO-02-798, September 2002). This testimony discusses some of the findings in that report. GAO was asked to testify on DOE's rationale for deciding whether to compete a laboratory research contract, the extent to which DOE has competed these contracts, and the role of competition and other mechanisms in improving contractor performance. DOE has competed its research laboratory contracts in three main situations: when the contractor operating the laboratory is a for-profit entity, when mission changes warrant a review of the capabilities of other potential contractors, or when the incumbent contractor's performance is unsatisfactory. DOE guidance requires that, to extend a contract noncompetitively, the department must present a convincing case for doing so to the Secretary of Energy. Among other things, DOE must certify that competing the contract is not in the best interests of the government and must describe the incumbent contractor's past successful performance. 
Of the 16 research laboratory contracts currently in place, DOE has competed 6. The remaining 10 contracts have not been competed since the contractors began operating the sites--in some cases, since the 1940s. DOE recently decided to compete 2 of the 10 contracts that had never before been competed--contracts to operate the Los Alamos National Laboratory in New Mexico and the Argonne West Laboratory, located at the Idaho National Laboratory. DOE decided to compete the Los Alamos contract because of concerns about the contractor's performance, and to compete the Argonne West contract as part of an overall effort to separate the Idaho National Laboratory's nuclear energy research mission from the environmental cleanup mission at the Idaho site. Competing contracts is one of several mechanisms DOE can use to address contractor performance problems or strengthen contract management. However, just competing a contract does not ensure that contractor performance will improve. Other aspects of DOE's contract reform initiative intended to improve contractor performance included greater use of fixed-price contracts instead of cost-reimbursement contracts and establishing or strengthening performance-based incentives in existing contracts. In addition, GAO has reported that DOE must (1) effectively oversee its contractors' activities in carrying out projects and (2) use appropriate outcome measures to assess overall results and apply lessons learned to continually improve its contracting practices. GAO's recent evaluation of DOE's contract reform efforts indicates that DOE is still working to put these management practices and outcome measures in place.
Within VA, its OSDBU has overall responsibility for the SDVOSB and VOSB verification program. OSDBU’s Center for Verification and Evaluation (CVE) maintains the mandated database of verified SDVOSBs and VOSBs and is responsible for verification operations, such as application processing. CVE is led by a director and deputy director, and staff are organized into seven teams that assist with either a verification phase or a supporting function. A federal employee leads each team, and CVE contracts with several SDVOSBs to provide contractors that conduct verifications and supporting activities. As of January 2016, CVE had 16 federal employees and 156 contract staff (employed by five different SDVOSB contractors) verifying applications or filling supporting roles. CVE and its information technology are funded by VA’s Supply Fund, a fund that recovers its operating expenses through fees and markups on different products or services. CVE’s final obligations for fiscal year 2014 were $17.9 million, and its approved budget for fiscal year 2015 was $16.1 million, a decrease of about 10 percent ($1.8 million). VA developed eligibility requirements and a process to verify the ownership and control of firms seeking contracting preferences as SDVOSBs and VOSBs and to confirm the status of any owner who indicates a service-connected disability. 
To be eligible for verification under VA’s rules the small business concern (hereafter, firm) must be unconditionally owned and controlled by one or more eligible parties (veterans, service-disabled veterans, or surviving spouses); the owners of the firm must have good character (any small business owner or concern that has been debarred or suspended is ineligible); the applicant cannot knowingly make false statements in the application process; the firm and its eligible owners must not have significant financial obligations owed to the federal government; and the firm must not have been found ineligible due to a Small Business Administration protest decision. VA’s verification process consists of reviewing and analyzing a standardized set of documents submitted with each verification application. VA’s current verification process has four phases—initiation, examination, evaluation, and determination (see fig. 1). Denied applicant firms can request a reconsideration of the denial decision, but if the denial is upheld, must wait 6 months before submitting another application. VA maintains a database on its Vendor Information Pages (VIP) website of verified SDVOSBs and VOSBs. Once VA verifies a firm, the firm name appears with a verified logo in VIP. Verification is valid for 2 years, with the option of renewing for 2 additional years. To apply for renewal, verified firms must submit an application before expiration of their verification, answer questions about any key changes to their business structure, submit supporting business documents such as updated tax returns and any amended operating documents, and receive a full evaluation by CVE. Firms that receive a renewal have to undergo a full verification (reverification) after the 2-year renewal period expires (that is, firms must go through the full process every 4 years). In 2014, VA launched the MyVA Reorganization Plan in an effort to improve the efficiency and effectiveness of VA’s services to veterans. 
The plan’s strategy emphasizes improved service delivery, a veteran-centric culture, and an environment in which veteran perceptions are the indicator of VA’s success. MyVA extends to all aspects of the agency’s operations, including the verification program. In response to this organizational change, OSDBU is required to align its own strategy with MyVA and take steps to make its operations more customer service-oriented and veteran-centric. OSDBU has established in its strategic plan that its longer-term goals for the verification program are to transform the verification process to make it more veteran-friendly and to expand the program’s capacity to serve more veterans.

In our May 2010 and January 2013 reports, we found that VA faced challenges in establishing a program to verify firms on a timely and consistent basis. We made a number of recommendations to address these issues. Since that time, CVE has made significant improvements to the verification program—consistent with our recommendations—such as improving application processing times, quality controls, and communication with veterans. Based on CVE’s administrative data, application processing times have decreased by more than 50 percent, from an average of approximately 85 days in October 2012 to 42 days in fiscal year 2015 (see fig. 2). VA officials attributed the decreased processing time to a number of process improvements, such as moving from paper to electronic applications, developing detailed written work instructions for staff and contractors conducting verification activities, and enhancing resources available to applicants to make them more aware of program requirements. Officials told us that they have been generally meeting their regulatory processing goal of 60 days (from receipt of a complete application) and had 5 applications (out of 3,129) in fiscal year 2014 and 11 applications (out of 4,651) in fiscal year 2015 that did not meet this goal.
Our review of randomly selected application files corroborated that CVE generally met its processing goals, but the verification process can take longer from a veteran’s perspective. In calculating processing times, CVE excluded any time spent waiting for additional information it asked firms to supply, so the number of days it took an applicant to become verified typically was longer than CVE reported. To illustrate how long the process could take from the veteran’s perspective, we used the results of our case file review to estimate the average number of days it took veterans to receive an initial determination—including any time a firm took to prepare and submit additional requested documentation (time that VA excluded from its estimated processing times). Based on our analysis, it took firms that applied for verification from June through September 2014 an average of 68 days to receive an initial determination on verification eligibility. Additionally, firms can submit and withdraw their application multiple times if they need to correct issues or wish to apply at a later date, an option that can lengthen the verification process for some firms. Each time a firm resubmits an application, CVE resets the processing clock. Based on our case file review, we estimated that for 15 percent of applications submitted from June 2014 through September 2014, it took more than 4 months from the initial application date for firms to receive a determination from CVE. In 5 of the 96 applications we reviewed, the verification process took more than 6 months to complete. For 2 of the 5 applications, the process took 1 year or more to complete. In each of the 5 cases, the firm had submitted and withdrawn an application at least one time before submitting an application that received a final determination. VA officials said that weeks or months could pass between a firm’s withdrawal of its application and resubmission of a new application. 
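The gap between CVE's reported processing time and the elapsed time a veteran experiences comes down to excluding intervals spent waiting for requested documents. A minimal sketch of that accounting, in Python (the dates and wait intervals below are invented for illustration, chosen so the results echo the averages reported above):

```python
from datetime import date

def processing_days(received, determined, wait_intervals):
    """Return (elapsed, net) days between receipt and determination.

    'elapsed' is the span the veteran experiences; 'net' excludes the
    intervals spent waiting for requested documents, which is how CVE
    reportedly calculated its processing times.
    """
    elapsed = (determined - received).days
    waiting = sum((end - start).days for start, end in wait_intervals)
    return elapsed, elapsed - waiting

# Hypothetical firm: applies June 2, 2014, receives a determination
# August 9, 2014, and spends 26 days total responding to two
# document requests along the way.
elapsed, net = processing_days(
    date(2014, 6, 2), date(2014, 8, 9),
    [(date(2014, 6, 20), date(2014, 7, 4)),    # first document request
     (date(2014, 7, 15), date(2014, 7, 27))])  # second document request
# elapsed is 68 days; net is 42 days
```

Under this accounting, the same application shows a 42-day processing time in CVE's data but a 68-day wait from the veteran's perspective, and resetting the clock on each resubmission widens the gap further.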
Additionally, VA officials said that allowing applicants to withdraw and resubmit multiple applications was advantageous to the veteran (without the option, more veterans would receive denials and have to wait 6 months before submitting another application). CVE has continued to refine its quality management system since our May 2010 and January 2013 reports, including developing comprehensive work instructions, conducting site visits, and revising policies for investigating allegations of noncompliance. In particular, VA has made progress since our 2013 report. For example, at that time, VA was introducing significant changes to its procedures and operations, and we determined that our original focus on evaluating VA’s compliance with policies and procedures would be of limited value. Since our 2013 report CVE has put into place detailed written work instructions—which are used by CVE staff and contractors conducting verification activities—for each part of the verification process, and a quality manual that documents the requirements of its quality management system. According to CVE officials, the work instructions are updated on a regular basis. CVE officials said the agency received certification in 2015 that its revised quality management system was compliant with the International Organization for Standardization 9001:2008 quality management standards. CVE also has implemented an internal audit process and a continuous improvement process. As of September 2015, CVE had taken action on and closed 332 of 350 (95 percent) internal audit recommendations made since February 2013. Based on our case file review, we estimate that VA followed its policies and procedures for verifying SDVOSBs and VOSBs in 100 percent of applications that were approved or denied from June through September 2014. 
This included checking the veteran and disability status of the applicant, conducting research on the firm from publicly available information, and reviewing business documents to determine compliance with eligibility requirements (such as direct majority ownership by the veteran and experience of the veteran manager).

In our May 2010 report, we found that VA had a large backlog of firms awaiting site visits, including some high-risk firms. As a result, we recommended that VA develop and implement a plan to, among other things, conduct timely site visits to high-risk firms. VA reported in October 2013 that it had conducted more than 1,000 site visits in fiscal year 2013 and there was no longer a backlog of firms awaiting site visits, which was consistent with our recommendation. In fiscal years 2014 and 2015, CVE conducted 1,750 site visits to gather additional information about firms during its application review, to check the accuracy of verification teams’ decisions, and to help ensure that verified firms continue to comply with program regulations. Specifically, CVE conducted 1,144 site visits in fiscal year 2014 and 606 site visits in fiscal year 2015 on verified firms and firms applying for verification. Of the fiscal year 2015 site visits, CVE used risk-based determinations to select the vast majority of firms for visits (93.7 percent, or 568 firms). The remainder were randomly selected or chosen because they were in the application process. CVE officials said that firms identified through risk-based selection were chosen based on their risk to VA (for instance, if the firm had a VA contract). Officials reported low error and noncompliance rates identified through site visits, as described below. CVE officials said the site visits identified two instances in fiscal year 2015 and nine instances in fiscal year 2014 in which CVE evaluators mistakenly verified a firm (a less than 1 percent error rate).
CVE issued 25 cancellations to firms found out of compliance with program regulations (a 4 percent noncompliance rate) in fiscal year 2015 and 57 cancellations in fiscal year 2014 (a 5 percent noncompliance rate), an outcome that can result from changing characteristics of the firm after verification. These statistics, particularly the identification of a small number of instances in which CVE evaluators mistakenly verified a firm in the past 2 fiscal years, are consistent with the findings from our case file review that VA has been following its policies and procedures for verifying firms. CVE officials said they have been working with VA’s Office of Enterprise Risk Management to determine how many site visits should be conducted annually and how firms should be selected. VA spent about $3 million in fiscal year 2014 to conduct 1,144 site visits (about 16 percent of all verified firms). Officials stated that because of the cost of conducting the site visits (about $2,600 per visit)—and the low rate of noncompliance identified by site visit examiners (about 4.7 percent in fiscal year 2014)— the agency has been looking to reduce the number of site visits while maintaining the effectiveness of the site visit program. CVE also randomly selected a sample of 104 of 2,312 firms that received VA contracts from March 2014 through April 2015 for site visits, in order to obtain a statistical estimate of the noncompliance rate for the program. CVE officials said they have compiled data from visits performed in fiscal years 2014 through 2016, which they will use to identify risk factors that could affect compliance (such as a firm’s business type, industry type, and size of VA contract). VA officials said they plan to use these data to monitor emerging risk factors and adjust their selection of firms to receive site visits accordingly. CVE also monitors program compliance through investigations of suspicious firms identified through tips from external sources. 
CVE officials told us they received about 400 tips in 2014 about noncompliance with program regulations. CVE is responsible for investigating noncompliance with program requirements, and the VA OIG is responsible for investigating fraud. Officials said that they investigate credible allegations they receive by conducting public research to substantiate or disprove the tip, reviewing eligibility requirements related to the tip, and making a recommendation for corrective action, if necessary. We reviewed case files associated with 10 firms for which CVE received such allegations (between June 2014 and May 2015). These allegations were made by a veteran-owned small business advocate. We found that CVE investigated 6 of the 10 allegations. For the 4 cases that it did not review, CVE officials said the allegations were not specific enough to conduct an investigation. Additionally, officials said the allegations were not sent to the e-mail address that VA had established for this purpose, and thus may not have been routed to the correct individuals within VA charged with investigating allegations. According to a policy memorandum issued in October 2015, CVE previously accepted referrals only on its noncompliance referral form. However, the policy memorandum changed CVE’s policy regarding reviews of noncompliance referrals, and CVE began reviewing and responding to all allegations of noncompliance, whether received on the referral form or not. Additionally, CVE will notify the sender if the agency has enough information to investigate the allegation and request additional information, if necessary. Based on our review of the files for these 10 allegations, we found that CVE had not always documented that a noncompliance allegation had been received or that it was conducting a review of the firm’s eligibility based on the allegation.
CVE officials said they adopted a policy in July 2015 to upload all findings from investigations that result from noncompliance allegations to the case-management system so that CVE staff and contractors working on a firm’s verification have access to that information. VA has taken steps to improve communication with veterans since our January 2013 report, in which we discussed concerns of some veterans’ organizations about the verification program. VA implemented additional procedures to improve communication with verified firms about their verification status. Specifically, according to agency officials, VA sends e-mail reminders 120, 90, and 30 days before the expiration of a firm’s verification status; contacts verified firms by telephone 90 days before expiration of verification status; and notifies firms in writing 30 days before cancelling a firm’s verification. Additionally, VA communicates with applicants at several points in the verification process, such as to indicate that an application is complete, additional documents are needed, and a determination has been made. VA also e-mails applicants about issues that would result in a denial to offer them the ability to make changes to the application or to withdraw it prior to receiving the denial. Our case file review found VA generally followed its procedures to send reminder notices to applicants who needed to submit additional documentation, and to send notices that an application was complete. In our January 2013 report, we noted that VA had recognized that some applicants needed additional support and launched a Verification Counseling program in June 2012 to assist firms interested in becoming verified. Since that time, VA has continued to work with PTACs across the United States to provide verification assistance to veterans free of charge. VA officials said they have trained more than 300 procurement experts on the verification process so they can assist veterans applying for verification. 
In addition, we determined that VA provides other resources such as fact sheets, verification assistance briefs, and standard operating procedures for the verification program on its website. VA also provides a tool on its website that allows firms to obtain a list of documents required for an application depending on the type of company they own (such as a limited liability company or sole proprietorship). Moreover, CVE officials said that they have increased interaction with veterans seeking verification through web-based activities and outreach at conferences and other events. For instance, since our 2013 report, CVE began conducting monthly pre-application, reverification, and town hall webinars to provide information and assistance to verified firms and others interested in the verification process. VA officials also told us that they attend veteran small business conferences and other meetings in which they can conduct outreach for the verification program.

CVE has taken steps to collect veteran feedback about the program. The agency held two focus groups in July 2015 and began surveying firms in August 2015. Several veterans who participated in the focus groups commented on the lack of clarity of VA’s communications. For example, one veteran said that there was a lack of clarity about the documents that should be submitted with an application. Two veterans noted issues with the timing and redundancy of document requests during the application process. Two others said certain program rules lacked clarity. Although survey results cannot be generalized to all veterans going through the verification process, the contractor administering the surveys reported in September 2015 that feedback from firms that had been through the verification process appeared positive, particularly with respect to improvements in the verification process.
CVE officials stated they hope the surveys will allow them to more systematically collect feedback from veterans on different aspects of the program, including the pre-application experience, the verification process from submission to determination, and site visit examinations.

All the counselors and representatives of veterans’ service organizations with whom we spoke noted that VA has improved its verification process, but most had suggestions for continued improvement. One PTAC representative noted that VA could better leverage organizations such as PTACs, veteran small business outreach centers, and state-level Departments of Veterans Affairs to disseminate information. Another PTAC representative noted that PTACs and other veterans’ groups could host events for CVE to interact with veterans to increase awareness of the verification program and the services PTACs provide. Additionally, this PTAC representative said CVE could do a better job referring veterans to PTACs that could assist them with the verification process. He added that having PTACs assist veterans before they started the application process would reduce applicant errors, frustration, and processing times. VA officials stated that they make PTAC referrals through the help desk and monthly preverification webinars and participate in outreach events when possible. Officials also noted that travel dollars to attend and conduct outreach at external events are limited, and that they try to use the monthly webinars to interact with and inform veterans.

In addition, VA has taken steps to address external stakeholders’ concerns about the information available on VA’s website and the clarity of communications to applicants. Three of the four counselors noted that resources on VA’s website for the verification program can be difficult to locate. Representatives from one of two service organizations said VA does not provide adequate documentation of program standards for applicants.
Officials said that they have been working with OSDBU to redesign the website to make documents, such as verification assistance briefs and the tool to identify required documents, easier to locate. Additionally, the counselors we interviewed noted that VA’s communications to applicants were at times unclear. VA’s procedures require that determination letters to applicants include specific reasons for denial or potential denial. All four of the counselors we interviewed also stated that VA’s determination letters to applicants could be clearer and that they include regulatory compliance language that could be difficult for some applicants to understand. We used an automated readability test on five determination letters from our case file review (written from August through December 2014). According to the test, the reading level required to understand the letters ranged from a college sophomore to a college senior level, so readability might present a challenge for some applicants. VA officials maintained that the inclusion of regulatory language in the determination letters was necessary, but acknowledged the language could present readability challenges. Moreover, the officials noted that they encourage veterans to obtain free assistance with their applications from VA certified verification counselors. We also observed several instances in our review in which a letter initially stated that documents were due on one date, and then later stated the applicant should disregard the initial statement and that documents were due on a different, earlier date. VA officials said this was due to a glitch in the system that generated the letters and a software update issued in May 2015 resolved the issue. Officials also said document requests are now generated using a new template, instead of the older template that previously caused issues. 
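The report does not specify which automated readability test was used; one common choice is the Flesch-Kincaid grade level, which estimates the U.S. school grade needed to understand a text from its sentence, word, and syllable counts. A minimal sketch of that kind of measurement (the syllable counter is a rough heuristic, not an exact count):

```python
import re

def count_syllables(word):
    # Heuristic: count groups of consecutive vowels, drop a silent
    # trailing "e", and give every word at least one syllable.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Applied to a determination letter, a score near 14–16 corresponds to the college sophomore-to-senior range reported above; dense regulatory phrasing drives the score up through longer sentences and longer words.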
VA has efforts under way (discussed below) to replace the program’s case management system, which generates the templates for document requests. Although VA has improved application processing times and communication with veterans since our 2013 report, it has recognized that the verification process can be made more cost-effective and veteran-friendly. OSDBU developed a strategic plan for 2014–2018 that included longer-term goals for the program, such as making the verification process more veteran-friendly. Additionally, OSDBU officials told us that in 2015 the Supply Fund Board asked OSDBU to design a veteran-centered process that highlights customer service and maximizes cost efficiency. OSDBU and CVE have various efforts under way—such as restructuring the verification and reverification processes and revising program regulations—intended to improve veterans’ experience with the program and provide cost efficiencies. These changes are also intended to better align the verification program with MyVA—VA’s organization-wide transformation initiative.

In August 2015, VA began to test (VA officials refer to it as a pilot) a restructured verification process that gives veterans a case manager who serves as a point of contact throughout the process and allows veterans to communicate directly with the individual processing their application. According to OSDBU officials, the new process is expected to provide cost savings to the agency by reducing the amount of time spent reviewing applications and addressing veterans’ questions. For example, VA officials told us that under the current four-phase process, eligibility issues are not identified and communicated to the veteran until the later stages of the process, which could be 35–40 days after application submission.
Under the new process, veterans would be interviewed shortly after submitting their application, which would allow VA to identify issues up front and avoid multiple reviews of applications for firms not meeting program requirements. CVE officials also stated that changes to the process are intended to help improve communication with applicants. For example, under the current process, applicants may correspond with several different customer service representatives, each of whom would have to read through case notes before addressing the veteran’s question. Under the new process, a case manager, who serves as the point of contact for the veteran and coordinates staff evaluating the application, would be familiar with the application status and any issues that arose throughout the process. Additionally, by interviewing the applicant at the beginning of the application process, CVE may be able to reduce some of the problems caused by applicants not understanding the written communications sent by CVE. Key differences between the pilot and current processes as described by CVE officials are shown in table 1. CVE officials stated that as of November 2015, 369 applications had been reviewed using the new process and that CVE had processed about 15 percent to 20 percent of applications under the new process. CVE officials said they plan to process about half of all applications using the new process by April 2016. CVE officials stated that they plan to fully transition to a new process by September 2016. The new process will be based on the approach used during the pilot with some adjustments, as determined by CVE’s evaluation of the pilot procedures. The officials stated that they developed a number of metrics to inform adjustments to the pilot, which include length of application processing times, number of approvals, denials, and withdrawals, and number of applications processed by case-management teams each month. 
Additionally, to help evaluate the verification procedures used during the pilot, CVE officials stated that VA held one focus group in October 2015 and since September 2015 has been surveying firms that participated in the new process to obtain feedback. CVE has made adjustments to the process in response to metrics and feedback. For example, CVE decided to have case analysts process the majority of applications, which allowed assessors more time to process complex applications and provide guidance to the case analysts and other assessors, when needed. CVE’s Acting Director and Deputy Director were responsible for evaluating data obtained from these metrics, according to CVE officials. In addition, CVE officials stated that information obtained from the metrics was intended to provide them with a better sense of how many teams would be needed to conduct verification and what types of skills verification staff and contractors would need for the work. CVE officials stated that VA used current staffing resources for the pilot and transitioned more staff from the current verification process to the pilot as the pilot expanded. Officials said that VA has developed a range of preliminary cost estimates based on different staffing levels used during the pilot. The agency plans to develop a cost estimate for average and total processing costs after it has transitioned staff to the new process. As of March 2016, CVE officials stated they had completed their evaluation of the pilot and selected a process they determined was cost-effective and efficient and did not compromise quality.

VA also recently revised its reverification process to improve efficiency and customer service. According to CVE officials, reverification used to require nearly the same effort from CVE staff, contractors, and veterans as the full initial verification process.
Under a new process CVE implemented in October 2015, CVE contractors review documentation from the veteran’s previous application, determine what additional documentation is needed to reverify the firm, and conduct an initial interview with the veteran to identify and provide information on what documents need to be updated for reverification. CVE officials said these changes are intended to improve veterans’ understanding of the requirements for reverification, enhance the veteran-centric nature of the program, and further reduce application processing times—by reducing the number of documents the veteran uploads at the beginning of the process and therefore the time CVE contractors and staff spend reverifying applications. According to CVE, it is too soon to determine if these changes have achieved the desired effect, but officials intend to evaluate the new procedures by developing survey questions on customer satisfaction and reviewing processing times for reverification applications.

In addition to changes to the verification and reverification processes, VA has continued to make revisions to its program regulations to streamline the process and provide clarity for veterans. In our 2013 report, we found that VA had begun modifying program regulations to extend the verification period from 1 to 2 years and published an interim final rule to this effect in late June 2012. More recently, in November 2015, VA published in the Federal Register a proposed rule—on which it had been working since 2013—to clarify and simplify eligibility criteria. VA obtained input and recommendations from veteran stakeholder groups to inform the new regulations. According to officials, the proposed revisions are intended to simplify the criteria used to determine veteran ownership and control, and account for common business practices that might otherwise lead to a denial decision under the current regulation.
For example, in addressing the challenges associated with one current regulatory provision, CVE officials said that the proposed rule simplifies the ownership criteria by modifying the term “unconditional” to allow businesses to include rights of first refusal and “tag-along” rights in their operating documents and still participate in the program. Similarly, officials said that VA plans to allow minority owners to vote on extraordinary business decisions such as closing or selling the business. Officials stated that the revisions to the regulation were not expected to provide cost and resource efficiencies, or affect the new verification process being developed through the pilot. Comments on the proposed rule were due January 5, 2016, and officials expect to finalize the proposed rule in mid-2016. Officials said they received about 100 comments and that they were still evaluating and categorizing the comments as of January 21, 2016. When the rule changes are finalized, VA plans to train staff on the revised regulation and update its review sheet used during application review.

We previously found that leadership and staff vacancies contributed to the slow pace of implementation of the verification program. In our May 2010 report, we found that leadership in OSDBU was lacking because the position of Executive Director remained vacant from January 2009 until January 2010. Furthermore, one of two leadership positions directly below the Executive Director had been vacant since October 2008, and the other position had been filled by an Acting Director. In 2010, we recommended that VA develop and implement a plan that ensures a more thorough and effective verification program and addresses actions and milestone dates for filling vacant positions within OSDBU, including the leadership positions. By July 2011, VA had filled the vacant leadership positions but has since experienced turnover in the leadership positions for the verification program.
Specifically, CVE has had four different directors since 2011, including two acting directors in 2015. In addition, the position of Deputy Director was vacant from March 2014 to September 2015. VA posted a job announcement for the CVE director position in October 2015 that closed in November 2015. OSDBU’s Executive Director told us in December 2015 that VA planned to complete the recruiting process and hire a permanent director by January 2016. VA hired a permanent director in February 2016.

In addition to taking steps to acquire permanent leadership for the program, VA has made changes to CVE’s organizational structure to align staffing resources with agency needs and reflect the new pilot verification process. CVE officials said that under the new organizational structure, a federal employee heads each of three teams—critical path, risk and compliance, and verification support—and oversees the contractors who conduct the majority of the work for the program. The critical path team is responsible for verifying applications. All federal staff responsible for overseeing the verification process and the contractors reviewing applications have been moved to this team, according to CVE officials. The risk and compliance team oversees CVE’s site visit program, investigates allegations of noncompliance, and makes referrals to VA’s Debarment and Suspension Committee and OIG. The verification support team staffs the help desk and supports the processing of communications to applicants. CVE officials also said that the quality assurance team has been moved from within CVE to the larger OSDBU organization and will continue to support CVE internal audits as well as provide support to OSDBU. OSDBU officials indicated that VA made this change to share more responsibilities between CVE and OSDBU and reduce program operating costs.
According to CVE officials, VA has developed position descriptions for the pilot verification process and used data from the pilot to determine optimal staffing levels needed to process applications under the new procedures. As discussed earlier, CVE relies heavily on contractor support to conduct its verification activities and currently has 16 federal employees and 156 contractors working on the verification program. CVE officials stated that VA plans to continue using contractor staff to conduct verification activities because the use of such staff gives VA the flexibility to adjust staffing levels as needed to respond to changes in the number of verification applications received. According to VA officials, and generally consistent with findings from our case file review, federal employees make the final determination for verification decisions. Officials said that based on information obtained through the pilot, VA has determined that it needs a total of 10 federal reviewers on the critical path team; however, it still was determining its needs for contractor staff as of November 2015. According to OSDBU officials, VA has contracts in place for the verification program staff through April 2016 and plans to start the process for securing new contracts in January 2016. We reported in January 2013 that VA moved from using a paper-based verification application to an electronic application when it implemented a new case-management system in 2011, consistent with our prior recommendation. However, we identified significant shortcomings in VA’s data system, including that the system did not collect important data and had limited reporting and workflow management capabilities. We recommended that VA integrate its efforts to modify or replace the program’s data system with a broader strategic planning effort to ensure that the system addressed the program’s short- and long-term needs. 
VA concurred with the recommendation and, consistent with it, included a strategic performance goal to improve information technology capabilities in its strategic plan. VA awarded a contract for an enhanced data system; however, VA has since faced delays in developing the system. VA’s Office of Information and Technology hired a contractor in September 2013 to develop the new system, but VA cancelled the contract in October 2014 due to poor contractor performance. VA paid the contractor about $871,000 for work that had been performed before the contract’s termination, and received several planning documents from the contractor that helped inform its current acquisition effort, according to CVE officials. VA officials told us that CVE and VA’s Office of Information and Technology then began working with a contractor in September 2014 to identify data systems in existing federal programs that VA could use to build its own system. In February 2015, officials told us that they planned to award a contract for development of the new system in the first quarter of fiscal year 2016 based on the contractor’s research to identify existing federal systems. VA was unable to award a contract based on this effort. Specifically, the contractor identified two programs that had the potential to be used in developing a new case-management system for VA. However, VA officials conducted additional research on each program’s system and found technical or administrative reasons that prevented VA from partnering with any of the identified programs to use their existing systems to develop a new case-management system for CVE. VA’s more recent efforts to develop a new case-management system also have faced setbacks. 
In May 2015, VA established an internal working group consisting of staff from OSDBU and VA’s Office of Information and Technology and Office of Acquisition and Logistics to plan and manage the development of the new system, based on the planning documents provided by the contractor and in accordance with internal guidelines for managing new information technology projects. In July 2015, VA officials told us they decided to develop a pilot system through another existing contract. Officials said they intended to use the pilot system to provide VA with the opportunity to evaluate the capabilities of a new system without the time and expense of putting an entire new system in place. VA developed specifications and other planning documents for the pilot system, and planned to develop and evaluate the system from November 2015 through January 2016. If the pilot was successful, VA had planned to issue a solicitation and award a contract for development of a full system by April 2016 and fully transition to the new system by September 2016. However, in November 2015, OSDBU officials told us that the Supply Fund board had requested that OSDBU develop and provide a business case for the new system. VA officials stated that as a result, they have revised the timeline for developing the new system, and expect to begin the pilot in January 2016. If the pilot is successful, VA plans to fully transition to a new system in early 2017. In the meantime, VA continues to use a data system that does not collect important data and has limited reporting and workflow management capabilities. In both our May 2010 and January 2013 reports, we found that VA faced challenges in developing and implementing plans for establishing an effective verification program. 
Specifically, in our May 2010 report, we found that VA did not have a plan or specific time frames for implementing a thorough and effective verification program, including filling vacant staff positions; improving verification procedures to ensure greater completeness, accuracy, and consistency of verification reviews; and conducting timely site visits at high-risk firms. We recommended that VA develop and implement such a plan. VA took actions that addressed the specific actions referenced in our recommendation, obviating the need for a plan to accomplish these actions. In our January 2013 report, we found that VA faced challenges in its strategic planning efforts and recommended that VA refine and implement a strategic plan with outcome-oriented longer-term goals and performance measures. Subsequently, VA developed a strategic plan for fiscal years 2014–2018 that, consistent with our recommendation, described OSDBU’s vision, mission, and performance goals for its programs, including the verification program. Additionally, VA has developed a high-level operating plan for fiscal year 2016 that identifies key actions needed to meet OSDBU’s objectives, such as transitioning to a new verification process, completing revisions to verification regulations, and developing a new case-management system. But VA’s operating plan is not comprehensive and does not include an integrated schedule with specific actions and milestone dates for achieving program changes or discuss how the efforts described in the previous sections might be coordinated. For example, the operational plan states that VA needs to have in place an information technology system that allows both case-management and client-relationship management. However, it does not describe the specific actions that VA must take to acquire such a system or how system development will be integrated with other ongoing efforts, such as adoption of a new verification process. 
Instead, the operating plan states that a main element to achieving the goal is to develop a case-management system. In another example, the operating plan states that VA must have a verification process that provides a positive veteran experience and that a main element to achieving that goal is to transition to the new process. But the plan does not explain the specific actions necessary to fully transition to the new process or the timetable for the transition. In addition, VA does not have a process in place to update the verification program operating plan on a timely basis to ensure that it reflects current initiatives, conditions, and long-term goals. We previously reported that useful practices and lessons learned from organizational transformation show that a transformation, such as CVE’s efforts to make the verification process more efficient and veteran-friendly, is a substantial commitment that could take years before it is completed, and therefore must be carefully and closely managed. As a result, setting implementation goals and a timeline to build momentum and show progress from day one is essential for organizations. Tracking implementation goals and establishing a timeline can help pinpoint performance shortfalls and gaps and identify the need for midcourse corrections. According to OSDBU officials, each OSDBU program team (such as CVE) is to develop an action plan for its specific program that includes resource needs and expected timelines. The Executive Director stated that OSDBU intends to review and incorporate each action plan into the operating plan for all of OSDBU, so that OSDBU has a detailed plan for the verification and other OSDBU programs. CVE officials told us that they had initially delayed finalizing CVE’s plan because they were waiting to evaluate the results of the verification pilot, and intended to finalize the plan with actions and milestone dates in December 2015. 
As of January 29, 2016, CVE had yet to complete an operating plan for the verification program. Without a plan that contains actions and milestone dates for the multiple efforts CVE has been undertaking, VA may face difficulties in managing these efforts to completion. Furthermore, without engaging in effective operational planning moving forward, VA lacks assurance that it can achieve its longer-term objectives for the verification program. VA has made significant improvements in its verification program since our 2013 report, including in application processing time, quality control, and communication with veteran applicants. Nonetheless, the agency continues to face challenges in making the program less resource-intensive and more efficient and veteran-friendly. VA has acknowledged these issues and has begun to transform the program to address them. VA’s efforts to restructure the verification process, realign organizational structure, and acquire a new case-management system represent significant efforts for CVE’s team of 16 federal employees. But these efforts began and have continued in the absence of a detailed operational plan to guide and integrate them with VA’s strategic objectives. And the agency has faced challenges with planning—both strategic and operational—as we found in previous reviews of the program dating to 2010. By putting such a plan in place to guide the program’s transformation, VA could obtain reasonable assurance that these efforts will be properly sequenced, managed to completion, and help VA accomplish its longer-term goals. Moreover, having a detailed plan to accomplish multiple ongoing efforts is critical given the repeated delays in VA’s efforts to acquire a new case-management system and the lack of continuity in CVE leadership. Such a plan is also critical in the context of VA’s efforts to carry out an organizational transformation and its long-term goals to expand the program’s capacity to serve more veterans. 
Without a policy to review and update the operating plan to reflect current conditions and priorities, VA would continue to be at risk for delays in implementing its initiatives and achieving its long-term goals. To improve the management and oversight of VA’s SDVOSB and VOSB verification program, we recommend that the Secretary of Veterans Affairs direct OSDBU to complete its fiscal year 2016 operating plan and include an integrated schedule that addresses key implementation goals and the actions and milestone dates for achieving them, such as the coordination of the redesign of the verification process and the design, acquisition, and deployment of a new case-management system; and establish a process to review and update the operating plan for the verification program on a timely basis to address new VA initiatives, other changing conditions, and long-term goals. We provided a draft of this report to the Department of Veterans Affairs for comment. In its written comments, VA agreed with our recommendations. Specifically, VA said that it completed a draft of the fiscal year 2016 operating plan, which received input from the major OSDBU program areas (including the verification program). A final version of the plan, which will incorporate key implementation goals and milestones, will be released by March 31, 2016. VA also stated that OSDBU has implemented a process to review and update the operating plan for the verification program and all other program areas on a timely basis. According to VA, the process will allow OSDBU to address VA initiatives and programmatic contributions linked to realizing those initiatives and articulating how other changing conditions and long-term goals will be managed. VA also provided timeframes for completing its planned actions. VA provided technical comments and updates on the status of some of its ongoing initiatives, which we incorporated as appropriate. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report assesses (1) the Department of Veterans Affairs’ (VA) progress in establishing a timely and consistent verification program and improving communication with veterans, and (2) the steps VA has taken to identify and address program challenges and longer-term goals. We and others previously identified verification program challenges, including application processing timelines and quality controls, communication with veterans, the case-management system, and strategic and operational planning. To assess VA’s progress in establishing a timely and consistent verification program, we reviewed relevant statutes, regulations, and procedures for the verification program. We also reviewed the verification program quality manual to understand what the Center for Verification and Evaluation’s (CVE) quality management standards were and how CVE ensured quality of the program, and reviewed internal audit reports from December 2014 to March 2015. We interviewed officials in VA’s Office of Small and Disadvantaged Business Utilization (OSDBU) and CVE about their policies and procedures for processing applications, for quality control, for investigating allegations of noncompliance with program regulations, and about changes to the verification program since our last report (2013). 
We conducted a case file review to determine the extent to which VA had followed its policies, procedures, and quality controls for processing applications, as well as the extent to which VA had processed applications within regulatory time limits. We selected a stratified probability sample of all verification applications (initial and renewal applications) submitted to VA between June and September 2014. We chose this time period so we could obtain a sample of applications for which CVE had completed its application review and so that the applications in our sample would have been processed under a recent and similar set of verification procedures. The applications were stratified by two groups of decision outcomes: (1) withdrawals and (2) approvals and denials. The sample was designed to make generalizable estimates for approvals and denials only, with the sample of withdrawals providing nongeneralizable examples. We used simple random sampling methods to select 96 of the 1,306 applications submitted to VA during this time frame that resulted in an approval or denial. We developed and pre-tested an instrument to collect data from the case files on application processing time frames, completeness of VA’s review, and documentation of key decisions and rationales. We assessed the reliability of these data by interviewing VA officials knowledgeable about the data, reviewing documentation related to the data systems, and checking the data for illogical values or obvious errors and found them to be sufficiently reliable for estimating population values. The sample allowed us to estimate the proportion of cases for which VA consistently followed its policies and procedures and met regulatory time frames for reviewing applications for the verification program for applications submitted from June through September 2014. Because we used simple random sampling methods to select approvals and denials, our estimated proportions did not require weighting. 
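The sampling design described above, stratifying applications by decision outcome and then drawing a simple random sample from the approvals-and-denials stratum, can be sketched as follows. The application records, outcomes, and seed below are invented placeholders, not VA data.

```python
import random

# Illustrative sketch of the sampling design: stratify by decision
# outcome, then draw a simple random sample of 96 from the stratum of
# approvals and denials. All records here are invented placeholders.
random.seed(2014)  # fixed seed so the draw is reproducible

applications = [
    {"id": i, "outcome": random.choice(["approved", "denied", "withdrawn"])}
    for i in range(1500)
]

# Stratum used for generalizable estimates: approvals and denials only.
approvals_denials = [a for a in applications if a["outcome"] != "withdrawn"]

# random.sample draws without replacement, so no case appears twice.
sample = random.sample(approvals_denials, k=96)

# With simple random sampling within a single stratum, every sampled
# case carries equal weight, so estimated proportions need no reweighting.
print(len(sample))
```

Because every case in the stratum has the same selection probability, the sample proportion is itself the estimate of the population proportion, which is why the report notes that no weighting was required.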
We used hypergeometric methods to estimate 95 percent confidence intervals, which account for the small size of the sample and population, estimated proportions near 0 or 100, and a nonignorable sampling fraction. We reviewed administrative program data obtained from VA on application processing times for fiscal years 2012 through 2015 and compared those numbers to our findings from the case file review. We also reviewed administrative data from VA on the number and type of site visits conducted in fiscal years 2014 and 2015. We assessed the reliability of CVE’s administrative program data by interviewing VA officials and reviewing documentation related to VA’s data system, and we found the data to be sufficiently reliable for describing VA’s reported processing statistics. We also reviewed the files of 10 verified businesses for which CVE received allegations of noncompliance with program regulations to identify the steps CVE took to investigate the allegations. To select these files, we reviewed more than 100 allegations of noncompliance with program regulations sent to VA between June 2014 and May 2015 by a veteran-owned small business advocate. We catalogued tips that were relevant to the verification program (tips that dealt with potential problems in the verification process versus tips that dealt with VA contracting issues) and selected 10 firms for which to conduct a more in-depth review of how VA reviewed or addressed the alleged fraud. We purposefully selected these firms to obtain variation in the type of allegation (e.g., that the firm was not owned by a service-disabled veteran but instead was a “pass-through” or that the firm did not meet the criteria for a small business), whether an official protest was filed, and results (whether the firm remained in the verification database). We reviewed the files associated with each of these firms and collected information about the steps VA took to respond to allegations of noncompliance and status protests. 
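The hypergeometric interval estimation mentioned at the start of this section is typically done by inverting the exact hypergeometric test, a finite-population analogue of the Clopper-Pearson binomial interval. The sketch below illustrates that general technique under stated assumptions; it is not GAO's actual computation, and the example inputs (90 "compliant" cases out of 96 sampled from 1,306) are illustrative.

```python
from math import comb

def tail_ge(x, N, K, n):
    """P(X >= x) when x successes are counted in a sample of n drawn
    without replacement from N items, K of which are successes."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(x, min(n, K) + 1)) / comb(N, n)

def tail_le(x, N, K, n):
    """P(X <= x) for the same hypergeometric draw."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(0, x + 1)) / comb(N, n)

def hypergeometric_ci(N, n, k, alpha=0.05):
    """Exact two-sided CI for the population proportion, obtained by
    inverting the hypergeometric test. Unlike a binomial interval, this
    reflects the finite population and nonignorable sampling fraction."""
    feasible = range(k, N - (n - k) + 1)  # K values consistent with the data
    lower = next(K for K in feasible if tail_ge(k, N, K, n) > alpha / 2)
    upper = next(K for K in reversed(feasible) if tail_le(k, N, K, n) > alpha / 2)
    return lower / N, upper / N

# Illustrative inputs only: 90 of 96 sampled cases "compliant" out of 1,306.
lo, hi = hypergeometric_ci(N=1306, n=96, k=90)
print(round(lo, 3), round(hi, 3))
```

Because the interval is built from exact tail probabilities rather than a normal approximation, it remains valid for estimated proportions near 0 or 100 percent, the situation the report calls out.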
To assess VA’s progress in addressing communication challenges with veterans, we reviewed VA’s work instructions for processes that involve communicating with veterans. We also reviewed VA’s website to identify guidance available for applicants, such as an applicant guide, frequently asked questions, verification assistance briefs, and an online self-assessment tool for prospective applicants. We interviewed VA officials to determine what procedures they have in place to communicate with applicants and verified businesses and obtain feedback from these entities on VA’s verification process and communication efforts. We interviewed representatives of two veteran service organizations and four Procurement Technical Assistance Centers (PTACs)—which provide verification assistance to veterans—to obtain information about their opinions of VA’s procedures to verify applications and communicate with veterans. We selected the veterans groups based on our prior work in the area and the PTACs based on recommendations from the Defense Logistics Agency and the Association for Procurement Technical Assistance Centers and to obtain geographic diversity. There are 98 PTACs in the United States with more than 300 local offices. We interviewed counselors at the Florida, Missouri, Nevada, and Washington PTACs. We also analyzed data collected through the case file review to determine the extent to which VA complied with its procedures for communicating with applicants and verified businesses. We reviewed methods VA used to collect feedback from program participants, such as documents relating to VA’s help desk customer satisfaction surveys, survey instruments approved by the Office of Management and Budget, results from surveys that had been deployed as of October 2015, and results from focus groups conducted to identify areas for improvement in the verification process. 
We also assessed the readability of five determination letters that VA sent to veteran applicants from August through December 2014 to corroborate testimonial evidence from veterans groups indicating that these letters can be difficult to understand. We selected these letters by taking the first five cases from our case file review sample that had been issued either a predetermination or denial decision. To determine the reading level at which determination notices were written, we used an automated readability tool, the Flesch-Kincaid Grade-Level test, which rates text on a U.S. school grade level. To assess the steps VA has taken to identify and address verification program challenges and longer-term goals, we reviewed prior work on the verification program that we and VA’s Office of Inspector General (OIG) conducted. We also reviewed VA’s planning, organizational, and budget documents, such as OSDBU’s 2014-2018 Strategic Plan, OSDBU’s 2016 Operating Plan, CVE organizational charts, and CVE’s budget for fiscal years 2014 and 2015. We compared these planning documents with useful practices and lessons learned on organizational transformations, as identified in previous GAO work. We interviewed VA officials to determine what steps, if any, they have taken to address issues we or the OIG identified, and identify and address other challenges associated with the verification program. We also discussed VA’s plans to restructure the verification process with officials from VA’s OSDBU and CVE. We also reviewed documents pertaining to the new verification process, such as process maps and work instructions for the pilot verification process, the reverification policy issued in October 2015, and the revised program regulations posted for public comment in November 2015. We reviewed VA’s human capital and staff management practices, including CVE’s organizational structure, leadership, and reliance on contractors to conduct verification activities. 
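The Flesch-Kincaid Grade-Level test mentioned above combines average sentence length and average syllables per word into a U.S. school grade. A minimal sketch of the published formula follows; the vowel-group syllable counter is a rough heuristic (production readability tools use more elaborate rules), and the sample sentences are invented.

```python
import re

def count_syllables(word):
    # Rough heuristic: each run of consecutive vowels counts as one
    # syllable, with a minimum of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Very simple text can score below grade 0.
print(round(flesch_kincaid_grade("The cat sat on the mat."), 2))  # -1.45
```

Letters written in dense regulatory language score high on this scale because both terms of the formula grow: sentences run long and the words in them carry many syllables.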
We analyzed VA data on the number of contractors and federal staff working on the verification program and compared these numbers to those found in our 2010 and 2013 reports. We used testimonial evidence obtained during interviews with contractors and agency officials to describe the responsibilities of contractor and federal staff. We also reviewed VA’s policy and planning documents and position descriptions to describe the changes it plans to make to its organizational structure. To assess the progress VA has made in modifying or replacing its information technology system for case management, we reviewed project planning documents that included a list of technical requirements, as well as other contract documents. We interviewed VA officials about their plans for developing the new case-management system, including plans for issuing a solicitation, fully transitioning to the new system, and ensuring the system supports the pilot verification process. We conducted this performance audit from December 2014 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Harry Medina (Assistant Director); Katie Boggs (Analyst-in-Charge), Meghana Acharya, Mark Bird, Charlene Calhoon, Pamela Davidson, Kathleen Donovan, Beth Faraguna, John McGrail, Barbara Roesmann, and Jeff Tessin made key contributions to this report.
In fiscal year 2014, VA made contract awards totaling $4.0 billion to veteran-owned small businesses, including $3.6 billion to service-disabled veteran-owned small businesses. VA must verify the ownership, control, and status of firms seeking such preferences. GAO found in January 2013 (GAO-13-95) that VA faced challenges verifying firms on a timely and consistent basis, communicating with veterans, enhancing information technology systems, and developing and implementing long-term strategic plans. GAO assessed (1) VA's progress in establishing a timely and consistent verification program and improving communication with veterans, and (2) the steps VA has taken to identify and address program challenges and longer-term goals. GAO reviewed VA's verification procedures and strategic plan, reviewed a generalizable random sample of 96 verification applications, and interviewed VA officials and representatives from two veterans' organizations selected from prior work and four verification assistance counselors selected to obtain geographic representation. Since GAO's 2013 report, the Department of Veterans Affairs (VA) took significant steps to improve how it verifies and communicates with veteran-owned small businesses, consistent with several of GAO's previous recommendations. VA reported that due to process improvements, it reduced average application processing times by more than 50 percent—from 85 days in 2012 to 41 in 2015. VA reported that it generally met its regulatory goals for application processing, and GAO's review of randomly selected application files generally corroborated this statement. VA refined the program's quality controls and implemented an internal audit process. 
Veterans' organizations and verification counselors with whom GAO spoke noted improvements in VA's communications and interactions with veterans, although three of the four counselors suggested the program's website could be clearer and all four said the same of the agency's letters to veterans. In response, VA officials said they have been redesigning the website to make documents easier to locate. Officials also said the regulatory language in the letters was necessary and they encourage veterans to obtain free assistance with their applications from VA-certified counselors. VA has been undertaking multiple efforts to address continuing verification program challenges (such as an outdated case-management system) and long-term goals (making processes more veteran-friendly). However, the agency has not had a comprehensive operational plan for managing these efforts to completion. GAO previously recommended that VA's Office of Small and Disadvantaged Business Utilization (OSDBU), which oversees the verification program, establish a strategic plan for the program and integrate efforts to replace an outdated case-management system with agency strategic planning. VA developed a strategic plan for 2014–2018 that included longer-term goals for the program, such as making the verification process more veteran-friendly. In August 2015, VA began piloting a restructured process that allows veterans to communicate directly with the individual processing their application. VA plans to fully transition to the new process by September 2016. VA recently hired a new director for the program, which has had four directors since 2011, including two acting directors in 2015. VA also has continued its efforts to replace the outdated case-management system for the program, but has faced delays due to contractor performance and funding issues. As a result, VA officials do not anticipate the replacement system will be in place until early 2017. 
While VA has developed a high-level operating plan for OSDBU, the plan does not integrate schedules or specify actions and milestone dates for achieving the multiple changes under way or discuss how to integrate the efforts. VA officials told GAO they were working on developing a detailed operating plan but were waiting to evaluate preliminary results of the verification pilot. GAO's work on organizational transformations states that organizations should set implementation goals and develop timelines to show progress. A detailed plan to guide multiple ongoing efforts is critical given repeated delays in VA's efforts to acquire a new case-management system and the lack of continuity in the program's leadership. Once such an operating plan is developed, it also will be important to update it on a timely basis. Otherwise, VA would continue to be at risk for delays in implementing its initiatives and achieving its long-term goals. GAO recommends that VA: (1) complete its fiscal year 2016 operating plan and include an integrated schedule addressing key actions for the verification program and milestone dates for achieving them, and (2) establish a process to review and update the operating plan to address changing conditions. VA agreed with these recommendations.
the labor force—23 million workers—is employed by companies with federal contracts and subcontracts, according to fiscal year 1996 estimates of the Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP). Federal law and an executive order place greater responsibilities on federal contractors, compared with other employers, in some areas of work place activity. For example, federal contractors must comply with Executive Order 11246, which requires a contractor to develop an affirmative action program detailing the steps that the contractor will take and has already taken to ensure equal employment opportunity for all workers, regardless of race, color, religion, sex, or national origin. In addition, the Service Contract Act and the Davis-Bacon Act require the payment of the area’s prevailing wages and benefits on federal contracts in the service and construction industries, respectively. Furthermore, Labor may debar contractors in the construction industry under the Contract Work Hours and Safety Standards Act for “repeated willful or grossly negligent” violations of safety and health standards issued under the OSH Act. Under federal procurement regulations, agencies may deny an award of a contract, or debar or suspend a contractor, for a variety of reasons, including safety and health compliance problems. Before awarding a contract, an agency must make a positive finding that the bidder is “responsible,” as detailed in federal procurement regulations. Also, federal agencies can debar or suspend companies for any “cause of so serious or compelling a nature that it affects the present responsibility of a government contractor or subcontractor.” In determining whether a federal contractor is “responsible,” agency contracting officials can consider compliance with applicable laws and regulations, which could include the OSH Act or the NLRA. 
At its monthly meetings, the committee also helps interpret regulations on debarment or suspension issued by OMB and determines which agency will take lead responsibility for any actions taken against a federal contractor. Most firms—regardless of whether they are federal contractors—must comply with safety and health standards issued under the OSH Act of 1970, which was enacted “to assure safe and healthful working conditions for working men and women.” The Secretary of Labor established OSHA to carry out a number of responsibilities, including developing and enforcing safety and health standards; educating workers and employers about work place hazards; and establishing responsibilities and rights for both employers and employees for the achievement of better safety and health conditions. The NLRA provides the basic framework governing private sector labor-management relations. This act, passed in 1935, created an independent agency, NLRB, to administer and enforce the act. Among other duties, NLRB is responsible for preventing and remedying violations of the act—unfair labor practices (ULP) committed by employers or unions. NLRB’s functions are divided between its general counsel and a five-member Board. The Office of the General Counsel investigates and prosecutes ULP charges, while the Board reviews all cases decided by administrative law judges in NLRB’s 33 regions. OSHA maintains an Integrated Management Information System (IMIS), which contains detailed information on all OSHA inspections conducted by federal OSHA or the state-operated programs. It includes detailed data on penalty amounts, the severity of the violation, the standards violated, whether fatalities or injuries occurred, and other information. 
In using OSHA’s IMIS database, which includes many thousands of inspections annually, we focused only on those inspections resulting in significant penalties—proposed penalties of at least $15,000—regardless of the amount of the actual penalty recorded when the inspection was closed. Using this definition, inspections involving significant penalties represented only 3 percent of the 72,950 inspections closed in fiscal year 1994. We matched the NLRB case data and OSHA’s IMIS inspection data with the database of federal contractors maintained by GSA, the Federal Procurement Data System (FPDS). FPDS tracks firms awarded contracts of $25,000 or more in federal funding for products and services. Although it is difficult to estimate the number of federal contractors, GSA reports there may be as many as 60,000 federal contractors because this is the number of unique corporate identification codes in FPDS. FPDS contains a variety of information, including the contractor’s name and location, agency awarding the contract, principal place of contract performance, and the dollar amount of the contract awarded. FPDS does not contain information on contractors’ safety and health or labor relations practices. Because the lack of corporate identification numbers in both the NLRB and OSHA databases precluded our use of an automated matching procedure, we had to manually match these data. We manually compared each firm name from the Executive Secretary and IMIS databases with the larger FPDS file, identifying those firms that were identical or nearly identical. After this manual match, to ensure that the firms listed in the Executive Secretary or IMIS databases were the same as those listed in the FPDS, we telephoned the firm at the location where the OSHA or labor violation occurred. We then verified that the firm name and location identified in the Executive Secretary or IMIS database and the FPDS database referred to the same firm. 
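The manual matching step described above, comparing firm names across databases and flagging identical or nearly identical names for telephone verification, resembles what is now commonly automated with string-similarity scoring. A hypothetical sketch using Python's standard library follows; the 0.9 threshold and the firm names are invented for illustration.

```python
from difflib import SequenceMatcher

def near_matches(name, contractor_names, threshold=0.9):
    """Return contractor names identical or nearly identical to `name`,
    as candidates for manual (e.g., telephone) verification."""
    target = name.lower().strip()
    hits = []
    for candidate in contractor_names:
        score = SequenceMatcher(None, target, candidate.lower().strip()).ratio()
        if score >= threshold:
            hits.append((candidate, round(score, 2)))
    return sorted(hits, key=lambda pair: -pair[1])

# Invented example names, not actual contractors.
fpds_names = ["Acme Manufacturing, Inc.", "Zenith Builders Corp", "ACME MFG INC"]
print(near_matches("Acme Manufacturing Inc", fpds_names))
```

Note that the abbreviated variant "ACME MFG INC" falls below the similarity threshold, which illustrates why purely name-based matching understates the number of matches and why the report's follow-up verification by telephone was necessary.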
In earlier work, we raised concerns with regard to OSHA’s characterization of information on its corporatewide or individual facility settlement agreements negotiated with employers. In response, we recommended to the Secretary of Labor that the quality of the IMIS data be assessed as they relate to settlement agreements and that steps be taken to correct any detected weaknesses. Since that time, OSHA has taken some action to address these concerns, including introducing a special code in IMIS to identify administrative actions taken under corporatewide settlements, adding a field to flag atypical cases, and adding information to the IMIS “report explanation” field about the treatment of penalties in certain cases. It should be noted that our approach probably understated, in a number of ways, the number of federal contractors violating these laws. In some cases, firms had gone out of business or relocated, the location information in the IMIS or FPDS databases was inaccurate or incomplete, or the employer refused or was unable to confirm or deny key information over the telephone, preventing us from verifying a potential match. In other instances, firms may have split, merged, changed names, or operated subsidiaries, so that different names appeared among the three databases and matches escaped our detection. We also focused our analysis on violations committed by primary contractors. We did not determine the extent to which contract dollars were awarded by primary contractors to subcontractors with violations, or the degree to which the contractors we identified were also subcontractors on other awards. Concerning IMIS in particular, many employers we identified as violators in OSHA’s database were construction companies. Because construction work sites are temporary, the employer could not always confirm whether the work site still existed or when the inspection was conducted. 
Regarding the NLRB data, many firms were involved in cases that were withdrawn or settled, and our analysis does not include such cases in assessing violations committed, remedies ordered, and the number of workers affected. A total of 80 firms that violated the NLRA received over $23 billion from more than 4,400 federal contracts during fiscal year 1993—about 13 percent of total fiscal year 1993 contract dollars. These contract dollars were concentrated among only a few violators, with six such firms receiving about $21 billion. Firms receiving more than $500 million each in contracts received about 90 percent of these federal contract dollars. About 73 percent of the $23 billion was awarded by the Department of Defense, with NASA and the Department of Energy as the other major sources of these contract moneys. About two-thirds of these dollars went to manufacturing firms. Most of the violators were large firms. Of the 77 violators for which data on workforce size were available, 35 had more than 10,000 employees. Of the 64 violators for which sales data were available, 32 had over $1 billion in sales, and 10 firms had over $10 billion in sales. In 35 of the 88 NLRB-related cases we identified as involving the 80 federal contractors, the Board required firms to reinstate workers or restore workers to their prior positions as the remedy for violations. In 32 of these 35 cases, firms were ordered to reinstate unlawfully fired workers. In 6 of the 35 cases, firms were ordered to restore workers who had been subjected to another kind of unfavorable change in job status. An unfavorable change in job status could mean that the worker, for example, was suspended, demoted, transferred, or not hired in the first place because of activities for or association with a union. Some cases involved both an order to reinstate fired workers and an order to restore workers who were subjected to another kind of unfavorable change in job status. 
These remedies affected a sizable number of specific individual workers and a far larger number of workers who were part of a particular bargaining unit. The Board ordered firms to reinstate or restore 761 individual workers to their appropriate job positions. In 44 of the 88 cases, the Board ordered firms to pay back wages to 801 workers; in 28 cases, it ordered firms to restore benefits to 462 workers. In most cases, back wages or benefits were owed to individual workers who had been illegally fired or subjected to another kind of unfavorable change in job status. However, in 12 cases, wages or benefits were ordered restored to all workers in the bargaining unit because the firm failed to pay wages or benefits as required under its contract with the union. Some cases involved both a remedy for individual workers owed back wages or benefits and the same type of remedy for the entire bargaining unit. In 24 cases, firms were ordered to stop threatening employees with the loss of their jobs or the shutdown of the firm. In 33 cases, firms were ordered to stop other kinds of coercive conduct, such as interrogating employees and circulating lists of employees associated with the union. To facilitate the bargaining of a contract, the Board ordered firms to provide information to the union in 16 cases. We found 261 federal contractors that were the corporate parents of facilities that had received proposed penalties of $15,000 or more from OSHA for violations of safety and health regulations in fiscal year 1994. These contractors received $38 billion in contract dollars, about 22 percent of the $176 billion in federal contracts, valued at $25,000 or more, awarded that year. About 75 percent of the total dollar value of these contracts was awarded by the Department of Defense, with large amounts of contract dollars also awarded by the Department of Energy and NASA. 
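The contract-dollar shares reported above follow directly from the dollar totals stated in the text; a quick arithmetic check (all inputs are the rounded figures given in this testimony, and the implied fiscal year 1993 total is an inference from them, not a figure the testimony states):

```python
# FY 1993: $23 billion awarded to NLRA violators was about 13 percent
# of all contract dollars, implying total FY 1993 awards of roughly
# $177 billion.
fy93_violator_dollars = 23e9
fy93_share = 0.13
fy93_total = fy93_violator_dollars / fy93_share
print(f"Implied FY 1993 total: ${fy93_total / 1e9:.0f} billion")

# FY 1994: $38 billion awarded to OSHA violators out of $176 billion in
# federal contracts valued at $25,000 or more.
fy94_share = 38e9 / 176e9
print(f"FY 1994 violator share: {fy94_share:.0%}")
```

The second calculation reproduces the "about 22 percent" figure cited for fiscal year 1994.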
About 5 percent of these 261 federal contractors (12 firms) each received more than $500 million in federal contracts in fiscal year 1994. In total, this group received over 60 percent of the $38 billion awarded to violators. A majority of the 345 work sites (56 percent) penalized for safety and health violations were engaged in manufacturing. An examination of the violators’ standard industrial classification codes showed that many of these work sites manufactured paper, food, or primary and fabricated metals. Although most violators were engaged in manufacturing, a significant percentage of work sites (18 percent) were engaged in construction. Many (68 percent) of the work sites where the violations occurred were relatively small, employing 500 or fewer workers. Just over 15 percent of the work sites employed 25 or fewer workers. Although few work sites employed large numbers of workers, the federal contractors that owned these work sites often employed large numbers of workers in multiple facilities across the country. Most of the violations were classified as serious—situations that could result in death or serious physical harm to workers—or willful (69 percent)—situations in which the employer intentionally and knowingly committed a violation. At work sites of 50 federal contractors, a total of 35 fatalities and 85 injuries occurred. Most of the violations (72 percent) were of general industry standards, including failure to protect workers from electrical hazards and injuries resulting from inadequate machine guarding. OSHA compliance officers assessed a total of $24 million in proposed penalties and $10.9 million in actual penalties for all violations in these 345 inspections. In some cases, these federal contractors were assessed proposed penalties that were especially high. In 8 percent of the 345 inspections, the contractor was assessed a proposed penalty of $100,000 or more. 
In addition, some of these 261 federal contractors were assessed a significant penalty more than once in fiscal year 1994 for violations that occurred at different work sites owned by, or associated with, the same parent company. Finally, a search for prior inspections of the same work sites that had been assessed significant penalties for safety and health violations revealed a number of additional inspections of parent company facilities, including some additional significant penalty inspections. We did not evaluate the general safety and health inspection records of federal contractors. However, some of the contractors who were assessed significant penalties also operated facilities with exemplary health and safety records, while others maintained facilities that participated in OSHA-sanctioned voluntary compliance programs that suggest a proactive approach to work place safety and health. Management Services Department, which is developing its comprehensive database on federal contractors. In our report on federal contractors who violated OSHA regulations, we concluded that contracting agencies could use information on a contractor’s safety and health record during the process of awarding federal contracts as a vehicle to encourage contractors to undertake remedial measures to improve work place conditions. However, agency contracting authorities have not done so, at least partially because they did not have the information to determine which federal contractors were violating safety and health regulations, even when those contractors had been assessed significant penalties for willful or repeated violations. Thus, we recommended that the Secretary of Labor direct OSHA to develop and implement policies, in consultation with GSA and the Interagency Committee on Debarment and Suspension, on how safety and health records of federal contractors could be shared to better inform agency awarding and debarring officials in their decisions. 
We noted, however, that OSHA should work closely with the contracting agencies to help them interpret and use inspection information effectively. We also recommended that OSHA consider the appropriateness of extending these policies and procedures to cover companies receiving other forms of federal assistance, such as loans and grants. Finally, we urged OSHA to develop procedures on how it will consider a company’s status as a federal contractor in setting its own priorities for inspecting work sites. At this time, OSHA officials have stated that the agency has held discussions with members of the Interagency Committee on Debarment and Suspension regarding possible policies and procedures for sharing safety and health records, although no final decisions have yet been made. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or Members of the Subcommittee may have. Project Labor Agreements: The Extent of Their Use and Related Information (GAO/GGD-98-82, May 29, 1998). Beverly Enterprises, Inc. (GAO/HEHS-97-145R, June 3, 1997). OSHA’s Inspection Database (GAO/HEHS-97-43R, Dec. 30, 1996). Occupational Safety and Health: Violations of Safety and Health Regulations by Federal Contractors (GAO/HEHS-96-157, Aug. 23, 1996). Worker Protection: Federal Contractors and Violations of Labor Law (GAO/HEHS-96-8, Oct. 24, 1995). National Labor Relations Board: Action Needed to Improve Case-Processing Time at Headquarters (GAO/HRD-91-29, Jan. 7, 1991). 
GAO discussed federal contractors' noncompliance with federal labor laws, focusing on: (1) federal contractors' noncompliance with the National Labor Relations Act (NLRA) during fiscal years (FY) 1993 and 1994 and with the Occupational Safety and Health (OSH) Act during FY 1994; and (2) the status of recommendations GAO made to the National Labor Relations Board (NLRB) and to the Occupational Safety and Health Administration (OSHA) in those reports involving the use of information on federal contractors to enhance workplace health and safety and workers' rights to bargain collectively. GAO noted that: (1) federal contracts worth many billions of dollars had been awarded to employers who had been found in violation of NLRA or the safety and health regulations issued under the OSH Act; (2) the 80 firms that had violated the NLRA during FY 1993 and FY 1994 had received $23 billion, or about 13 percent of the total dollar value of federal contracts awarded during FY 1993; (3) there were 261 federal contractors that had work sites at which OSHA had assessed proposed penalties of $15,000 or more for noncompliance with health and safety regulations; (4) these firms received $38 billion in federal contracts awarded during FY 1994; (5) both of these totals probably underestimate the number of violators and contract dollars received during both years; (6) in both cases, most of the contract dollars were awarded to violators that were large firms, and a majority of these firms were in manufacturing industries; (7) about 75 percent of the dollar value of these awards came from the Department of Defense, although many dollars also came from the Department of Energy and the National Aeronautics and Space Administration; (8) although agencies can consider employers' labor-management relations and health and safety records in the awarding of contracts under current procurement regulations, agency officials responsible for awarding contracts and debarring contractors from receiving 
future contracts have generally not taken actions against contractors with safety and health or labor-relations law violations; (9) this is at least partially because they do not have adequate information to determine those federal contractors in noncompliance with these laws, even when the contractors have been assessed severe penalties or remedies under the respective acts; (10) in its reports, GAO made recommendations to both NLRB and OSHA that could enhance the effectiveness of their enforcement through the use of information on federal contractors; and (11) although NLRB has taken action in implementing GAO's recommendations, OSHA has not yet done so.